Designing for Transparency in Machine Learning
How can we ethically design products using machine learning, and create spaces for user agency?
How do we ethically use AI in product design? In this talk, I will outline design methodologies, suggestions, use cases, and real-life examples for creating more transparent and equitable AI in product design. This talk will cover examples ranging from Facebook and algorithmic timelines, to civic technology using AI, to predictive policing, examining both missteps in AI and great use cases of AI in product design. The future is going to be weird, but it doesn't have to be broken, especially for design that touches the lives of everyday users.
Caroline Sinders is a machine learning design researcher and artist. For the past few years, she has been focusing on the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital, conversational spaces. Caroline is the founder of Convocation Design + Research, a design and research agency focusing on the intersections of machine learning, user research, designing for public good, and solving difficult communication problems. As a designer and researcher, she has worked with groups such as Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation, among others.
Caroline has held fellowships with the Yerba Buena Center for the Arts, Eyebeam, the Studio for Creative Inquiry, and the International Center of Photography. Her work has been featured at MoMA PS1, the Houston Center for Contemporary Art, Slate, Quartz, and the Channels Biennale, among others. Caroline holds a master's degree from New York University's Interactive Telecommunications Program.