2-8 February, 2020 — Milan, Italy

Designing Transparent AI

Artificial intelligence (AI) is found in many of today’s complex systems (e.g., nuclear power plants, cockpits), and has also become pervasive in many widely used consumer products (e.g., navigation systems, digital assistants).

Although AI offers a promising opportunity to improve many areas of our lives, an issue with AI-based systems is that they are less than 100% reliable (e.g., Yampolskiy & Spellchecker, 2016). Systems with less than perfect reliability can still provide significant benefit to users (e.g., Wickens & Dixon, 2007), but this depends on the user’s ability to understand the system’s processes and limitations. Further, providing information to users about the reliability of the AI helps calibrate the appropriate level of user trust (e.g., Lee & See, 2004). Unfortunately, many of today’s AI systems are essentially “black boxes” that are difficult to interpret, understand, and trust (Steinruecken et al., 2018). Designers should therefore strive to make various elements of system performance transparent to users. In this talk, we will discuss and provide examples of the following techniques that can be adopted to make AI algorithms and their vulnerabilities more transparent to users:

Outline

• Set appropriate expectations about what the AI can and cannot do
• Provide accurate and timely feedback
• Present contextual information about automation reliability
• Display information about the source of an automation failure and what users should do in a given situation
• Group and isolate less reliable or vulnerable AI components/functions so that user trust of reliable components/functions does not erode
• Provide information about how the AI algorithm works
• Provide information about the uncertainty of the AI’s predictions and decisions (a brief sketch follows this list)
• Balance AI transparency and information overload wisely
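
To make the uncertainty-communication point above concrete, here is a minimal, hypothetical sketch in Python (not from the talk itself). The Prediction class, the confidence thresholds, and the user-facing wording are all assumptions for illustration; the idea is simply that the interface phrases an AI output according to how confident the model is, rather than presenting every prediction as fact.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g., "pedestrian ahead"
    confidence: float  # model's estimated probability, 0.0 to 1.0

def present_to_user(prediction: Prediction) -> str:
    """Phrase the AI's output so its reliability is visible to the user."""
    if prediction.confidence >= 0.90:
        return (f"The system is confident this is: {prediction.label} "
                f"({prediction.confidence:.0%}).")
    if prediction.confidence >= 0.60:
        return (f"The system suggests: {prediction.label} "
                f"({prediction.confidence:.0%}). Please verify before acting on it.")
    # Low confidence: set expectations instead of guessing silently.
    return ("The system is not confident enough to make a recommendation here. "
            "Manual review is recommended.")

if __name__ == "__main__":
    print(present_to_user(Prediction(label="pedestrian ahead", confidence=0.97)))
    print(present_to_user(Prediction(label="pedestrian ahead", confidence=0.55)))

A design choice like this supports trust calibration: users learn when the system tends to be right and when to fall back on their own judgment, instead of either over-trusting or abandoning the AI after a single surprising failure.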

Arathi Sethumadhavan

I am a Senior Design Research Manager at Microsoft, where I lead a team that conducts user research on AI and ethics. Prior to joining Microsoft, I worked for several years at Medtronic designing medical technology that saved lives, including the world’s smallest pacemaker. I also hold an Adjunct Faculty position at the College of Design at the University of Minnesota. I am the Department Editor of Ergonomics in Design and write articles on topics ranging from human-robot interaction and in-vehicle technologies to design and sustainability. I have delivered more than 45 talks and am currently editing a book on Designing Technology for Health. I have won awards from the American Psychological Association, the American Psychological Foundation, and the Human Factors and Ergonomics Society. I have an M.A. and Ph.D. in Experimental Psychology with a specialization in Human Factors from Texas Tech University, and a Bachelor’s degree in Computer Science from India.

Dr. Samuel J. Levulis

I am a Design Researcher at Microsoft, where I conduct user research on AI and ethics. I received my M.A. and Ph.D. in Experimental Psychology (with an emphasis in Human Factors) from Texas Tech University, where I conducted research in diverse areas such as driver attentional and perceptual processes, best practices in usability testing techniques, and display design for supervisory control settings. I have twelve peer-reviewed publications and have delivered fifteen conference presentations.
