Abstract
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap: the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss conceptual implications of the framework, share practical considerations in its operationalization, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
- Leena Aarikka-Stenroos and Elina Jaakkola. 2012. Value co-creation in knowledge intensive business services: A dyadic perspective on the joint problem solving process. Industrial marketing management, Vol. 41, 1 (2012), 15--26.Google Scholar
- Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y Lim, and Mohan Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 582.Google ScholarDigital Library
- Mark S Ackerman. 2000. The intellectual challenge of CSCW: the gap between social requirements and technical feasibility. Human-Computer Interaction (2000).Google Scholar
- Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, Vol. 6 (2018), 52138--52160.Google ScholarCross Ref
- P Agre. 1997 a. Toward a critical technical practice: Lessons learned in trying to reform AI in Bowker. Social science, technical systems, and cooperative work: Beyond the Great Divide (1997).Google Scholar
- Philip E Agre. 1997 b. Computation and human experience. Cambridge University Press.Google Scholar
- Ahmed Alqaraawi, Martin Schuessler, Philipp Weiß, Enrico Costanza, and Nadia Berthouze. 2020. Evaluating saliency map explanations for convolutional neural networks: a user study. In Proceedings of the 25th International Conference on Intelligent User Interfaces.Google ScholarDigital Library
- Steven Alter. 2010. Design spaces for sociotechnical systems. (2010).Google Scholar
- Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. Guidelines for human-AI interaction. In Proceedings of the 2019 chi conference on human factors in computing systems. 1--13.Google ScholarDigital Library
- McKane Andrus, Sarah Dean, Thomas Krendl Gilbert, Nathan Lambert, and Tom Zick. 2020. AI development for the public interest: From abstraction traps to sociotechnical risks. In 2020 IEEE International Symposium on Technology and Society (ISTAS). IEEE.Google ScholarDigital Library
- Matthew Arnold, Rachel KE Bellamy, Michael Hind, Stephanie Houde, Sameep Mehta, Aleksandra Mojsilović, Ravi Nair, K Natesan Ramamurthy, Alexandra Olteanu, David Piorkowski, et al. 2019. FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research and Development, Vol. 63, 4/5 (2019), 6--1.Google ScholarCross Ref
- Alejandro Barredo Arrieta, Natalia D'iaz-Rodr'iguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garc'ia, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, Vol. 58 (2020).Google Scholar
- Vijay Arya, Rachel KE Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C Hoffman, Stephanie Houde, Q Vera Liao, Ronny Luss, Aleksandra Mojsilović, et al. 2019. One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques. arXiv preprint arXiv:1909.03012 (2019).Google Scholar
- Maryam Ashoori and Justin D Weisz. 2019. In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv preprint arXiv:1912.02675 (2019).Google Scholar
- American Psychiatric Association et al. 2015. The American Psychiatric Association practice guidelines for the psychiatric evaluation of adults. American Psychiatric Pub.Google Scholar
- Aaron Balick. 2014. Technology, social media, and psychotherapy: Getting with the programme. Contemporary Psychotherapy, Vol. 6, 2 (2014).Google Scholar
- Natalya N Bazarova, Yoon Hyung Choi, Victoria Schwanda Sosik, Dan Cosley, and Janis Whitlock. 2015. Social sharing of emotions on Facebook: Channel differences, satisfaction, and replies. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing. 154--164.Google ScholarDigital Library
- GM Beal and JM Bohlen. 1957. The diffusion process (Special Report Nܘ 18, Agricultural Experiment Station). Iowa State College (1957).Google Scholar
- Martin Bella and Bruce Hanington. 2012. Universal methods of design. Beverly, MA: Rockport Publishers (2012), 204.Google Scholar
- Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. 2018. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943 (2018).Google Scholar
- Hugh R Beyer and Karen Holtzblatt. 1996. Contextual techniques starter kit. interactions, Vol. 3, 6 (1996), 44--50.Google Scholar
- Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative research in psychology, Vol. 3, 2 (2006), 77--101.Google Scholar
- Zana Bucc inca, Phoebe Lin, Krzysztof Z Gajos, and Elena L Glassman. 2020. Proxy tasks and subjective measures can be misleading in evaluating explainable ai systems. In Proceedings of the 25th International Conference on Intelligent User Interfaces.Google Scholar
- Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. PMLR, 77--91.Google Scholar
- Daniel Buschek, Lukas Mecke, Florian Lehmann, and Hai Dang. 2021. Nine Potential Pitfalls when Designing Human-AI Co-Creative Systems. arXiv preprint arXiv:2104.00358 (2021).Google Scholar
- Carrie J Cai, Jonas Jongejan, and Jess Holbrook. 2019. The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces.Google ScholarDigital Library
- Diogo V Carvalho, Eduardo M Pereira, and Jaime S Cardoso. 2019. Machine learning interpretability: A survey on methods and metrics. Electronics, Vol. 8, 8 (2019), 832.Google ScholarCross Ref
- Stevie Chancellor and Munmun De Choudhury. 2020. Methods in predictive techniques for mental health status on social media: a critical review. NPJ digital medicine (2020).Google Scholar
- Zhengping Che, Sanjay Purushotham, Robinder Khemani, and Yan Liu. 2016. Interpretable deep models for ICU outcome prediction. In AMIA Annual Symposium Proceedings, Vol. 2016.Google Scholar
- Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, F Maxwell Harper, and Haiyi Zhu. 2019. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In Proceedings of the 2019 chi conference on human factors in computing systems.Google ScholarDigital Library
- EunJeong Cheon and Norman Makoto Su. 2016. Integrating roboticist values into a Value Sensitive Design framework for humanoid robots. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 375--382.Google ScholarCross Ref
- EunJeong Cheon and Norman Makoto Su. 2018. Futuristic autobiographies: Weaving participant narratives to elicit values around robots. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction.Google ScholarDigital Library
- Clifford Christians. 1989. A theory of normative technology. In Technological Transformation. Springer, 123--139.Google Scholar
- Massimiliano Sassoli de Bianchi. 2013. The observer effect. Foundations of science, Vol. 18, 2 (2013), 213--243.Google Scholar
- Munmun De Choudhury, Scott Counts, and Eric Horvitz. 2013. Predicting postpartum changes in emotion and behavior via social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13).Google ScholarDigital Library
- Munmun De Choudhury, Scott Counts, Eric J. Horvitz, and Aaron Hoff. 2014. Characterizing and predicting postpartum depression from shared facebook data. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing - CSCW '14. ACM Press, Baltimore, Maryland, USA, 626--638.Google ScholarDigital Library
- Munmun De Choudhury, Min Kyung Lee, Haiyi Zhu, and David A Shamma. 2020. Introduction to this special issue on unifying human computer interaction and artificial intelligence. Human-Computer Interaction (2020).Google Scholar
- Shipi Dhanorkar, Christine T Wolf, Kun Qian, Anbang Xu, Lucian Popa, and Yunyao Li. 2021. Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle. In Designing Interactive Systems Conference 2021.Google ScholarDigital Library
- Jonathan Dodge, Q Vera Liao, Yunfeng Zhang, Rachel KE Bellamy, and Casey Dugan. 2019. Explaining models: an empirical study of how explanations impact fairness judgment. In Proceedings of the 24th International Conference on Intelligent User Interfaces.Google ScholarDigital Library
- Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017).Google Scholar
- Paul Dourish and Genevieve Bell. 2011. Divining a digital future: Mess and mythology in ubiquitous computing. Mit Press.Google ScholarDigital Library
- Graham Dove, Kim Halskov, Jodi Forlizzi, and John Zimmerman. 2017. UX Design Innovation: Challenges for Working with Machine Learning as a Design Material. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17 (2017), 278--288. https://doi.org/10.1145/3025453.3025739Google ScholarDigital Library
- Upol Ehsan, Q Vera Liao, Michael Muller, Mark O Riedl, and Justin D Weisz. 2021a. Expanding explainability: Towards social transparency in ai systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.Google ScholarDigital Library
- Upol Ehsan, Samir Passi, Q Vera Liao, Larry Chan, I Lee, Michael Muller, Mark O Riedl, et al. 2021b. The who in explainable ai: How ai background shapes perceptions of ai explanations. arXiv preprint arXiv:2107.13509 (2021).Google Scholar
- Upol Ehsan and Mark O Riedl. 2020. Human-centered explainable ai: Towards a reflective sociotechnical approach. In International Conference on Human-Computer Interaction. Springer, 449--466.Google ScholarDigital Library
- Upol Ehsan and Mark O Riedl. 2021. Explainability pitfalls: Beyond dark patterns in explainable AI. arXiv preprint arXiv:2109.12480 (2021).Google Scholar
- Upol Ehsan and Mark O Riedl. 2022. Social Construction of XAI: Do We Need One Definition to Rule Them All? arXiv preprint arXiv:2211.06499 (2022).Google Scholar
- Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark Riedl. 2019. Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions. In Proceedings of the International Conference on Intelligence User Interfaces.Google ScholarDigital Library
- Upol Ehsan, Philipp Wintersberger, Q Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daumé III, Andreas Riener, and Mark O Riedl. 2022. Human-Centered Explainable AI (HCXAI): beyond opening the black-box of AI. In CHI Conference on Human Factors in Computing Systems Extended Abstracts.Google ScholarDigital Library
- Malin Eiband, Daniel Buschek, and Heinrich Hussmann. 2021. How to support users in understanding intelligent systems? Structuring the discussion. In 26th International Conference on Intelligent User Interfaces. 120--132.Google ScholarDigital Library
- Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice. In 23rd international conference on intelligent user interfaces. 211--223.Google ScholarDigital Library
- Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "I always assumed that I wasn't really that close to [her]" Reasoning about Invisible Algorithms in News Feeds. In Proceedings of CHI conference on human factors in computing systems.Google Scholar
- Deborah Finfgeld-Connett. 2010. Generalizability and transferability of meta-synthesis research findings. Journal of advanced nursing, Vol. 66, 2 (2010), 246--254.Google ScholarCross Ref
- Carl E. Fisher and Paul S. Appelbaum. 2017. Beyond Googling. Harvard Review of Psychiatry (2017), 1. https://doi.org/10.1097/hrp.0000000000000145Google ScholarCross Ref
- Andrea Forte and Cliff Lampe. 2013. Defining, understanding, and supporting open collaboration: Lessons from the literature. American behavioral scientist, Vol. 57, 5 (2013), 535--547.Google Scholar
- Batya Friedman. 1996. Value-sensitive design. interactions, Vol. 3, 6 (1996), 16--23.Google Scholar
- Batya Friedman and David Hendry. 2012. The envisioning cards: a toolkit for catalyzing humanistic and technical imaginations. In Proceedings of the SIGCHI conference on human factors in computing systems. 1145--1148.Google ScholarDigital Library
- Batya Friedman, Peter Kahn, and Alan Borning. 2002. Value sensitive design: Theory and methods. University of Washington technical report 2--12 (2002).Google Scholar
- Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. arXiv preprint arXiv:1803.09010 (2018).Google Scholar
- Katy Ilonka Gero, Zahra Ashktorab, Casey Dugan, Qian Pan, James Johnson, Werner Geyer, Maria Ruiz, Sarah Miller, David R Millen, Murray Campbell, et al. 2020. Mental Models of AI Agents in a Cooperative Game Setting. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--12.Google ScholarDigital Library
- Bhavya Ghai, Q Vera Liao, Yunfeng Zhang, Rachel Bellamy, and Klaus Mueller. 2021. Explainable Active Learning (XAL) Toward AI Explanations as Interfaces for Machine Teachers. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, CSCW3 (2021).Google Scholar
- Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An approach to evaluating interpretability of machine learning. arXiv preprint arXiv:1806.00069 (2018).Google Scholar
- Ben Green and Salomé Viljoen. 2020. Algorithmic realism: expanding the boundaries of algorithmic thought. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 19--31.Google ScholarDigital Library
- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, and Fosca Giannotti. 2018a. Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820 (2018).Google Scholar
- Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018b. A survey of methods for explaining black box models. ACM computing surveys (CSUR), Vol. 51, 5 (2018), 1--42.Google Scholar
- David Gunning. 2017. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), nd Web, Vol. 2 (2017), 2.Google Scholar
- Sharath Chandra Guntuku, David B Yaden, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences (2017).Google Scholar
- Karen Hao. 2019. AI is sending people to jail -- and getting it wrong. MIT Technology Review (21 January 2019). https://www.technologyreview.com/s/612775/algorithms- criminal-justice-ai/ Retrieved 26-August-2019 fromGoogle Scholar
- MD Romael Haque, Katherine Weathington, Joseph Chudzik, and Shion Guha. 2020. Understanding Law Enforcement and Common Peoples' Perspectives on Designing Explainable Crime Mapping Algorithms. In Conference Companion Publication of the 2020 on Computer Supported Cooperative Work and Social Computing. 269--273.Google Scholar
- Gillian R Hayes. 2011. The relationship of action research to human-computer interaction. ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 18, 3 (2011), 1--20.Google ScholarDigital Library
- Michael Hind. 2019. Explaining explainable AI. XRDS: Crossroads, The ACM Magazine for Students, Vol. 25, 3 (2019), 16--19.Google ScholarDigital Library
- Michael Hind, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Alexandra Olteanu, and Kush R Varshney. 2018. Increasing trust in AI services through supplier's declarations of conformity. arXiv preprint arXiv:1808.07261, Vol. 18 (2018), 2813--2869.Google Scholar
- Kevin Anthony Hoff and Masooda Bashir. 2015. Trust in automation: Integrating empirical evidence on factors that influence trust. Human factors, Vol. 57, 3 (2015), 407--434.Google ScholarCross Ref
- Fred Hohman, Andrew Head, Rich Caruana, Robert DeLine, and Steven M Drucker. 2019. Gamut: A design probe to understand how data scientists understand machine learning models. In Proceedings of the 2019 CHI conference on human factors in computing systems.Google ScholarDigital Library
- Sarah Holland, Ahmed Hosny, and Sarah Newman. 2020. The dataset nutrition label. Data Protection and Privacy: Data Protection and Democracy (2020), Vol. 1 (2020).Google ScholarCross Ref
- Andreas Holzinger, Chris Biemann, Constantinos S Pattichis, and Douglas B Kell. 2017. What do we need to build explainable AI systems for the medical domain? arXiv preprint arXiv:1712.09923 (2017).Google Scholar
- Andrew JI Jones, Alexander Artikis, and Jeremy Pitt. 2013. The design of intelligent socio-technical systems. Artificial Intelligence Review, Vol. 39, 1 (2013), 5--20.Google ScholarDigital Library
- Gajendra Jung Katuwal and Robert Chen. 2016. Machine learning model interpretability for precision medicine. arXiv preprint arXiv:1610.09045 (2016).Google Scholar
- Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.Google ScholarDigital Library
- Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. 2017. Human Decisions and Machine Predictions. The Quarterly Journal of Economics, Vol. 133, 1 (2017), 237--293. https://doi.org/10.1093/qje/qjx032Google ScholarCross Ref
- Bran Knowles and John T Richards. 2021. The Sanction of Authority: Promoting Public Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 262--271.Google ScholarDigital Library
- Sivam Krish. 2011. A practical generative design method. Computer-Aided Design, Vol. 43, 1 (2011), 88--100.Google ScholarDigital Library
- Todd Kulesza, Simone Stumpf, Margaret Burnett, Sherry Yang, Irwin Kwan, and Weng-Keen Wong. 2013. Too much, too little, or just right? Ways explanations impact end users' mental models. In 2013 IEEE Symposium on visual languages and human centric computing. IEEE.Google ScholarCross Ref
- Vivian Lai, Han Liu, and Chenhao Tan. 2020. " Why is' Chicago'deceptive?" Towards Building Model-Driven Tutorials for Humans. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.Google ScholarDigital Library
- Ellen E Lee, John Torous, Munmun De Choudhury, Colin A Depp, Sarah A Graham, Ho-Cheol Kim, Martin P Paulus, John H Krystal, and Dilip V Jeste. 2021. Artificial Intelligence for Mental Healthcare: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging (2021).Google Scholar
- Min Kyung Lee, Daniel Kusbit, Anson Kahng, Ji Tae Kim, Xinran Yuan, Allissa Chan, Daniel See, Ritesh Noothigattu, Siheon Lee, Alexandros Psomas, et al. 2019. WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW (2019), 1--35.Google Scholar
- Q Vera Liao, Daniel Gruen, and Sarah Miller. 2020. Questioning the AI: informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.Google ScholarDigital Library
- Q. Vera Liao, Milena Pribić, Jaesik Han, Sarah Miller, and Daby Sow. 2021. Question-Driven Design Process for Explainable AI User Experiences. arXiv:2104.03483 [cs] (Sept. 2021). http://arxiv.org/abs/2104.03483 arXiv: 2104.03483.Google Scholar
- Q Vera Liao and Kush R Varshney. 2021. Human-centered explainable ai (xai): From algorithms to user experiences. arXiv preprint arXiv:2110.10790 (2021).Google Scholar
- Brian Y Lim and Anind K Dey. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th international conference on Ubiquitous computing. 195--204.Google ScholarDigital Library
- Brian Y Lim and Anind K Dey. 2010. Toolkit to support intelligibility in context-aware applications. In Proceedings of the 12th ACM international conference on Ubiquitous computing. 13--22.Google ScholarDigital Library
- Brian Y Lim and Anind K Dey. 2011. Investigating intelligibility for uncertain context-aware applications. In Proceedings of the 13th international conference on Ubiquitous computing. 415--424.Google ScholarDigital Library
- Brian Y Lim, Anind K Dey, and Daniel Avrahami. 2009a. Why and Why Not Explanations Improve the Intelligibility of Context-aware Intelligent Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 2119--2128. https://doi.org/10.1145/1518701.1519023Google ScholarDigital Library
- Brian Y Lim, Anind K Dey, and Daniel Avrahami. 2009b. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the SIGCHI conference on human factors in computing systems.Google ScholarDigital Library
- Zachary C Lipton. 2018. The mythos of model interpretability. Queue, Vol. 16, 3 (2018), 31--57.Google ScholarDigital Library
- Tyler J. Loftus, Patrick J. Tighe, Amanda C. Filiberto, Philip A. Efron, Scott C. Brakenridge, Alicia M. Mohr, Parisa Rashidi, Jr Upchurch, Gilbert R., and Azra Bihorac. 2020. Artificial Intelligence and Surgical Decision-making. JAMA Surgery (2020).Google Scholar
- Tania Lombrozo. 2011. The instrumental value of explanations. Philosophy Compass, Vol. 6, 8 (2011).Google ScholarCross Ref
- Tania Lombrozo. 2012. Explanation and abductive inference. Oxford handbook of thinking and reasoning (2012).Google Scholar
- Duri Long and Brian Magerko. 2020. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--16.Google ScholarDigital Library
- Ewa Luger and Abigail Sellen. 2016. " Like Having a Really Bad PA" The Gulf between User Expectation and Experience of Conversational Agents. In Proceedings of the 2016 CHI conference on human factors in computing systems. 5286--5297.Google ScholarDigital Library
- Henrietta Lyons, Eduardo Velloso, and Tim Miller. 2021. Conceptualising contestability: Perspectives on contesting algorithmic decisions. Proceedings of the ACM on Human-Computer Interaction, Vol. 5, CSCW1 (2021), 1--25.Google ScholarDigital Library
- Donald MacKenzie. 2018. Material Signals: A Historical Sociology of High-Frequency Trading. Amer. J. Sociology, Vol. 123, 6 (2018), 1635--1683. https://doi.org/10.1086/697318Google ScholarCross Ref
- Michael A Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--14.Google ScholarDigital Library
- Jonathan Magnusson. [n.d.]. Improving Dark Pattern Literacy of End Users. ( [n.,d.]).Google Scholar
- Erin E Makarius, Debmalya Mukherjee, Joseph D Fox, and Alexa K Fox. 2020. Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, Vol. 120 (2020).Google ScholarCross Ref
- Masike Malatji, Sune Von Solms, and Annlizé Marnewick. 2019. Socio-technical systems cybersecurity framework. Information & Computer Security (2019).Google Scholar
- Jon McCormack, Alan Dorin, and Troy Innocent. 2004. Generative Design: A Paradigm for Design Research. (2004).Google Scholar
- Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, Vol. 267 (2019), 1--38.Google ScholarCross Ref
- Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency. 220--229.Google ScholarDigital Library
- Brent Mittelstadt, Chris Russell, and Sandra Wachter. 2019. Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency. 279--288.Google ScholarDigital Library
- Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology (2020), 1--26.Google Scholar
- Sina Mohseni, Niloofar Zarei, and Eric D Ragan. 2018. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. arXiv (2018), arXiv--1811.Google Scholar
- Geoffrey A Moore and Regis McKenna. 1999. Crossing the chasm. (1999).Google Scholar
- Evgeny Morozov. 2013. To save everything, click here: The folly of technological solutionism. Public Affairs.Google Scholar
- Michael Muller and Q Vera Liao. [n.d.]. Exploring AI Ethics and Values through Participatory Design Fictions. ( [n.,d.]).Google Scholar
- Sean A Munson, Hasan Cavusoglu, Larry Frisch, and Sidney Fels. 2013. Sociotechnical challenges and progress in using social media for health. Journal of medical Internet research, Vol. 15, 10 (2013), e226.Google ScholarCross Ref
- John Murawski. 2019. Mortgage Providers Look to AI to Process Home Loans Faster. Wall Street Journal (18 March 2019). https://www.wsj.com/articles/mortgage-providers-look-to-ai-to-process-home-loans-faster-11552899212 Retrieved 16-September-2020 fromGoogle Scholar
- Trevor J Pinch and Wiebe E Bijker. 1984. The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social studies of science, Vol. 14, 3 (1984), 399--441.Google Scholar
- Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model interpretability. arXiv preprint arXiv:1802.07810 (2018).Google Scholar
- Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. 2022. Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. arXiv preprint arXiv:2204.01075 (2022).Google Scholar
- Emilee Rader, Kelley Cotter, and Janghee Cho. 2018. Explanations as Mechanisms for Supporting Algorithmic Transparency. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 103.Google ScholarDigital Library
- Emilee Rader and Rebecca Gray. 2015. Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). Association for Computing Machinery, New York, NY, USA, 173--182. https://doi.org/10.1145/2702123.2702174Google ScholarDigital Library
- Gabriëlle Ras, Marcel van Gerven, and Pim Haselager. 2018. Explanation methods in deep learning: Users, values, concerns and challenges. In Explainable and Interpretable Models in Computer Vision and Machine Learning. Springer.Google Scholar
- Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 1135--1144.Google ScholarDigital Library
- Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.Google ScholarCross Ref
- Cynthia Rudin, Caroline Wang, and Beau Coker. 2020. The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review, Vol. 2, 1 (31 3 2020). https://doi.org/10.1162/99608f92.6ed64b30 https://hdsr.mitpress.mit.edu/pub/7z10o269.Google ScholarCross Ref
- Selma vS abanović. 2010. Robots in society, society in robots. International Journal of Social Robotics, Vol. 2, 4 (2010), 439--450.Google ScholarCross Ref
- Koustuv Saha, Pranshu Gupta, Gloria Mark, Emre Kiciman, and Munmun De Choudhury. 2023. Observer Effect in Social Media Use. (2023).Google Scholar
- Koustuv Saha, Jordyn Seybolt, Stephen M Mattingly, Talayeh Aledavood, Chaitanya Konjeti, Gonzalo J Martinez, Ted Grover, Gloria Mark, and Munmun De Choudhury. 2021. What life events are disclosed on social media, how, when, and by whom?. In Proceedings of the 2021 CHI conference on human factors in computing systems. 1--22.Google ScholarDigital Library
- Koustuv Saha, Benjamin Sugar, John Torous, Bruno Abrahao, Emre Kiciman, and Munmun De Choudhury. 2019. A social media study on the effects of psychiatric medication use. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 13. 440--451.Google ScholarCross Ref
- Koustuv Saha, Asra Yousuf, Ryan L Boyd, James W Pennebaker, and Munmun De Choudhury. 2022. Social media discussions predict mental health consultations on college campuses. Scientific reports (2022).Google Scholar
- Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. 2020. What does it mean to'solve'the problem of discrimination in hiring? social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 458--468.Google ScholarDigital Library
- Lindsay Sanneman and Julie A Shah. 2020. A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI. In International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems.
- Benjamin Saunders, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. Saturation in qualitative research: exploring its conceptualization and operationalization. Quality & Quantity, Vol. 52, 4 (2018), 1893--1907.
- Jakob Schoeffer and Niklas Kuehl. 2021. Appropriate fairness perceptions? On the effectiveness of explanations in enabling people to assess the fairness of automated decision systems. In Companion Publication of the 2021 Conference on Computer Supported Cooperative Work and Social Computing. 153--157.
- Douglas Schuler and Aki Namioka. 1993. Participatory design: Principles and practices. CRC Press.
- Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency.
- Phoebe Sengers, Kirsten Boehner, Shay David, and Joseph 'Jofish' Kaye. 2005. Reflective design. In Proceedings of the 4th Decennial Conference on Critical Computing: Between Sense and Sensibility. 49--58.
- Ben Shneiderman. 2020. Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, Vol. 36, 6 (2020), 495--504.
- Keng Siau and Weiyu Wang. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, Vol. 31, 2 (2018), 47--53.
- Supriya Singh, Anuja Cabraal, Catherine Demosthenous, Gunela Astbrink, and Michele Furlong. 2007. Password sharing: implications for security design based on social practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 895--904.
- Alison Smith-Renner, Ron Fan, Melissa Birchfield, Tongshuang Wu, Jordan Boyd-Graber, Daniel S Weld, and Leah Findlater. 2020. No explainability without accountability: An empirical study of explanations and feedback in interactive ML. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--13.
- Kacper Sokol and Peter Flach. 2020. Explainability fact sheets: a framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
- Ernest T Stringer. 2007. Action Research, 3rd edition.
- Simone Stumpf, Adrian Bussone, and Dympna O'Sullivan. 2016. Explanations considered harmful? User interactions with machine learning systems. In ACM SIGCHI Workshop on Human-Centered Machine Learning.
- Jiao Sun, Q Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, and Justin D Weisz. 2022. Investigating Explainability of Generative AI for Code through Scenario-based Design. In 27th International Conference on Intelligent User Interfaces. 212--228.
- Harini Suresh and John V Guttag. 2019. A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002 (2019).
- Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, and Supriyo Chakraborty. 2018. Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. arXiv preprint arXiv:1806.07552 (2018).
- Jennifer Wortman Vaughan and Hanna Wallach. [n.d.]. A Human-Centered Agenda for Intelligible Machine Learning.
- Viswanath Venkatesh and Fred D Davis. 2000. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, Vol. 46, 2 (2000), 186--204.
- Viswanath Venkatesh, Michael G Morris, Gordon B Davis, and Fred D Davis. 2003. User acceptance of information technology: Toward a unified view. MIS Quarterly (2003), 425--478.
- Daniel Vigo, Graham Thornicroft, and Rifat Atun. 2016. Estimating the true global burden of mental illness. The Lancet Psychiatry, Vol. 3, 2 (2016), 171--178.
- Guy H Walker, Neville A Stanton, Paul M Salmon, and Daniel P Jenkins. 2008. A review of sociotechnical systems theory: a classic concept for new command and control paradigms. Theoretical Issues in Ergonomics Science, Vol. 9, 6 (2008).
- Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
- Qiaosi Wang, Koustuv Saha, Eric Gregori, David Joyner, and Ashok Goel. 2021. Towards Mutual Theory of Mind in Human-AI Interaction: How Language Reflects What Students Perceive About a Virtual Teaching Assistant. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1--14.
- Daniel A Wilkenfeld and Tania Lombrozo. 2015. Inference to the best explanation (IBE) versus explaining for the best inference (EBI). Science & Education, Vol. 24, 9--10 (2015), 1059--1077.
- Christine Wolf and Jeanette Blomberg. 2019. Evaluating the promise of human-algorithm collaborations in everyday work practices. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW (2019), 1--23.
- Christine T Wolf. 2019. Explainability scenarios: towards scenario-based XAI design. In Proceedings of the 24th International Conference on Intelligent User Interfaces. 252--257.
- Yao Xie, Melody Chen, David Kao, Ge Gao, and Xiang 'Anthony' Chen. 2020. CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
- Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L Arendt. 2020a. How do visual explanations foster end users' appropriate trust in machine learning?. In Proc. IUI.
- Qian Yang. 2019. Profiling Artificial Intelligence as a Material for User Experience Design. Ph.D. Dissertation.
- Qian Yang, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020b. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. In Proc. CHI.
- Qian Yang, Aaron Steinfeld, and John Zimmerman. 2019. Unremarkable AI: Fitting intelligent decision support into critical, clinical decision-making processes. In Proc. CHI.
- Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F Antaki. 2016. Investigating the heart pump implant decision process: opportunities for decision support tools to help. In Proc. CHI.
- Dong Whi Yoo, Michael L Birnbaum, Anna R Van Meter, Asra F Ali, Elizabeth Arenare, Gregory D Abowd, and Munmun De Choudhury. 2020. Designing a clinician-facing tool for using insights from patients' social media activity: Iterative co-design approach. JMIR Mental Health (2020).
- Dong Whi Yoo, Sindhu Kiranmai Ernala, Bahador Saket, Domino Weir, Elizabeth Arenare, Asra F Ali, Anna R Van Meter, Michael L Birnbaum, Gregory D Abowd, and Munmun De Choudhury. 2021. Clinician perspectives on using computational mental health insights from patients' social media activities: design and qualitative evaluation of a prototype. JMIR Mental Health, Vol. 8, 11 (2021), e25455.
- Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. 2015. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579 (2015).
- Quanshi Zhang, Yu Yang, Haotian Ma, and Ying Nian Wu. 2019. Interpreting CNNs via decision trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 6261--6270.
- Yunfeng Zhang, Q Vera Liao, and Rachel KE Bellamy. 2020. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM.
- Haiyi Zhu, Bowen Yu, Aaron Halfaker, and Loren Terveen. 2018. Value-sensitive algorithm design: Method, case study, and lessons. Proceedings of the ACM on Human-Computer Interaction, Vol. 2, CSCW (2018), 1--23.