review-article
Public Access

The challenge of crafting intelligible intelligence

Published: 21 May 2019

Abstract

For people to trust the behavior of complex AI algorithms, especially in mission-critical settings, those algorithms must be made intelligible.


Published in

Communications of the ACM, Volume 62, Issue 6 (June 2019), 85 pages
ISSN: 0001-0782
EISSN: 1557-7317
DOI: 10.1145/3336127

Copyright © 2019 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • review-article
        • Popular
        • Refereed
