Abstract
To trust the behavior of complex AI algorithms, especially in mission-critical settings, they must be made intelligible.
Index Terms
- The challenge of crafting intelligible intelligence