
DeepXplore: automated whitebox testing of deep learning systems

Published: 24 October 2019

Abstract

Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains such as self-driving cars and malware detection, where the correctness and predictability of a system's behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs.

We design, implement, and evaluate DeepXplore, the first white-box framework for systematically testing real-world DL systems. First, we introduce neuron coverage for measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques.
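To make these two ingredients concrete, the sketch below is a hypothetical illustration (not the authors' released implementation) of one way to compute neuron coverage for a Keras model and to take one gradient-ascent step on the joint objective. The threshold t, the weights lam1 and lam2, the per-batch activation scaling, and the helper neuron_fn are illustrative assumptions; x is assumed to carry a batch dimension.

    import numpy as np
    import tensorflow as tf

    def neuron_coverage(model, inputs, t=0.25):
        # Fraction of neurons whose scaled activation exceeds threshold t
        # on at least one input (a simplified reading of the metric).
        covered, total = 0, 0
        for layer in model.layers:
            if not layer.weights:  # skip layers with no neurons (e.g., Flatten)
                continue
            acts = tf.keras.Model(model.input, layer.output)(inputs).numpy()
            acts = acts.reshape(len(inputs), -1)
            lo, hi = acts.min(axis=0), acts.max(axis=0)
            scaled = (acts - lo) / (hi - lo + 1e-8)  # scale to [0, 1] per neuron
            covered += int((scaled.max(axis=0) > t).sum())
            total += acts.shape[1]
        return covered / total

    def joint_gradient_step(models, x, class_idx, neuron_fn,
                            lam1=0.1, lam2=0.5, step=0.01):
        # One gradient-ascent step on input x that (a) pushes the models'
        # predictions for class_idx apart and (b) raises the activation of
        # a so-far-uncovered neuron selected by the caller via neuron_fn.
        x = tf.Variable(tf.convert_to_tensor(x, dtype=tf.float32))
        with tf.GradientTape() as tape:
            # differential objective: drive models[0] down on class_idx
            # while driving the remaining models up
            obj1 = tf.add_n([m(x)[0, class_idx] for m in models[1:]]) \
                   - lam1 * models[0](x)[0, class_idx]
            obj2 = neuron_fn(x)  # activation of the target neuron
            obj = obj1 + lam2 * obj2
        grad = tape.gradient(obj, x)
        # gradient-sign step; domain-specific constraints (e.g., valid
        # pixel ranges) would be applied here
        return (x + step * tf.sign(grad)).numpy()

Iterating joint_gradient_step from an unlabeled seed input until the models disagree yields a difference-inducing test case while also driving up neuron_coverage, which is the joint optimization the abstract describes.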

DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets such as ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model's accuracy by up to 3%.
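The retraining step can be illustrated with a minimal sketch, assuming a Keras classifier and that the error-inducing inputs have already been labeled (for example, by majority vote among the cross-referenced models); all names here are hypothetical.

    import numpy as np

    def augment_and_retrain(model, x_train, y_train, x_err, y_err,
                            epochs=5, batch_size=64):
        # Fine-tune the model on the original training data augmented
        # with the error-inducing inputs found by whitebox testing.
        x_aug = np.concatenate([x_train, x_err])
        y_aug = np.concatenate([y_train, y_err])
        model.fit(x_aug, y_aug, epochs=epochs, batch_size=batch_size)
        return model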



Reviews

Amos O. Olagunju

Many of us, from self-driving car owners to online bankers and shoppers, depend on electronic systems we must be able to trust. How should real-life computer systems be methodically tested for nearly all potential faults and malware threats, so as to instill confidence in users? Pei et al. present DeepXplore, the first system of its kind for systematically and automatically testing deep learning (DL) software for defects and security failures such as malware misclassification. The authors identify two major drawbacks of current deep neural network (DNN) testing strategies: (1) the exorbitant human effort required to manually label inputs with correct behaviors and classifications, and (2) the low coverage of the many possible behaviors a DNN can exhibit. Consequently, they present DeepXplore, an automated whitebox framework for methodically exposing erroneous corner case behaviors in DNNs, such as self-driving cars crashing into guard rails.

DeepXplore uses unlabeled seed inputs to activate many representative neurons and thereby exercise a multiplicity of DNN behaviors. Its algorithm jointly maximizes neuron coverage and the diversity of behaviors across DNNs to uncover a variety of system faults and failures, and the authors present efficient algorithms for solving this joint optimization problem.

Is DeepXplore effective in exposing threats and failures in emerging online computerized systems? Experiments were performed with datasets drawn from public images, driving data, and malicious software, across several different DNNs. The results reveal that neuron coverage is a reliable metric for DNN testing. But what about issues related to testing simulated shadows, the "efficient search for error-inducing test cases for arbitrary transformations," and the soundness of the gradient-based local search used in DeepXplore? I invite colleagues from computational and applied mathematics to investigate these problems and solutions. Clearly, the authors offer compelling, forward-looking ideas about the nature of DL and DNN research challenges.
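The cross-referencing oracle the review describes can be stated in a few lines: given several independently trained models for the same task, any input on which their predictions disagree is flagged as a candidate erroneous corner case. This is a minimal sketch, assuming the model list holds Keras classifiers.

    import numpy as np

    def differential_oracle(models, x):
        # Flag x as error-inducing when functionally similar models
        # disagree on its label -- at least one of them must be wrong.
        preds = [int(np.argmax(m.predict(x[np.newaxis], verbose=0)))
                 for m in models]
        return len(set(preds)) > 1, preds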


• Published in

Communications of the ACM, Volume 62, Issue 11 (November 2019), 136 pages
ISSN: 0001-0782
EISSN: 1557-7317
DOI: 10.1145/3368886

Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

            Publisher

            Association for Computing Machinery

            New York, NY, United States


            Qualifiers

            • research-article
            • Research
            • Refereed
