research-article
Open Access

Shadow Symbolic Execution for Testing Software Patches

Published: 25 September 2018

Abstract

While developers are aware of the importance of comprehensively testing patches, the large effort involved in coming up with relevant test cases means that such testing rarely happens in practice. Furthermore, even when test cases are written to cover the patch, they often exercise the same behaviour in the old and the new version of the code. In this article, we present a symbolic execution-based technique that is designed to generate test inputs that cover the new program behaviours introduced by a patch. The technique works by executing both the old and the new version in the same symbolic execution instance, with the old version shadowing the new one. During this combined shadow execution, whenever a branch point is reached where the old and the new version diverge, we generate a test input that exercises the divergence, and then comprehensively test the new behaviours of the new version. We evaluate our technique on the Coreutils patches from the CoREBench suite of regression bugs, and show that it is able to generate test inputs that exercise newly added behaviours and expose some of the regression bugs.
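To make the divergence mechanism concrete, the sketch below illustrates the core idea at a single branch point: when the old and new versions may evaluate a condition differently, the four combinations of (old, new) branch outcomes are considered, and any satisfiable mixed combination is a divergence for which a test input can be generated. This is a minimal illustration, not the paper's implementation: the helper names (`find_witness`, `divergences_at_branch`) are hypothetical, and a brute-force search over a small integer domain stands in for the constraint solver that a real symbolic execution engine would query.

```python
def find_witness(path_cond, old_cond, new_cond, domain=range(-100, 101)):
    """Return an input satisfying the path condition and the given branch
    outcomes, or None if the combination has no witness in the domain.
    (Stands in for an SMT satisfiability query.)"""
    for x in domain:
        if path_cond(x) and old_cond(x) and new_cond(x):
            return x
    return None

def divergences_at_branch(path_cond, old_branch, new_branch):
    """Enumerate the four (old, new) outcome combinations at one branch.

    Returns a dict mapping (old_taken, new_taken) -> witness input or None.
    Mixed combinations (old_taken != new_taken), when satisfiable, yield
    test inputs that exercise a divergence between the two versions."""
    results = {}
    for old_taken in (True, False):
        for new_taken in (True, False):
            oc = old_branch if old_taken else (lambda x: not old_branch(x))
            nc = new_branch if new_taken else (lambda x: not new_branch(x))
            results[(old_taken, new_taken)] = find_witness(path_cond, oc, nc)
    return results

# Example: a patch changes `if (x > 10)` to `if (x >= 10)` under the
# accumulated path condition x > 0.
if __name__ == "__main__":
    path = lambda x: x > 0
    old = lambda x: x > 10   # branch condition in the old version
    new = lambda x: x >= 10  # branch condition in the new version
    for (o, n), witness in divergences_at_branch(path, old, new).items():
        if o != n and witness is not None:
            print(f"divergence: old={o}, new={n}, input x={witness}")
```

In this example, only the combination (old takes the else branch, new takes the then branch) is satisfiable, with witness x = 10: exactly the input at which the patch changes behaviour. A shadow symbolic execution engine would then continue exploring the new version's paths from such divergence points.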



• Published in

  ACM Transactions on Software Engineering and Methodology, Volume 27, Issue 3 (July 2018), 210 pages
  ISSN: 1049-331X
  EISSN: 1557-7392
  DOI: 10.1145/3276753

          Copyright © 2018 Owner/Author

This work is licensed under a Creative Commons Attribution 4.0 International License.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

Publication History

• Published: 25 September 2018
• Accepted: 1 June 2018
• Revised: 1 May 2018
• Received: 1 November 2017

