
A comparative study of coarse- and fine-grained safe regression test-selection techniques

Published: 01 April 2001

Abstract

Regression test-selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. Over the past two decades, numerous regression test-selection techniques have been described in the literature. Initial empirical studies of some of these techniques have suggested that they can indeed benefit testers, but so far, few studies have empirically compared different techniques. In this paper, we present the results of a comparative empirical study of two safe regression test-selection techniques. The techniques we studied have been implemented as the tools DejaVu and TestTube; we compared these tools in terms of a cost model incorporating precision (ability to eliminate unnecessary test cases), analysis cost, and test execution cost. Our results indicate that, in many instances, despite its relative lack of precision, TestTube can reduce the time required for regression testing as much as the more precise DejaVu. In other instances, particularly where the time required to execute test cases is long, DejaVu's superior precision gives it a clear advantage over TestTube. Such variations in relative performance can complicate a tester's choice of which tool to use. Our experimental results suggest that a hybrid regression test-selection tool that combines features of TestTube and DejaVu may be an answer to these complications; we present an initial case study that demonstrates the potential benefit of such a tool.
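The core idea behind both tools can be sketched as coverage-based selection at different granularities: a test is selected whenever its coverage intersects the set of changed program entities. The sketch below is illustrative only; the function and coverage-map names are hypothetical and do not come from DejaVu or TestTube, which perform considerably more sophisticated analyses.

```python
# Illustrative sketch of safe, coverage-based regression test selection
# at two granularities. All names here are hypothetical.

def select_tests(coverage, changed_entities):
    """Return the tests whose coverage intersects the changed entities.

    coverage: dict mapping test name -> set of covered entities
    changed_entities: set of entities (functions or statements) that changed
    """
    return {t for t, covered in coverage.items() if covered & changed_entities}

# Coarse granularity (TestTube-style): entities are whole functions.
func_coverage = {
    "t1": {"parse", "eval"},
    "t2": {"eval"},
    "t3": {"print"},
}
# A change anywhere inside eval() selects every test that enters eval().
coarse = select_tests(func_coverage, {"eval"})   # {"t1", "t2"}

# Fine granularity (DejaVu-style): entities are statements within functions.
stmt_coverage = {
    "t1": {"eval:1", "eval:2"},
    "t2": {"eval:1"},
    "t3": {"print:1"},
}
# If only statement eval:2 changed, the finer analysis selects fewer tests.
fine = select_tests(stmt_coverage, {"eval:2"})   # {"t1"}
```

The coarse-grained map is cheaper to build and query, but over-selects; the fine-grained map is more precise at the cost of a more expensive analysis, which is exactly the tradeoff the paper's cost model captures.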



    Reviews

    John Joseph Cupak

Testing software that has been modified to incorporate enhancements or fix defects is a time-consuming effort. Traditionally, the test suite developed for the initial version of the software is completely rerun against the modified software to verify that the changes do not introduce additional defects. The cost of this retesting (known as regression testing) can be reduced by running only the subset of tests that exercises the changed code.

This paper compares two tools that analyze the original and changed software to identify a subset of the test cases sufficient to reveal faults in the modified software. The tools were used to analyze three different groups of software, from a suite of small programs to a single, large (49 KSLOC) program. Both tools required excessive analysis time, and proved less efficient than simply reexecuting all test cases. However, the authors' investigation and resulting analysis indicate that the tools would work better if the best parts of each were combined. In addition, much of the execution time appears to be taken up by the analysis phase of each tool; this phase might be shortened by a compiler that retained semantic information and provided it to the tools. It would also be helpful if the authors provided a complexity measure of the software under test, as complexity appears to be a factor in each tool's analysis time.
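The reviewer's point about analysis time amounts to a simple break-even condition: selection pays off only when the analysis cost plus the cost of running the selected subset falls below the cost of rerunning the full suite. A minimal sketch of that condition, with purely illustrative numbers and a hypothetical function name:

```python
# Hedged sketch of the break-even condition for test selection.
# All names and numbers are illustrative, not drawn from the paper.

def selection_saves_time(analysis_cost, cost_per_test, n_selected, n_total):
    """True if selecting tests is cheaper than rerunning the whole suite."""
    return analysis_cost + n_selected * cost_per_test < n_total * cost_per_test

# Cheap tests: the analysis cost dominates, so retest-all wins.
selection_saves_time(analysis_cost=100.0, cost_per_test=1.0,
                     n_selected=40, n_total=100)    # False

# Expensive tests: precision pays for the analysis many times over.
selection_saves_time(analysis_cost=100.0, cost_per_test=10.0,
                     n_selected=40, n_total=100)    # True
```

This is the same dependence on test-execution time that the abstract notes: DejaVu's precision matters most when individual test cases are long-running.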


Published in

ACM Transactions on Software Engineering and Methodology, Volume 10, Issue 2 (April 2001), 106 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/367008
Copyright © 2001 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
