Research article · ICSE Conference Proceedings · DOI: 10.5555/2486788.2486910

Measuring architecture quality by structure plus history analysis

Published: 18 May 2013

ABSTRACT

This case study combines known software structure and revision history analysis techniques, in known and new ways, to predict bug-related change frequency and to uncover architecture-related risks in an agile industrial software development project. We applied a suite of structure and history measures and statistically analyzed the correlations between them. We detected architecture issues by identifying outliers in the distributions of measured values and investigating the architectural significance of the associated classes. We used a clustering method to identify sets of files that often change together without being structurally close, and investigated whether architecture issues were among the root causes. The development team confirmed that the identified clusters reflected significant architectural violations, unstable key interfaces, and important undocumented assumptions shared between modules. The combined structure diagrams and history data justified a refactoring proposal that was accepted by the project manager and implemented.
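As a concrete illustration of the co-change analysis described above, the sketch below mines a revision history for file pairs that change together frequently yet have no structural dependency between them, which is the signature of the hidden couplings the study looks for. It is a minimal sketch, not the authors' implementation: the commit data, the dependency set, and the THRESHOLD cutoff are all hypothetical placeholders.

from collections import Counter
from itertools import combinations

# Hypothetical revision history: each commit is the set of files it touched.
commits = [
    {"ui/View.java", "core/Model.java"},
    {"ui/View.java", "core/Model.java", "net/Client.java"},
    {"ui/View.java", "core/Model.java"},
    {"ui/View.java", "net/Client.java"},
    {"net/Client.java", "net/Socket.java"},
]

# Hypothetical structural dependencies (e.g., extracted by static analysis).
deps = {("ui/View.java", "core/Model.java")}

def structurally_close(a, b):
    return (a, b) in deps or (b, a) in deps

# Count how often each unordered file pair appears in the same commit.
co_change = Counter()
for files in commits:
    for a, b in combinations(sorted(files), 2):
        co_change[(a, b)] += 1

# Report pairs that co-change often but are not structurally related:
# candidates for hidden dependencies or modularity violations.
THRESHOLD = 2  # arbitrary cutoff, chosen for this toy data
for (a, b), n in sorted(co_change.items()):
    if n >= THRESHOLD and not structurally_close(a, b):
        print(f"possible hidden coupling: {a} <-> {b} (co-changed {n} times)")

On the toy data above, ui/View.java and net/Client.java co-change twice without any recorded dependency and are flagged, while the frequently co-changing View/Model pair is suppressed by its known structural edge; the study's clustering method pursues the same idea at scale.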

