Abstract
There are many ways to measure quality before and after software is released. For commercial and internal-use-only products, the most important measurement is the user's perception of product quality. Unfortunately, perception is difficult to measure directly, so companies attempt to quantify it through customer satisfaction surveys and through failure and behavioral data collected from their customer bases. This article focuses on the problems of capturing failure data from customer sites. To explore the pertinent issues, I draw on experience gained from collecting failure data from Windows XP systems, but the problems you are likely to face when developing internal (noncommercial) software are much the same.
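To make the data-capture step concrete, here is a minimal sketch, not the mechanism the article describes, of client-side failure reporting in Python: an uncaught-exception hook that packages the stack trace and environment details and posts them to a collection endpoint. The REPORT_URL endpoint and the report fields are assumptions for illustration only.

```python
import json
import platform
import sys
import traceback
import urllib.request

# Hypothetical collection endpoint; real systems (e.g., Windows Error
# Reporting) use their own transport, consent flow, and crash bucketing.
REPORT_URL = "https://example.com/failure-reports"

def build_report(exc_type, exc_value, exc_tb):
    """Assemble a minimal failure report: stack trace plus environment."""
    return {
        "exception": exc_type.__name__,
        "message": str(exc_value),
        "stack": traceback.format_exception(exc_type, exc_value, exc_tb),
        "os": platform.platform(),
        "python": platform.python_version(),
    }

def report_failure(exc_type, exc_value, exc_tb):
    """Uncaught-exception hook: try to send a report, then fail as usual."""
    try:
        payload = json.dumps(build_report(exc_type, exc_value, exc_tb)).encode()
        req = urllib.request.Request(
            REPORT_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)
    except Exception:
        pass  # Never let reporting mask the original failure.
    # Fall through to the default handler so the crash is still visible.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = report_failure
```

Even this toy version hints at the harder problems a real pipeline must solve: obtaining user consent, deduplicating (bucketing) similar crashes, and queuing reports when the client is offline.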