ABSTRACT
Usability testing has long been considered a gold standard for evaluating the ease of use of software and websites, producing metrics that benchmark the experience and identifying areas for improvement. However, logistical complexity and cost can make frequent usability testing infeasible. Alternatives include various forms of expert review, which identify usability problems but do not provide task performance metrics. This case study describes a method in which multiple teams of trained evaluators generated task usability ratings that were then compared to metrics from an independently run usability test of three software products. Inter-rater reliability ranged from modest to strong, and the correlation between actual and predicted metrics established fair concurrent validity, though opportunities to improve both reliability and validity were identified. With clear guidelines, this method can provide a useful usability rating for a range of products across multiple platforms without significant cost in time or money.
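The two statistics named in the abstract can be illustrated with a short sketch. The Python example below uses entirely hypothetical data and choices: the team names, ratings, and task times are invented, and mean pairwise Pearson correlation stands in for whatever reliability index the study actually used. It is a minimal sketch of how inter-rater reliability and concurrent validity of this kind might be computed, not the study's analysis.

```python
# Sketch: compare expert-predicted task ratings against observed
# usability-test metrics. All data below are hypothetical placeholders.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical: each evaluator team rates the same six tasks
# (higher rating = more predicted difficulty).
team_ratings = {
    "team_a": [2, 5, 3, 7, 4, 6],
    "team_b": [3, 5, 2, 8, 4, 5],
    "team_c": [2, 6, 3, 7, 5, 6],
}

# Hypothetical observed metric from the independent usability test,
# e.g. mean task time in seconds for the same six tasks.
observed_task_times = [41, 95, 52, 140, 77, 103]

# Inter-rater reliability, approximated here as the mean pairwise
# correlation between team ratings (an assumption; the study may
# have used a different reliability index).
teams = list(team_ratings)
pairwise = [
    pearson(team_ratings[a], team_ratings[b])
    for i, a in enumerate(teams)
    for b in teams[i + 1:]
]
print(f"mean pairwise inter-rater correlation: {mean(pairwise):.2f}")

# Concurrent validity: correlation between the averaged predicted
# ratings and the empirically observed metric.
mean_pred = [mean(col) for col in zip(*team_ratings.values())]
print(f"predicted-vs-observed correlation: "
      f"{pearson(mean_pred, observed_task_times):.2f}")
```

With these invented numbers the predicted-vs-observed correlation comes out strongly positive, which is the pattern the abstract describes as fair concurrent validity; real ratings and metrics would of course yield different values.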