DOI: 10.1145/1076034.1076064
Article

Information retrieval system evaluation: effort, sensitivity, and reliability

Published: 15 August 2005

ABSTRACT

The effectiveness of information retrieval systems is measured by comparing performance on a common set of queries and documents. Significance tests are often used to evaluate the reliability of such comparisons. Previous work has examined such tests, but produced results with limited application. Other work established an alternative benchmark for significance, but the resulting test was too stringent. In this paper, we revisit the question of how such tests should be used. We find that the t-test is highly reliable (more so than the sign or Wilcoxon test), and is far more reliable than simply showing a large percentage difference in effectiveness measures between IR systems. Our results show that past empirical work on significance tests over-estimated the error of such tests. We also re-consider comparisons between the reliability of precision at rank 10 and mean average precision, arguing that past comparisons did not consider the assessor effort required to compute such measures. This investigation shows that assessor effort would be better spent building test collections with more topics, each assessed in less detail.
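
As a concrete illustration of the kind of comparison the paper studies, the sketch below computes per-topic P@10 and average precision for two hypothetical runs and then applies the paired t-test and the Wilcoxon signed-rank test to the per-topic scores. This is a minimal sketch only: the topics, document identifiers, and run data are invented, and the use of Python with scipy is an assumption of this illustration, not the paper's own experimental setup.

    # A minimal sketch (not the paper's protocol): per-topic P@10 and average
    # precision for two hypothetical runs, compared with the paired t-test and
    # the Wilcoxon signed-rank test. All data below is invented.
    from scipy import stats

    def precision_at_k(ranking, relevant, k=10):
        # Fraction of the top-k retrieved documents that are relevant;
        # needs relevance judgments only to depth k.
        return sum(1 for doc in ranking[:k] if doc in relevant) / k

    def average_precision(ranking, relevant):
        # Mean precision at each rank holding a relevant document, divided by
        # the number of relevant documents for the topic.
        hits, precisions = 0, []
        for rank, doc in enumerate(ranking, start=1):
            if doc in relevant:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / len(relevant) if relevant else 0.0

    # Hypothetical relevance judgments and rankings for three topics.
    qrels = {"t1": {"d1", "d4"}, "t2": {"d2"}, "t3": {"d3", "d5", "d6"}}
    run_a = {"t1": ["d1", "d2", "d4"], "t2": ["d2", "d7"], "t3": ["d5", "d3", "d9"]}
    run_b = {"t1": ["d2", "d1", "d4"], "t2": ["d7", "d2"], "t3": ["d9", "d5", "d3"]}

    topics = sorted(qrels)
    ap_a = [average_precision(run_a[t], qrels[t]) for t in topics]
    ap_b = [average_precision(run_b[t], qrels[t]) for t in topics]
    p10_a = [precision_at_k(run_a[t], qrels[t]) for t in topics]
    p10_b = [precision_at_k(run_b[t], qrels[t]) for t in topics]

    print("MAP:", sum(ap_a) / len(topics), "vs", sum(ap_b) / len(topics))
    print("Mean P@10:", sum(p10_a) / len(topics), "vs", sum(p10_b) / len(topics))

    # Paired significance tests over per-topic score differences: the t-test
    # uses the magnitudes of the differences, the Wilcoxon signed-rank test
    # only their ranks and signs. In practice one would use tens of topics.
    t_stat, t_p = stats.ttest_rel(ap_a, ap_b)
    w_stat, w_p = stats.wilcoxon(ap_a, ap_b)
    print(f"paired t-test p={t_p:.3f}, Wilcoxon p={w_p:.3f}")

Note that P@10 requires judgments only to rank 10 of each run, whereas average precision in principle requires judging every relevant document for the topic; this difference in assessor effort is the point the paper's final comparison turns on.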


Published in

SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval
August 2005
708 pages
ISBN: 1595930345
DOI: 10.1145/1076034

      Copyright © 2005 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 792 of 3,983 submissions, 20%
