DOI: 10.1145/1390334.1390455

poster

Relevance thresholds in system evaluations

Published: 20 July 2008

ABSTRACT

We introduce and explore the concept of an individual's relevance threshold as a way of reconciling differences in outcomes between batch and user experiments.


• Published in

  SIGIR '08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
  July 2008
  934 pages
  ISBN: 9781605581644
  DOI: 10.1145/1390334
        Copyright © 2008 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 20 July 2008


        Qualifiers

        • poster

        Acceptance Rates

        Overall Acceptance Rate: 792 of 3,983 submissions, 20%
