DOI: 10.1145/1277741.1277839

How well does result relevance predict session satisfaction?

Published: 23 July 2007

ABSTRACT

Per-query relevance measures provide standardized, repeatable measurements of search result quality, but they ignore much of what users actually experience in a full search session. This paper examines how well we can approximate a user's ultimate session-level satisfaction using a simple relevance metric. We find that this relationship is surprisingly strong. By incorporating additional properties of the query itself, we construct a model which predicts user satisfaction even more accurately than relevance alone.
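The paper's own model and data are not reproduced on this page, but the idea in the abstract lends itself to a short sketch: score each session with a per-query relevance metric, then compare a satisfaction predictor built on relevance alone against one that also uses simple query properties. The Python below is a minimal, hypothetical illustration under stated assumptions: DCG over the first query's top results stands in for the relevance metric (the paper's actual metric may differ), the query properties (query_len, navigational) are illustrative choices, and the data are synthetic rather than the authors'.

```python
# A minimal sketch (not the paper's actual model or data) of the idea in the
# abstract: predict binary session satisfaction from a per-query relevance
# metric, then add simple query properties and compare predictive accuracy.
# Feature names, data, and the DCG cutoff are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def dcg_at_k(gains, k=3):
    """Discounted cumulative gain over the top-k results (log2 discount)."""
    gains = np.asarray(gains, dtype=float)[:k]
    discounts = np.log2(np.arange(2, gains.size + 2))
    return float(np.sum(gains / discounts))

# Synthetic sessions: graded relevance (0-2) of the first query's top 3
# results, plus two hypothetical query properties.
n = 500
rel = rng.integers(0, 3, size=(n, 3))              # graded relevance labels
relevance = np.array([dcg_at_k(r) for r in rel])   # per-query relevance metric
query_len = rng.integers(1, 8, size=n)             # words in the query
navigational = rng.integers(0, 2, size=n)          # 1 if navigational intent

# Simulated satisfaction: driven mostly by relevance, nudged by query type.
logit = 1.2 * relevance - 0.3 * query_len + 0.8 * navigational - 2.0
satisfied = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Compare relevance alone vs. relevance plus query properties.
X_rel = relevance.reshape(-1, 1)
X_full = np.column_stack([relevance, query_len, navigational])
for name, X in [("relevance only", X_rel), ("relevance + query props", X_full)]:
    acc = cross_val_score(LogisticRegression(), X, satisfied, cv=5).mean()
    print(f"{name}: {acc:.3f} accuracy")
```

Under this simulation the combined model should score higher, mirroring the abstract's claim that query properties add predictive power beyond relevance alone; with real session logs, the same comparison would be run against observed satisfaction labels.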


Published in

SIGIR '07: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
July 2007, 946 pages
ISBN: 9781595935977
DOI: 10.1145/1277741

Copyright © 2007 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



      Acceptance Rates

Overall Acceptance Rate: 792 of 3,983 submissions, 20%
