DOI: 10.1145/1840784.1840826
Poster

Assessors' search result satisfaction associated with relevance in a scientific domain

Published: 18 August 2010

ABSTRACT

In this poster we investigate the associations between the perceived ease of assessing situational relevance on a four-point scale, perceived satisfaction with the retrieval results, and the actual relevance assessments and retrieval performance obtained by test collection assessors working on their own genuine information tasks. Ease of assessment and search satisfaction are cross-tabulated with retrieval performance measured by Normalized Discounted Cumulated Gain (nDCG). The results show that when assessors find only a small number of relevant documents, they tend to regard the search results with dissatisfaction and, in addition, obtain lower retrieval performance across all document types involved, except monographic records.
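
As a concrete illustration of the evaluation measure used here, below is a minimal sketch of graded-gain nDCG in the style of Järvelin and Kekäläinen, assuming the four-point relevance scale (0-3) is used directly as gain values with a log2 rank discount; the judged ranking is invented for illustration and is not data from the poster.

    # A minimal sketch of Normalized Discounted Cumulated Gain (nDCG) with
    # graded relevance. Assumptions: a four-point scale (0 = non-relevant ...
    # 3 = highly relevant) used directly as gains, and a log2 discount in the
    # Järvelin-Kekäläinen style (ranks 1 and 2 effectively undiscounted).
    # The judged ranking below is hypothetical, not data from the poster.
    import math

    def dcg(gains):
        # Discounted cumulated gain over gains at ranks 1..n; each gain is
        # divided by log2(rank), clamped to 1 so early ranks are undiscounted.
        return sum(g / max(1.0, math.log2(r)) for r, g in enumerate(gains, start=1))

    def ndcg(gains):
        # Normalize by the DCG of the ideal (descending) ordering of the gains.
        ideal = dcg(sorted(gains, reverse=True))
        return dcg(gains) / ideal if ideal > 0 else 0.0

    # Hypothetical graded judgments for the top five retrieved documents.
    run = [3, 0, 2, 0, 1]
    print(f"nDCG@5 = {ndcg(run):.3f}")  # ~0.833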


Published in

IIiX '10: Proceedings of the Third Symposium on Information Interaction in Context
August 2010, 408 pages
ISBN: 9781450302470
DOI: 10.1145/1840784

      Copyright © 2010 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 21 of 45 submissions (47%)
