DOI: 10.1145/3176349.3176903

WEPIR 2018: Workshop on Evaluation of Personalisation in Information Retrieval

Published: 01 March 2018

ABSTRACT

The purpose of the WEPIR 2018 workshop is to bring together researchers from different backgrounds who are interested in advancing the evaluation of personalisation in information retrieval. The workshop focuses on developing a common understanding of the challenges, requirements, and practical limitations of meaningful evaluation of personalisation in information retrieval. The planned outcome of the workshop is a proposal of methodologies to support the evaluation of personalised information retrieval, both from the perspective of the user experience in interactive search settings and from the perspective of user models for personalised information retrieval and their algorithmic incorporation into the search process.


Published in

CHIIR '18: Proceedings of the 2018 Conference on Human Information Interaction & Retrieval
March 2018
402 pages
ISBN: 9781450349253
DOI: 10.1145/3176349

                Copyright © 2018 ACM

                Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

                Publisher

                Association for Computing Machinery

                New York, NY, United States

                Publication History

                • Published: 1 March 2018


                Qualifiers

                • research-article

                Acceptance Rates

CHIIR '18 Paper Acceptance Rate: 22 of 57 submissions, 39%
Overall Acceptance Rate: 55 of 163 submissions, 34%
