DOI: 10.1145/3018661.3018699

Research article · Public Access · Best Paper

Unbiased Learning-to-Rank with Biased Feedback

Published: 02 February 2017

ABSTRACT

Implicit feedback (e.g., clicks, dwell times, etc.) is an abundant source of data in human-interactive systems. While implicit feedback has many advantages (e.g., it is inexpensive to collect, user centric, and timely), its inherent biases are a key obstacle to its effective use. For example, position bias in search rankings strongly influences how many clicks a result receives, so that directly using click data as a training signal in Learning-to-Rank (LTR) methods yields sub-optimal results. To overcome this bias problem, we present a counterfactual inference framework that provides the theoretical basis for unbiased LTR via Empirical Risk Minimization despite biased data. Using this framework, we derive a Propensity-Weighted Ranking SVM for discriminative learning from implicit feedback, where click models take the role of the propensity estimator. In contrast to most conventional approaches to de-biasing the data using click models, this allows training of ranking functions even in settings where queries do not repeat. Beyond the theoretical support, we show empirically that the proposed learning method is highly effective in dealing with biases, that it is robust to noise and propensity model misspecification, and that it scales efficiently. We also demonstrate the real-world applicability of our approach on an operational search engine, where it substantially improves retrieval performance.
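The core idea the abstract describes is counterfactual (inverse-propensity-scored) estimation: each click is reweighted by the inverse of the probability that the user examined that position, which corrects for position bias in expectation. The following is a minimal illustrative sketch of this weighting under an assumed position-based propensity model p(rank) = (1/rank)^eta; the function names and the specific model are assumptions for illustration, not the paper's code.

```python
def propensity(rank, eta=1.0):
    """Assumed position-based examination propensity: p_k = (1/k)^eta.

    Lower-ranked results are less likely to be examined, so their
    clicks carry larger inverse-propensity weights.
    """
    return (1.0 / rank) ** eta

def ips_rank_estimate(clicked_ranks, eta=1.0):
    """IPS estimate of the sum of ranks of relevant results.

    Each clicked result at rank r contributes r / p(r): clicks at
    deep positions are upweighted to compensate for how rarely those
    positions are examined. In expectation over the examination
    process, this recovers the value computed from full relevance
    labels, which is the sense in which the estimator is unbiased.
    """
    return sum(r / propensity(r, eta) for r in clicked_ranks)

# Example: clicks observed at ranks 1 and 3 with eta = 1.
# Rank 1 contributes 1 / 1 = 1; rank 3 contributes 3 / (1/3) = 9.
print(ips_rank_estimate([1, 3], eta=1.0))  # 10.0
```

The same reweighting applied inside a pairwise hinge loss yields the Propensity-Weighted Ranking SVM the abstract refers to, with a click model supplying the propensity estimates.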


Published in

WSDM '17: Proceedings of the Tenth ACM International Conference on Web Search and Data Mining
February 2017 · 868 pages
ISBN: 9781450346757
DOI: 10.1145/3018661

Copyright © 2017 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

WSDM '17 paper acceptance rate: 80 of 505 submissions (16%). Overall acceptance rate: 498 of 2,863 submissions (17%).
