Research article
DOI: 10.1145/2505515.2505653

GAPfm: optimal top-n recommendations for graded relevance domains

Published: 27 October 2013

ABSTRACT

Recommender systems are frequently used in domains in which users express their preferences in the form of graded judgments, such as ratings. Current ranking techniques are based on one of two sub-optimal approaches: either they optimize for a binary metric such as Average Precision, which discards information on relevance levels, or they optimize for Normalized Discounted Cumulative Gain (NDCG), which ignores the dependence of an item's contribution on the relevance of more highly ranked items. We address the shortcomings of existing approaches by proposing GAPfm, the Graded Average Precision factor model, which is a latent factor model for top-N recommendation in domains with graded relevance data. The model optimizes the Graded Average Precision metric that has been proposed recently for assessing the quality of ranked results lists for graded relevance. GAPfm's advantages are twofold: it maintains full information about graded relevance and also addresses the limitations of models that optimize NDCG. Experimental results show that GAPfm achieves substantial improvements on the top-N recommendation task, compared to several state-of-the-art approaches.
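The Graded Average Precision metric that GAPfm optimizes was introduced by Robertson, Kanoulas and Yilmaz (SIGIR 2010): each relevance grade h is assigned a probability g_h that a user's relevance threshold equals that grade, and average precision is taken in expectation over that threshold. As a rough illustration of how graded relevance enters the metric, the sketch below computes GAP for a single ranked list. It is a simplified reading of the metric, not the authors' implementation; the function name, the uniform threshold probabilities in the toy example, and the restriction of the normalizer to the ranked list itself are illustrative assumptions.

```python
# Illustrative sketch of Graded Average Precision (GAP), the metric GAPfm optimizes.
# Grades are integers 0..c; g[h] is the probability that a user's relevance
# threshold equals grade h (h = 1..c), with the g[h] summing to 1.
# This is a simplified reading of the GAP definition, not the authors' code.

def graded_average_precision(grades_by_rank, g):
    """grades_by_rank: item grades listed from rank 1 downward (e.g. [2, 0, 1]).
    g: dict mapping grade h (1..c) to the threshold probability g_h."""

    # P(item with grade r is relevant) = sum of g_h for thresholds h <= r.
    def p_rel(r):
        return sum(g[h] for h in range(1, r + 1))

    # Normalizer: expected number of relevant items. Here it is taken over the
    # ranked list only, which is a simplification made for this sketch.
    expected_relevant = sum(p_rel(r) for r in grades_by_rank)
    if expected_relevant == 0:
        return 0.0

    total = 0.0
    for n, r_n in enumerate(grades_by_rank, start=1):  # rank positions 1, 2, ...
        if r_n == 0:
            continue  # grade 0 is non-relevant under every threshold
        # Expected precision-style credit at rank n: each item m ranked at or
        # above n contributes sum over h <= min(r_m, r_n) of g_h.
        credit = sum(p_rel(min(r_m, r_n)) for r_m in grades_by_rank[:n])
        total += credit / n

    return total / expected_relevant


# Toy check with illustrative values: on a 0..2 grade scale with uniform
# threshold probabilities, a perfect ranking scores 1.0 and a reversed one less.
print(graded_average_precision([2, 1, 0], {1: 0.5, 2: 0.5}))  # 1.0
print(graded_average_precision([0, 1, 2], {1: 0.5, 2: 0.5}))  # 0.5
```

Because the credit at each rank depends on min(r_m, r_n), the contribution of an item depends on the grades of the items ranked above it, which is the dependence the abstract notes NDCG ignores.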


Published in

CIKM '13: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management
October 2013, 2612 pages
ISBN: 9781450322638
DOI: 10.1145/2505515

Copyright © 2013 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



        Acceptance Rates

CIKM '13 paper acceptance rate: 143 of 848 submissions, 17%
Overall CIKM acceptance rate: 1,861 of 8,427 submissions, 22%
