ABSTRACT
Most recommender systems assume user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy, and this noise limits the measurable predictive power of any recommender system. We propose an information-theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating scales for real-world datasets. We then estimate how the amount of information that predictions give users depends on the scale on which ratings are collected. Our findings suggest a tradeoff in rating scale granularity: while previous research indicates that coarse scales (such as thumbs up / thumbs down) take less time, we find that ratings on these scales provide less predictive value to users. We introduce a new measure, preference bits per second, to quantitatively reconcile this tradeoff.
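The framework's central quantity can be read as the mutual information between a user's underlying preference and the rating actually recorded; dividing by the time a rating takes yields preference bits per second. A minimal sketch of that computation follows — the joint distribution, noise level, and timing figure are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mutual_information(joint):
    """Plug-in estimate of I(preference; rating) in bits,
    given a joint probability table P(preference, rating)."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()                      # normalize to a distribution
    px = joint.sum(axis=1, keepdims=True)            # marginal over preferences
    py = joint.sum(axis=0, keepdims=True)            # marginal over ratings
    nz = joint > 0                                   # skip zero cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def preference_bits_per_second(joint, seconds_per_rating):
    """Preference information conveyed per second of rating effort."""
    return mutual_information(joint) / seconds_per_rating

# Hypothetical thumbs-up/down scale: preferences are uniform and the
# recorded rating matches the true preference 90% of the time.
noisy_thumbs = [[0.45, 0.05],
                [0.05, 0.45]]
bits = mutual_information(noisy_thumbs)              # about 0.53 bits per rating
rate = preference_bits_per_second(noisy_thumbs, 2.0) # assuming 2 s per rating
```

A noiseless binary rating would carry exactly 1 bit; the 10% noise above cuts that roughly in half, which is the kind of effect the framework is built to measure against rating time.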
Index Terms
- How many bits per rating?