ABSTRACT
Recommender systems must cope with a highly non-stationary environment, because users' interests change quickly over time. Traditional solutions periodically rebuild their models despite the high computational cost, yet even this does not let them adjust automatically to abrupt changes in trends triggered by timely information. Importantly, the changes in reward distributions caused by a non-stationary environment can themselves be context dependent: when a change is orthogonal to the given context, previously maintained models should be reused for better recommendation prediction.
In this work, we focus on contextual bandit algorithms for making adaptive recommendations. We capitalize on this context-dependent property of reward changes to handle model updates in a challenging non-stationary environment. In particular, we maintain a dynamic ensemble of contextual bandit models, in which each model's reward estimation quality is monitored with respect to the given context and possible environment changes; only the models admissible to the current environment are used for recommendation. We provide a rigorous upper regret bound analysis of the proposed algorithm. Extensive empirical evaluations on a synthetic dataset and three real-world datasets confirm the algorithm's advantage over existing non-stationary solutions that simply create new models whenever an environment change is detected.
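To make the ensemble idea concrete, below is a minimal sketch of a pool of linear contextual bandit models in which only "admissible" models (those whose recent prediction errors stay small) are used for arm selection, and a fresh model is spawned when no maintained model fits the current environment. This is an illustration under our own simplifying assumptions, not the paper's actual algorithm: the `LinearBandit` and `DynamicEnsemble` names, the sliding-window error test, and all thresholds (warm-up of 20 updates, window of 50, error threshold 0.3) are hypothetical stand-ins for the paper's rigorous admissibility test.

```python
import numpy as np

class LinearBandit:
    """Ridge-regression reward model with a LinUCB-style arm score."""
    def __init__(self, d, alpha=0.5, lam=1.0):
        self.A = lam * np.eye(d)   # regularized Gram matrix
        self.b = np.zeros(d)       # reward-weighted context sum
        self.alpha = alpha
        self.n = 0                 # number of updates seen
        self.errors = []           # sliding window of one-step prediction errors

    def theta(self):
        return np.linalg.solve(self.A, self.b)

    def score(self, x):
        # point estimate plus an exploration bonus
        bonus = self.alpha * np.sqrt(x @ np.linalg.solve(self.A, x))
        return x @ self.theta() + bonus

    def update(self, x, r):
        if self.n >= 20:           # skip warm-up, then track prediction error
            self.errors = (self.errors + [abs(x @ self.theta() - r)])[-50:]
        self.n += 1
        self.A += np.outer(x, x)
        self.b += r * x

class DynamicEnsemble:
    """Pool of bandit models; only models with low recent error may act."""
    def __init__(self, d, err_threshold=0.3):
        self.d, self.err_threshold = d, err_threshold
        self.models = [LinearBandit(d)]

    def admissible(self):
        ok = [m for m in self.models
              if len(m.errors) < 10 or np.mean(m.errors) < self.err_threshold]
        if not ok:                 # every model is badly off: environment changed
            ok = [LinearBandit(self.d)]
            self.models += ok
        return ok

    def choose(self, contexts):
        models = self.admissible()
        # pick the arm with the highest score among admissible models
        return int(np.argmax([max(m.score(x) for m in models) for x in contexts]))

    def update(self, x, r):
        for m in self.admissible():
            m.update(x, r)

# Toy piecewise-stationary simulation: user preferences flip at t = 200.
rng = np.random.default_rng(0)
d, K = 5, 10
theta_true = rng.normal(size=d)
theta_true /= np.linalg.norm(theta_true)
ens = DynamicEnsemble(d)
for t in range(400):
    if t == 200:
        theta_true = -theta_true          # abrupt, context-independent change
    contexts = rng.normal(size=(K, d))
    a = ens.choose(contexts)
    reward = contexts[a] @ theta_true + 0.05 * rng.normal()
    ens.update(contexts[a], reward)
```

After the simulated change, the old model's error window degrades, it drops out of the admissible set, and a new model is spawned and takes over; unlike change-detection baselines, a model that stays admissible under the new environment would simply keep being reused.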