
Dynamic Ensemble of Contextual Bandits to Satisfy Users' Changing Interests

Published: 13 May 2019
DOI: 10.1145/3308558.3313727

ABSTRACT

Recommender systems must handle a highly non-stationary environment, because users' interests change quickly over time. Traditional solutions periodically rebuild their models despite the high computational cost, yet even this does not let them adjust automatically to abrupt changes in trends driven by timely information. It is important to note that the changes in reward distributions caused by a non-stationary environment can also be context dependent: when a change is orthogonal to the given context, previously maintained models should be reused for better recommendation prediction.

In this work, we focus on contextual bandit algorithms for making adaptive recommendations. We capitalize on the unique context-dependent property of reward changes to tackle the challenge of updating models in a non-stationary environment. In particular, we maintain a dynamic ensemble of contextual bandit models, in which each model's reward estimation quality is monitored with respect to the given context and possible environment changes, and only the models admissible to the current environment are used for recommendation. We provide a rigorous upper regret bound analysis of the proposed algorithm. Extensive empirical evaluations on a synthetic dataset and three real-world datasets confirm the algorithm's advantage over existing non-stationary solutions, which simply create new models whenever an environment change is detected.
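To make the ensemble idea concrete, here is a minimal Python sketch of the mechanism the abstract describes, assuming LinUCB-style ridge-regression base bandits and a simple moving-average prediction-error test for admissibility. The names (LinUCBModel, DynamicEnsemble), the tolerance tol, and the error window are illustrative assumptions, not the paper's exact procedure or its theoretically grounded admissibility test.

```python
import numpy as np

class LinUCBModel:
    """Ridge-regression reward estimator with a UCB exploration width
    (LinUCB-style; an assumed base learner, not the paper's exact model)."""
    def __init__(self, d, lam=1.0, alpha=0.5):
        self.A = lam * np.eye(d)      # regularized design matrix
        self.b = np.zeros(d)          # accumulated reward-weighted contexts
        self.alpha = alpha            # exploration weight
        self.recent_errors = []       # |predicted - observed| on recent rounds

    def predict(self, x):
        theta = np.linalg.solve(self.A, self.b)
        width = np.sqrt(x @ np.linalg.solve(self.A, x))
        return theta @ x, width

    def update(self, x, reward):
        mean, _ = self.predict(x)
        self.recent_errors = (self.recent_errors + [abs(mean - reward)])[-50:]
        self.A += np.outer(x, x)
        self.b += reward * x

class DynamicEnsemble:
    """Pool of bandit models. A model is 'admissible' if its recent
    prediction error stays within tolerance; inadmissible models are
    kept in the pool so they can be reused if the environment shifts
    back to a regime they fit."""
    def __init__(self, d, tol=0.3):
        self.d, self.tol = d, tol
        self.pool = [LinUCBModel(d)]

    def _admissible(self):
        ok = [m for m in self.pool
              if not m.recent_errors or np.mean(m.recent_errors) <= self.tol]
        if not ok:                    # abrupt change: no model fits, spawn one
            fresh = LinUCBModel(self.d)
            self.pool.append(fresh)
            ok = [fresh]
        return ok

    def choose(self, arm_contexts):
        """Pick the arm with the highest UCB score over admissible models."""
        best = (-np.inf, None, None)
        for m in self._admissible():
            for i, x in enumerate(arm_contexts):
                mean, width = m.predict(x)
                score = mean + m.alpha * width
                if score > best[0]:
                    best = (score, i, m)
        return best[1], best[2]       # chosen arm index, model that chose it

# Toy interaction loop: 5 arms with 4-dimensional contexts.
rng = np.random.default_rng(0)
ens = DynamicEnsemble(d=4)
theta_true = rng.normal(size=4)
for t in range(200):
    arms = rng.normal(size=(5, 4))
    arm, model = ens.choose(arms)
    reward = arms[arm] @ theta_true + 0.1 * rng.normal()
    model.update(arms[arm], reward)
```

Note that in this sketch inadmissible models stay in the pool rather than being deleted, so a model trained under an earlier environment can be picked up again if that environment recurs; this is the reuse property the abstract contrasts with change-detection baselines that always start a fresh model.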


Published in

WWW '19: The World Wide Web Conference
May 2019, 3620 pages
ISBN: 9781450366748
DOI (proceedings): 10.1145/3308558

    Copyright © 2019 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
