DOI: 10.1145/3298689.3347034

Relaxed softmax for PU learning

Published: 10 September 2019

ABSTRACT

In recent years, the softmax model and its fast approximations have become the de facto loss functions for deep neural networks dealing with multi-class prediction. This loss has been extended to language modeling and recommendation, two fields that fall into the framework of learning from Positive and Unlabeled (PU) data.
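For reference, the softmax referred to here models the probability of an item y given a context x through a scoring function; the notation below (scoring function s_θ, vocabulary 𝒱, set of observed positive pairs 𝒟⁺) is ours rather than the paper's:

\[
p_\theta(y \mid x) = \frac{\exp\big(s_\theta(x, y)\big)}{\sum_{y' \in \mathcal{V}} \exp\big(s_\theta(x, y')\big)},
\qquad
\mathcal{L}(\theta) = -\sum_{(x, y) \in \mathcal{D}^{+}} \log p_\theta(y \mid x).
\]

Fast approximations such as sampled softmax or noise-contrastive estimation avoid computing the normalization over the full vocabulary \(\mathcal{V}\) by relying on a small set of sampled negatives.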

In this paper, we highlight the drawbacks of the current family of softmax losses and sampling schemes when applied in a Positive and Unlabeled learning setup. We propose both a Relaxed Softmax loss (RS) and a new negative sampling scheme based on a Boltzmann formulation. We show that the new training objective is better suited to density estimation, item similarity, and next-event prediction, driving performance uplifts over the classical softmax on textual and recommendation datasets.
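As an illustration only, and not the paper's exact formulation, a Boltzmann-style negative sampler draws negatives with probability proportional to exp(score / T): higher-scored (harder) items are sampled more often, and the temperature T interpolates between uniform sampling (large T) and greedy selection of the hardest negatives (small T). A minimal sketch in Python/NumPy, with hypothetical function and parameter names:

import numpy as np

def boltzmann_negative_sampling(scores, num_negatives, temperature=1.0, rng=None):
    # Draw negative item indices with probability proportional to
    # exp(score / temperature) over the candidate catalogue.
    # Larger temperature -> closer to uniform; smaller -> harder negatives.
    rng = rng if rng is not None else np.random.default_rng()
    logits = np.asarray(scores, dtype=np.float64) / temperature
    logits -= logits.max()              # shift scores for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), size=num_negatives, replace=False, p=probs)

# Example: score a small catalogue with the current model (placeholder scores
# here), then sample 5 negatives for one observed (context, item) pair.
item_scores = np.random.randn(10)
negatives = boltzmann_negative_sampling(item_scores, num_negatives=5, temperature=0.5)

Such a sampler can replace uniform or popularity-based negative sampling inside a sampled-softmax-style objective; the Relaxed Softmax loss itself is not reproduced here.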



Published in
RecSys '19: Proceedings of the 13th ACM Conference on Recommender Systems
September 2019, 635 pages
ISBN: 9781450362436
DOI: 10.1145/3298689
Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher
Association for Computing Machinery, New York, NY, United States



          Qualifiers

          • research-article

          Acceptance Rates

RecSys '19 Paper Acceptance Rate: 36 of 189 submissions, 19%. Overall Acceptance Rate: 254 of 1,295 submissions, 20%.
