DOI: 10.1145/3240508.3240618
Research Article

BeautyGAN: Instance-level Facial Makeup Transfer with Deep Generative Adversarial Network

Authors Info & Claims
Published: 15 October 2018

ABSTRACT

Facial makeup transfer aims to translate the makeup style from a given reference makeup face image to a non-makeup one while preserving face identity. Such instance-level transfer is more challenging than conventional domain-level transfer tasks, especially when paired data is unavailable. Makeup style also differs from global styles (e.g., paintings) in that it consists of several local styles/cosmetics, including eye shadow, lipstick, and foundation. Extracting and transferring such local, delicate makeup information is infeasible for existing style transfer methods. We address the issue by incorporating both a global domain-level loss and a local instance-level loss in a dual-input/output generative adversarial network, called BeautyGAN. Specifically, domain-level transfer is ensured by discriminators that distinguish generated images from each domain's real samples, while the instance-level loss is computed as a pixel-level histogram loss on separate local facial regions. We further introduce a perceptual loss and a cycle consistency loss to generate high-quality faces and preserve identity. The overall objective function enables the network to learn instance-level translation through unsupervised adversarial learning. We also build a new makeup dataset consisting of 3,834 high-resolution face images. Extensive experiments show that BeautyGAN generates visually pleasing makeup faces with accurate transfer results. Data and code are available at http://liusi-group.com/projects/BeautyGAN.
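The pixel-level histogram loss described in the abstract can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: `histogram_match` and the boolean region mask are assumptions standing in for the paper's histogram-matching operation and its facial-region segmentation.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source values so their empirical distribution matches reference's."""
    # Unique values, their positions in the flattened source, and their counts.
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative distribution functions of both value sets.
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # Map each source quantile onto the reference value at the same quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

def makeup_region_loss(generated, reference, mask):
    """Histogram loss on one local facial region (e.g. lips), per color channel.

    generated, reference: (H, W, 3) float arrays; mask: (H, W) boolean region.
    """
    loss = 0.0
    for c in range(3):
        gen_region = generated[..., c][mask]
        ref_region = reference[..., c][mask]
        # Target: generated pixels remapped to the reference region's histogram.
        target = histogram_match(gen_region, ref_region)
        loss += np.mean((gen_region - target) ** 2)
    return loss
```

In the paper's formulation this loss is summed over separate regions (lips, eyes, face) so that each cosmetic is matched to its own local color distribution rather than to a single global style statistic.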




Published in

MM '18: Proceedings of the 26th ACM International Conference on Multimedia
October 2018, 2167 pages
ISBN: 9781450356657
DOI: 10.1145/3240508

          Copyright © 2018 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

MM '18 paper acceptance rate: 209 of 757 submissions (28%)
Overall acceptance rate: 995 of 4,171 submissions (24%)

