DOI: 10.5555/2615731.2616112

Demonstration

Shape and texture based facial action and emotion recognition

Published: 05 May 2014

ABSTRACT

In this paper, we present an intelligent facial emotion recognition system with real-time face tracking for a humanoid robot. The system detects facial actions and emotions from images with up to 60 degrees of pose variation. We employ the Active Appearance Model to perform real-time face tracking and to extract both texture and geometric representations of images. The POSIT algorithm is also used to identify head rotations. The extracted texture and shape features are employed to detect 18 facial actions and seven basic emotions. The overall system is integrated with a humanoid robot platform to extend its vision APIs. The system proved able to handle challenging facial emotion recognition tasks under various pose variations.
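
The abstract gives only a high-level description of the pipeline (landmark tracking, head-pose estimation, fused shape and texture features, and a classifier over facial actions and emotions), so a brief sketch of how such a pipeline is commonly wired together may help. This is a minimal illustration only: it assumes OpenCV's solvePnP in place of the POSIT step, a generic 3D face model, and a linear SVM in place of the paper's unspecified classifier; every function and variable name below is hypothetical and not the authors' API.

import numpy as np
import cv2
from sklearn.svm import SVC

# Seven basic emotions targeted by the system described in the abstract.
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def estimate_yaw(landmarks_2d, model_points_3d, camera_matrix):
    """Approximate head rotation from tracked 2D landmarks and a generic 3D
    face model. cv2.solvePnP stands in here for the POSIT step named in the
    abstract."""
    ok, rvec, _ = cv2.solvePnP(model_points_3d.astype(np.float64),
                               landmarks_2d.astype(np.float64),
                               camera_matrix, None)
    rot, _ = cv2.Rodrigues(rvec)
    # Yaw (rotation about the vertical axis) is the pose angle the system
    # claims to tolerate up to roughly 60 degrees.
    return np.degrees(np.arctan2(-rot[2, 0],
                                 np.hypot(rot[0, 0], rot[1, 0])))

def fuse_features(landmarks_2d, face_patch):
    """Concatenate geometric (shape) and appearance (texture) descriptors,
    mirroring the shape + texture fusion the abstract describes."""
    shape_feat = (landmarks_2d - landmarks_2d.mean(axis=0)).ravel()
    texture_feat = cv2.resize(face_patch, (32, 32)).ravel() / 255.0
    return np.concatenate([shape_feat, texture_feat])

# Training on labelled frames (e.g. CK+-style data) and classifying a new
# frame would then look roughly like:
#   clf = SVC(kernel="linear").fit(train_features, train_labels)
#   label = clf.predict([fuse_features(landmarks, patch)])[0]
#   print(EMOTIONS[label])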

References

  1. Zhang, L., Jiang, M., Farid, D., and Hossain, A.M. 2013. Intelligent Facial Emotion Recognition and Semantic-based Topic Detection for a Humanoid Robot. Expert Systems with Applications, Vol. 40, Issue 13, 5160--5168.
  2. Zhang, L., Gillies, M., and Barnden, J.A. 2008. EMMA: an Automated Intelligent Actor in E-drama. In Proceedings of the International Conference on Intelligent User Interfaces, Canary Islands, Spain, pp. 409--412.
  3. Zhang, L. 2013. Contextual and Active Learning-based Affect-sensing from Virtual Drama Improvisation. ACM Transactions on Speech and Language Processing (TSLP), Vol. 9, Issue 4, Article No. 8.
  4. Ekman, P., Friesen, W.V., and Hager, J.C. 2002. Facial Action Coding System. A Human Face.
  5. Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. 2010. The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. In Proceedings of CVPR4HB.
  6. Cootes, T.F., Edwards, G.J., and Taylor, C.J. 1999. Comparing Active Shape Models with Active Appearance Models. In Proceedings of the British Machine Vision Conference, Vol. 1, 173--182.
  7. DeMenthon, D. and Davis, L.S. 1995. Model-Based Object Pose in 25 Lines of Code. IJCV, 15, 123--141.
  8. Farid, D., Zhang, L., Hossain, A.M., Rahman, C.M., Strachan, R., Sexton, G., and Dahal, K. 2013. An Adaptive Ensemble Classifier for Mining Concept-Drifting Data Streams. Expert Systems with Applications, Vol. 40, Issue 15, 5895--5906.

Published in

AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems
May 2014
1774 pages
ISBN: 9781450327381

Publisher

International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC

Publication History

Published: 5 May 2014

Qualifiers

Demonstration

Acceptance Rates

AAMAS '14 paper acceptance rate: 169 of 709 submissions, 24%. Overall acceptance rate: 1,155 of 5,036 submissions, 23%.