research-article

Adaptive Real-Time Emotion Recognition from Body Movements

Published: 22 December 2015

Abstract

We propose a real-time system that continuously recognizes emotions from body movements. Low-level 3D postural features are combined with high-level kinematic and geometrical features and fed to a Random Forests classifier, either through summarization (statistical values) or aggregation (bag of features). To improve the generalization capability and robustness of the system, a novel semisupervised adaptive algorithm is built on top of the conventional Random Forests classifier. The MoCap UCLIC affective gesture database (labeled with four emotions) was used to train the Random Forests classifier, which achieved an overall recognition rate of 78% under 10-fold cross-validation. Subsequently, the trained classifier was used in a stream-based semisupervised Adaptive Random Forests method for continuous classification of unlabeled Kinect data. The very low update cost of our adaptive classifier makes it highly suitable for data stream applications. Tests performed on publicly available emotion datasets (body gestures and facial expressions) indicate that our new classifier outperforms existing data stream algorithms in terms of both accuracy and computational cost.
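The summarization path described above (collapsing a window of per-frame movement features into fixed-length statistical descriptors before classification) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names and window contents are hypothetical, and the actual system uses a richer set of postural, kinematic, and geometrical features.

```python
import statistics

def summarize_window(frames):
    """Collapse a window of per-frame feature vectors into one
    fixed-length vector of statistics (mean, std, min, max per
    dimension), in the spirit of the 'summarization' strategy.
    `frames` is a list of equal-length numeric feature lists."""
    dims = zip(*frames)  # transpose: one value sequence per feature dimension
    summary = []
    for values in dims:
        summary.extend([
            statistics.mean(values),    # central tendency over the window
            statistics.pstdev(values),  # movement variability
            min(values),
            max(values),
        ])
    return summary

# Hypothetical per-frame features: [head_speed, hand_speed, torso_lean]
window = [
    [0.10, 0.40, 0.02],
    [0.12, 0.55, 0.03],
    [0.11, 0.50, 0.02],
]
vec = summarize_window(window)  # 3 dimensions x 4 statistics = 12 features
```

The resulting fixed-length vector can then be passed to any standard classifier (a Random Forest in the paper), regardless of how many frames the original window contained.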



  • Published in

    ACM Transactions on Interactive Intelligent Systems, Volume 5, Issue 4
    Regular Articles and Special Issue on New Directions in Eye Gaze for Interactive Intelligent Systems (Part 1 of 2)
    January 2016, 118 pages
    ISSN: 2160-6455
    EISSN: 2160-6463
    DOI: 10.1145/2866565

    Copyright © 2015 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    • Published: 22 December 2015
    • Revised: 1 September 2015
    • Accepted: 1 September 2015
    • Received: 1 March 2014
    Published in TiiS Volume 5, Issue 4

