DOI: 10.1145/3340555.3353716
CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition

Published: 14 October 2019

ABSTRACT

To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches, but also widely used traditional and deep learning methods.
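The core idea above — features derived from how strongly one participant's physiological response correlates with other participants' responses to the same clip — can be sketched in a few lines. The abstract does not specify the exact CorrFeat computation, so the snippet below is only an illustrative assumption using mean pairwise Pearson correlation over one modality (e.g. pupil diameter); the function name and feature definition are hypothetical, not the paper's.

```python
# Illustrative sketch (not the paper's exact algorithm): for each
# participant, summarize how well their signal correlates with the
# other participants' signals recorded during the same video clip.
import numpy as np

def correlation_features(signals: np.ndarray) -> np.ndarray:
    """signals: (n_participants, n_samples) array of one modality
    (e.g. pupil diameter) for participants watching the same clip.
    Returns each participant's mean Pearson correlation with the
    other participants."""
    n = signals.shape[0]
    corr = np.corrcoef(signals)        # (n, n) pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)        # exclude self-correlation
    return corr.sum(axis=1) / (n - 1)  # average over the other participants

# Toy usage: three participants sharing a common stimulus-driven component
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 6 * np.pi, 100))
pd_signals = base + 0.3 * rng.standard_normal((3, 100))
feats = correlation_features(pd_signals)
print(feats.shape)  # (3,)
```

A participant whose response tracks the group's response to the stimulus gets a high feature value; an idiosyncratic response gets a low one, which is one plausible way attention-linked signals like PD and SC could feed an arousal/valence classifier.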


Published in
    ICMI '19: 2019 International Conference on Multimodal Interaction
    October 2019
    601 pages
    ISBN: 9781450368605
    DOI: 10.1145/3340555

    Copyright © 2019 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

Overall acceptance rate: 453 of 1,080 submissions, 42%
