ABSTRACT
To recognize emotions with less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features across participants watching the same video clip. To boost performance given limited data, we adopt a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches but also widely used traditional and deep learning methods.
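The abstract does not specify the exact feature set, so the following is only a minimal sketch of one plausible correlation-based feature: for each participant, correlate their normalized signal (PD or SC) with the leave-one-out mean of the other participants watching the same clip. The function name and the leave-one-out design are assumptions for illustration, not the paper's method.

```python
import numpy as np

def correlation_features(signals: np.ndarray) -> np.ndarray:
    """Sketch of a correlation-based feature (assumed, not the paper's exact algorithm).

    signals: array of shape (n_participants, n_samples), one normalized
    physiological signal (e.g. pupil diameter) per participant, all recorded
    while watching the same video clip.
    Returns one Pearson correlation per participant: their signal vs. the
    leave-one-out mean of everyone else's signal.
    """
    feats = []
    for i in range(signals.shape[0]):
        # Mean response of all other participants to the same clip
        others_mean = np.delete(signals, i, axis=0).mean(axis=0)
        # Pearson correlation between this participant and the group
        r = np.corrcoef(signals[i], others_mean)[0, 1]
        feats.append(r)
    return np.array(feats)

# Toy usage: identical responses yield correlations of 1.0
clip_response = np.sin(np.linspace(0, 10, 200))
signals = np.tile(clip_response, (4, 1))
features = correlation_features(signals)  # → array of 1.0s
```

Such a scalar per signal and clip could then feed a shallow classifier for arousal/valence, matching the abstract's point that limited data favors a model without a deep architecture.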