ABSTRACT
In this paper, we present an intelligent facial emotion recognition system with real-time face tracking for a humanoid robot. The system detects facial actions and emotions from images with up to 60 degrees of pose variation. We employ the Active Appearance Model to perform real-time face tracking and to extract both texture and geometric representations of images. The POSIT algorithm is also used to estimate head rotations. The extracted texture and shape features are then used to detect 18 facial actions and seven basic emotions. The overall system is integrated with a humanoid robot platform to extend its vision APIs. The system proved able to handle challenging facial emotion recognition tasks under various pose variations.
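The head-rotation step mentioned above can be illustrated with a short numpy sketch of the POSIT algorithm (DeMenthon and Davis' "pose from orthography and scaling with iterations"). This is a generic sketch, not the authors' implementation; the model points, focal length, and pose in the demo at the end are hypothetical synthetic values chosen only to exercise the function.

```python
import numpy as np

def posit(object_points, image_points, focal_length, n_iter=20):
    """Estimate the pose (R, t) of a rigid, non-coplanar 3D model from a
    single image with the POSIT algorithm (DeMenthon & Davis, 1995).

    object_points : (N, 3) model coordinates; row 0 is the reference point.
    image_points  : (N, 2) pixel coordinates measured from the principal
                    point (image centre already subtracted).
    focal_length  : focal length in pixels.
    """
    obj = np.asarray(object_points, dtype=float)
    img = np.asarray(image_points, dtype=float)
    A = obj - obj[0]              # vectors M0->Mi (row 0 is zero)
    B = np.linalg.pinv(A)         # (3, N) object pseudo-inverse
    eps = np.zeros(len(obj))      # perspective correction terms
    for _ in range(n_iter):
        # POS step: solve the scaled-orthographic projection equations
        # with the current corrections applied to the image coordinates.
        xp = img[:, 0] * (1.0 + eps) - img[0, 0]
        yp = img[:, 1] * (1.0 + eps) - img[0, 1]
        I = B @ xp
        J = B @ yp
        s1, s2 = np.linalg.norm(I), np.linalg.norm(J)
        s = (s1 + s2) / 2.0       # scale = focal_length / Z0
        i_vec, j_vec = I / s1, J / s2
        k_vec = np.cross(i_vec, j_vec)
        Z0 = focal_length / s     # depth of the reference point
        eps = A @ k_vec / Z0      # refined corrections for the next pass
    R = np.vstack([i_vec, j_vec, k_vec])          # rotation (rows i, j, k)
    t = np.array([img[0, 0], img[0, 1], focal_length]) / s  # translation of M0
    return R, t

# Synthetic check (hypothetical values): a small non-coplanar model seen
# by a camera with f = 800 px, identity rotation, translation (1, -1, 10).
model = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0],
                  [0, 0, 2], [2, 2, 2]], dtype=float)
cam = model + np.array([1.0, -1.0, 10.0])          # camera-frame points
pix = 800.0 * cam[:, :2] / cam[:, 2:3]             # pinhole projection
R, t = posit(model, pix, 800.0)
```

In a face-tracking pipeline of this kind, `object_points` would be a rigid 3D face model and `image_points` the tracked landmark positions, from which the recovered `R` gives the head rotation.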
Index Terms
- Shape- and texture-based facial action and emotion recognition