ABSTRACT
Virtual presenters have a wide range of possible applications, such as teachers, news presenters, and guides in virtual environments, easing interaction with computers. The animation of such virtual characters is usually controlled by an animation script that describes every movement to be performed. Writing a convincing animation script is a demanding and cumbersome task. To ease the animation process, we propose the additional use of a behavior model learned from a real presenter. This article presents the implementation of a 3D virtual news presenter that implicitly follows both a behavior model and a script describing the text to be uttered. The behavior model consists of a set of behavioral rules that represent common non-verbal facial movement patterns displayed by a real presenter whose TV appearances were analyzed.
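The pipeline described above could be sketched as a rule-based annotator that tags an utterance script with non-verbal facial actions. This is a minimal illustrative sketch, not the authors' implementation: all names (`Rule`, `annotate`) and both rules are invented placeholders, whereas in the article the rules are derived from analysis of a real presenter's TV appearances.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    """A hypothetical behavioral rule: trigger a facial action on a text cue."""
    action: str    # facial action to trigger, e.g. "eyebrow_raise"
    pattern: str   # regex over a token of the utterance text

def annotate(text, rules):
    """Return (token, [actions]) pairs: each word of the script paired with
    the facial actions whose rule patterns it matches."""
    annotated = []
    for token in text.split():
        actions = [r.action for r in rules if re.search(r.pattern, token)]
        annotated.append((token, actions))
    return annotated

# Two placeholder rules, loosely inspired by known conversational signals
# (eyebrow raises on emphasis, blinks near phrase boundaries):
rules = [
    Rule("eyebrow_raise", r"^[A-Z]"),   # capitalized word as an emphasis cue
    Rule("blink", r"[.,;!?]$"),         # blink at punctuation boundaries
]

script = annotate("Good evening, here is the news.", rules)
```

The annotated script would then drive the facial animation engine, which schedules each action alongside the synthesized speech.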
Index Terms
- Virtual presenter