- 1 Bolt, R.A. Put that there: Voice and gesture at the graphics interface. ACM Computer Graphics 14, 3 (1980), 262-270.
- 2 Cassell, J., Pelachaud, C., Badler, N., et al. Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. In Computer Graphics, Annual Conference Series. ACM Press, NY, 1994, 413-420.
- 3 Cohen, P., Johnston, M., McGee, D., et al. Quickset: Multimodal interaction for distributed applications. In Proceedings of the Fifth ACM International Multimedia Conference (New York, NY). ACM Press, NY, 1997, 31-40.
- 4 Kendon, A. Gesticulation and speech: Two aspects of the process of utterance. In The Relationship of Verbal and Nonverbal Communication, M. Key, Ed. Mouton, The Hague, 1980, 207-227.
- 5 Koons, D.B., Sparrell, C.J. and Thorisson, K.R. Integrating simultaneous input from speech, gaze, and hand gestures. In Intelligent Multimedia Interfaces, M. Maybury, Ed. MIT Press, Menlo Park, CA, 1993, 257-276.
- 6 McNeill, D. Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press, Chicago, IL, 1992.
- 7 Naughton, K. Spontaneous gesture and sign: A study of ASL signs co-occurring with speech. In Proceedings of the Workshop on the Integration of Gesture in Language & Speech (Oct. 7-8, Newark and Wilmington, DE), L. Messing, Ed. University of Delaware, 1996, 125-134.
- 8 Neal, J.G. and Shapiro, S.C. Intelligent multi-media interface technology. In Intelligent User Interfaces, J.W. Sullivan and S.W. Tyler, Eds. ACM, NY, 1991, 11-43.
- 9 Oviatt, S.L. Mutual disambiguation of recognition errors in a multimodal architecture. In Proceedings of the Conference on Human Factors in Computing Systems CHI'99 (May 18-20, Pittsburgh, PA). ACM Press, NY, 1999, 576-583.
- 10 Oviatt, S.L. Multimodal interactive maps: Designing for human performance. Human-Computer Interaction 12 (1997), 93-129.
- 11 Oviatt, S.L. and Kuhn, K. Referential features and linguistic indirection in multimodal language. In Proceedings of the International Conference on Spoken Language Processing. ASSTA Inc., Sydney, 2339-2342.
- 12 Oviatt, S.L., DeAngeli, A. and Kuhn, K. Integration and synchronization of input modes during multimodal human-computer interaction. In Proceedings of the Conference on Human Factors in Computing Systems CHI'97 (March 22-27, Atlanta, GA). ACM Press, NY, 1997, 415-422.