ABSTRACT
We present a context-adaptive approach to multimodal interaction for use in cognitive technical systems, so-called Companion Systems. Such systems exhibit the properties of multimodality, individuality, adaptability, availability, cooperativeness, and trustworthiness. These characteristics define a new type of interactive system that is not only practical and efficient to operate but also agreeable, hence the term "companion". Companion technology has to consider the entire situation of the user, the machine, and the environment. The presented prototype demonstrates a system that assists the user in wiring the components of a home cinema system. The user interface for this task is not predefined but built on the fly by dedicated fission and fusion components, thereby adapting the system's multimodal output and input capabilities to the user and the environment.
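To make the fission step concrete, the following is a minimal, hypothetical sketch of how an adaptive fission component might choose output modalities from the current context. The `Context` fields, thresholds, and modality names are illustrative assumptions, not the prototype's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical snapshot of the user/environment situation."""
    ambient_noise_db: float   # measured room noise level
    screen_available: bool    # a display is reachable from the user's position
    user_is_looking: bool     # gaze is directed at the display

def select_output_modalities(ctx: Context) -> list[str]:
    """Pick output modalities suited to the current context (illustrative only)."""
    modalities = []
    if ctx.screen_available and ctx.user_is_looking:
        modalities.append("graphics")
    if ctx.ambient_noise_db < 60.0:   # assumed threshold: speech only in quiet rooms
        modalities.append("speech")
    if not modalities:                # fall back to speech at raised volume
        modalities.append("speech")
    return modalities

# Quiet room, user watching the screen: combine graphics and speech.
print(select_output_modalities(Context(45.0, True, True)))
# Noisy room, no usable display: fall back to speech alone.
print(select_output_modalities(Context(80.0, False, False)))
```

A fusion component would perform the inverse mapping, merging evidence from the active input modalities (e.g. speech and pointing gestures) into a single interpreted user action.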
Index Terms
- Companion technology for multimodal interaction