ABSTRACT
In situated human-robot dialogue, although humans and robots are co-present in a shared environment, their capabilities for perceiving that environment differ significantly, so their representations of the shared world are misaligned. For humans and robots to communicate successfully through language, they must mediate these differences and establish common ground. To address this issue, this paper describes a dialogue system that aims to mediate a shared perceptual basis during human-robot dialogue. In particular, we present an empirical study that examines the role of the robot's collaborative effort and of natural language processing performance in dialogue grounding. Our results indicate that in situated human-robot dialogue, low collaborative effort from the robot may lead its human partner to believe that common ground has been established, even when such beliefs do not reflect true mutual understanding. To support truly grounded dialogue, the robot should make extra effort to make its partner aware of its internal representation of the shared world.