DOI: 10.1145/2663204.2663241 · research-article · ICMI conference proceedings

Managing Human-Robot Engagement with Forecasts and... um... Hesitations

Published: 12 November 2014

ABSTRACT

We explore methods for managing conversational engagement in open-world, physically situated dialog systems. We investigate a self-supervised methodology for constructing forecasting models that aim to anticipate when participants are about to terminate their interactions with a situated system. We study how these models can be leveraged to guide a disengagement policy that uses linguistic hesitation actions, such as filled and non-filled pauses, when uncertainty about the continuation of engagement arises. The hesitations allow for additional time for sensing and inference, and convey the system's uncertainty. We report results from a study of the proposed approach with a directions-giving robot deployed in the wild.
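The decision logic the abstract describes — forecast whether the participant is about to disengage, hesitate when uncertain, and commit only when confident — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold values, the `Action` names, and the idea of a single scalar disengagement probability are all assumptions made for clarity.

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "continue"    # keep the interaction going
    HESITATE = "hesitate"    # emit a filled pause ("um...") to buy sensing time
    DISENGAGE = "disengage"  # wrap up: the participant appears to be leaving

def choose_action(p_disengage: float,
                  low: float = 0.3,
                  high: float = 0.8) -> Action:
    """Map a forecast probability that the participant will terminate
    the interaction onto a dialog action.

    The thresholds here are hypothetical placeholders; the paper does
    not publish specific values.
    """
    if p_disengage >= high:   # confident the participant is leaving
        return Action.DISENGAGE
    if p_disengage >= low:    # uncertain band: hesitate and keep sensing
        return Action.HESITATE
    return Action.CONTINUE    # confident engagement continues
```

The hesitation branch is the key design point: rather than forcing an immediate continue/disengage decision under uncertainty, the system produces a linguistic hesitation, which both buys additional time for sensing and inference and conveys its uncertainty to the participant.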


Published in:

ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction
November 2014, 558 pages
ISBN: 9781450328852
DOI: 10.1145/2663204

      Copyright © 2014 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ICMI '14 paper acceptance rate: 51 of 127 submissions (40%). Overall acceptance rate: 453 of 1,080 submissions (42%).
