DOI: 10.1145/1160633.1160769
Article

Reinforcement learning for declarative optimization-based drama management

Published: 08 May 2006

ABSTRACT

A long-standing challenge in interactive entertainment is the creation of story-based games with dynamically responsive story-lines. Such games are populated by multiple objects and autonomous characters, and must provide a coherent story experience while giving the player freedom of action. To maintain coherence, the game author must provide for modifying the world in reaction to the player's actions, directing agents to act in particular ways (overriding or modulating their autonomy), or causing inanimate objects to reconfigure themselves "behind the player's back".

Declarative optimization-based drama management is one mechanism for allowing the game author to specify a drama manager (DM) to coordinate these modifications, along with a story the DM should aim for. The premise is that the author can easily describe the salient properties of the story while leaving it to the DM to react to the player and direct agent actions. Although promising, early search-based approaches have been shown to scale poorly. Here, we improve upon the state of the art by using reinforcement learning and a novel training paradigm to build an adaptive DM that manages the tradeoff between exploration and story coherence. We present results on two games and compare our performance with other approaches.
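The abstract's core idea, learning a drama-manager policy by reinforcement with an author-supplied story evaluation as the reward, can be pictured with a minimal sketch. This is an illustrative assumption, not the paper's actual algorithm or state encoding: it uses generic tabular Q-learning, a toy two-step story, and made-up names (`step`, `evaluate_story`, `hint_a`/`hint_b`), where states are tuples of plot points seen so far and the author's declarative evaluation supplies the terminal reward.

```python
import random
from collections import defaultdict

# Hedged sketch, NOT the paper's method: tabular Q-learning for a drama
# manager (DM). States are tuples of plot points witnessed so far;
# dm_actions are abstract DM moves; evaluate_story is the author's
# declarative story-quality function, applied once when the story ends.

ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.2  # learning rate, discount, exploration

def train(step, dm_actions, evaluate_story, episodes=5000):
    """Learn Q-values for DM actions from simulated player episodes."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = (), False
        while not done:
            # epsilon-greedy: the exploration / story-coherence tradeoff
            # in miniature
            if random.random() < EPSILON:
                action = random.choice(dm_actions)
            else:
                action = max(dm_actions, key=lambda a: Q[(state, a)])
            next_state, done = step(state, action)
            # Terminal reward is the author's evaluation of the finished
            # story; otherwise bootstrap from the best next DM action.
            if done:
                target = evaluate_story(next_state)
            else:
                target = GAMMA * max(Q[(next_state, a)] for a in dm_actions)
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            state = next_state
    return Q

# Toy two-step story (illustrative only): the DM can hint at plot point
# "a" or "b"; a finished story containing both distinct points scores 1.0.
def step(state, action):
    nxt = state + (action[-1],)      # "hint_a" causes plot point "a"
    return nxt, len(nxt) == 2

def score(story):
    return 1.0 if set(story) == {"a", "b"} else 0.0

Q = train(step, ["hint_a", "hint_b"], score, episodes=3000)
```

After training, the greedy policy does what a coherence-seeking DM should: having seen plot point "a", it prefers to steer toward "b" rather than repeat itself.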

Published in

AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems
May 2006, 1631 pages
ISBN: 1595933034
DOI: 10.1145/1160633

        Copyright © 2006 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Overall Acceptance Rate: 1,155 of 5,036 submissions, 23%
