DOI: 10.1145/1168987.1169002
Article

American sign language recognition in game development for deaf children

Published: 23 October 2006

ABSTRACT

CopyCat is an American Sign Language (ASL) game that uses gesture recognition technology to help young deaf children practice their ASL skills. We present a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). The data set is characterized by the disfluencies inherent in continuous signing, varied user characteristics such as clothing and skin tone, and illumination changes in the classroom. It consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking; the children wear small colored gloves with wireless accelerometers mounted on the backs of their wrists. The hand shape information is combined with the accelerometer data to train hidden Markov models for recognition. We evaluated our approach with leave-one-out validation: this technique iterates through each child, training on data from four children and testing on the remaining child's data. The user-independent models achieved average word accuracies per child ranging from 73.73% to 91.75%.
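The leave-one-child-out protocol described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-child data layout and the `train`/`recognize` callables are hypothetical stand-ins, and the word-accuracy metric is assumed to be the (N − S − D − I)/N measure conventionally reported for HMM recognizers (N reference words; S, D, I substitutions, deletions, and insertions from a minimum-edit-distance alignment).

```python
def word_accuracy(ref, hyp):
    """Assumed metric: (N - S - D - I) / N, with S, D, I taken from a
    minimum-edit-distance alignment of hypothesis words against
    reference words."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = (total errors, S, D, I) aligning ref[:i] with hyp[:j]
    dp = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = (0, 0, 0, 0)
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            best = None
            if i > 0 and j > 0:  # match or substitution
                e, s, d, ins = dp[i - 1][j - 1]
                sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                best = (e + sub, s + sub, d, ins)
            if i > 0:  # deletion: a reference word was missed
                e, s, d, ins = dp[i - 1][j]
                cand = (e + 1, s, d + 1, ins)
                if best is None or cand[0] < best[0]:
                    best = cand
            if j > 0:  # insertion: a spurious word was recognized
                e, s, d, ins = dp[i][j - 1]
                cand = (e + 1, s, d, ins + 1)
                if best is None or cand[0] < best[0]:
                    best = cand
            dp[i][j] = best
    _, s, d, ins = dp[n][m]
    return (n - s - d - ins) / n


def leave_one_child_out(data, train, recognize):
    """data: {child_id: [(features, reference_words), ...]} (hypothetical
    layout). For each child, train on the other children's samples and
    report that child's mean word accuracy."""
    results = {}
    for held_out in data:
        train_set = [s for c, samples in data.items()
                     if c != held_out for s in samples]
        model = train(train_set)  # e.g. HMM training on 4 children
        accs = [word_accuracy(ref, recognize(model, feats))
                for feats, ref in data[held_out]]
        results[held_out] = sum(accs) / len(accs)
    return results
```

With five children this loop yields five per-child accuracies, matching the per-child range reported in the abstract. Note that word accuracy can be negative when insertions dominate, which is expected behavior for this metric.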



Published in
Assets '06: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility
October 2006, 316 pages
ISBN: 1595932909
DOI: 10.1145/1168987
Copyright © 2006 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Overall acceptance rate: 436 of 1,556 submissions, 28%
