ABSTRACT
CopyCat is an American Sign Language (ASL) game that uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). The data set is characterized by the disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. It consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the backs of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach using leave-one-out validation: this technique iterates through each child, training on data from four children and testing on the remaining child's data. The user-independent models achieved average word accuracies per child ranging from 73.73% to 91.75%.
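The leave-one-out (leave-one-subject-out) evaluation described above can be sketched as follows. This is a minimal illustration, not the CopyCat system's actual code; the function names (`train`, `evaluate`) and the dictionary-of-samples layout are assumptions for the sake of the example.

```python
def leave_one_out(samples_by_child, train, evaluate):
    """Leave-one-subject-out validation.

    samples_by_child: maps a child ID to that child's signing samples.
    train:            callable building a model from a list of samples.
    evaluate:         callable scoring a model on held-out samples.

    For each child, a model is trained on the pooled samples of all the
    other children and scored on the held-out child's data, mirroring the
    user-independent evaluation described in the abstract.
    """
    accuracies = {}
    for held_out in samples_by_child:
        # Pool the training samples from every child except the held-out one.
        train_data = [sample
                      for child, data in samples_by_child.items()
                      if child != held_out
                      for sample in data]
        model = train(train_data)
        accuracies[held_out] = evaluate(model, samples_by_child[held_out])
    return accuracies
```

With five children, this yields five per-child accuracy figures, whose spread (here, 73.73% to 91.75%) indicates how well the models generalize to an unseen signer.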