The Role of Sound Source Perception in Gestural Sound Description

Abstract
We investigated the gestures people perform to describe sound stimuli during a listening task. Our hypothesis is that gestural response strategies depend on how well the sound source is identified, and specifically on whether the action causing the sound can be identified. To test this hypothesis, we conducted two experiments. In the first experiment, we built two corpora of sounds: the first contains sounds whose causal actions are identifiable, and the second contains sounds for which no causal action could be identified. These corpus properties were validated through a listening test. In the second experiment, participants performed arm and hand gestures while listening to sounds taken from these corpora. Afterward, we conducted interviews in which participants, while watching their own video recordings, verbalized their experience; they were questioned about their perception of the sounds they had heard and about their gestural strategies. We showed that for sounds whose causal action can be identified, participants mainly mimic the action that produced the sound. Conversely, when no action can be associated with the sound, participants trace contours related to its acoustic features. We also found that inter-participant gesture variability is higher for causal sounds than for noncausal sounds. This variability indicates that, in the first case, participants have several ways of producing the same action, whereas in the second case, the sound features tend to make the gestural responses consistent.
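The abstract does not specify how inter-participant gesture variability was quantified, so the following is a minimal, hypothetical sketch of one way such a measure could be computed for gestures recorded as position trajectories. The function names, the fixed resampling length, and the choice of mean pairwise Euclidean distance are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

def resample(traj, n=100):
    """Linearly resample a (T, D) gesture trajectory to n time steps,
    so trajectories of different durations become comparable."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, d])
                     for d in range(traj.shape[1])], axis=1)

def inter_participant_variability(gestures):
    """Mean pairwise distance between all participants' gestures to one
    sound; higher values indicate less consistent gestural responses.

    gestures: list of (T_i, D) position arrays, one per participant.
    """
    normed = [resample(np.asarray(g, dtype=float)) for g in gestures]
    pair_dists = [np.linalg.norm(a - b, axis=1).mean()
                  for i, a in enumerate(normed)
                  for b in normed[i + 1:]]
    return float(np.mean(pair_dists))
```

Under these assumptions, comparing the mean of this measure over a causal corpus with its mean over a noncausal corpus would yield the kind of contrast reported above: higher values for causal sounds would reflect the several equally valid ways of mimicking the same sound-producing action.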