Abstract
A system for musical accompaniment is presented in which a computer-driven orchestra follows and learns from a soloist in a concerto-like setting. The system is decomposed into three modules: the first computes a real-time score match using a hidden Markov model; the second generates the output audio by phase-vocoding a preexisting audio recording; the third links these two by predicting the future timing evolution with a Kalman-filter-like model. Several examples are presented showing the system in action in diverse musical settings. Connections with machine learning are highlighted, exposing current weaknesses and suggesting possible new directions.
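The third module's role can be illustrated with a minimal sketch. This is not the paper's actual model; it is a generic Kalman-filter-like predictor, with hypothetical class and parameter names, that tracks two state variables (the time of the last note onset and the local tempo in seconds per beat) and updates them from each onset reported by the score matcher:

```python
# Illustrative sketch only (not the article's model): a Kalman-filter-like
# predictor of note-onset timing. State = [last onset time, local tempo].
# All names and noise values here are hypothetical.

class TimingPredictor:
    def __init__(self, tempo=0.5, var_onset=0.01, var_tempo=0.001, var_obs=0.005):
        self.tempo = tempo          # seconds per beat (state)
        self.last_onset = 0.0       # seconds (state)
        self.P = [[var_onset, 0.0],
                  [0.0, var_tempo]]  # state covariance
        self.var_obs = var_obs       # onset-detection noise variance

    def predict(self, beats):
        """Predicted clock time of a note `beats` after the last one."""
        return self.last_onset + beats * self.tempo

    def update(self, observed_onset, beats):
        """Fold in a detected onset; returns the smoothed onset estimate."""
        # Prediction step: linear dynamics t' = t + beats * tempo.
        pred = self.predict(beats)
        P = self.P
        # Propagate covariance through F = [[1, beats], [0, 1]]
        # (process noise omitted for brevity).
        P00 = P[0][0] + beats * (P[1][0] + P[0][1]) + beats * beats * P[1][1]
        P01 = P[0][1] + beats * P[1][1]
        P11 = P[1][1]
        # Kalman gain for a scalar observation of the onset time.
        S = P00 + self.var_obs
        K0, K1 = P00 / S, P01 / S
        resid = observed_onset - pred
        self.last_onset = pred + K0 * resid
        self.tempo += K1 * resid
        self.P = [[(1 - K0) * P00, (1 - K0) * P01],
                  [P01 - K1 * P00, P11 - K1 * P01]]
        return self.last_onset
```

Because tempo is part of the state, a soloist who slows down pulls the tempo estimate along, so the accompaniment's predictions of *future* onsets adapt rather than merely tracking past ones.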