Advances in Human-Computer Interaction
Volume 2015, Article ID 850474, 13 pages
http://dx.doi.org/10.1155/2015/850474
Research Article

CaRo 2.0: An Interactive System for Expressive Music Rendering

Department of Information Engineering, University of Padova, Via Gradenigo 6/A, 35131 Padova, Italy

Received 6 August 2014; Revised 14 December 2014; Accepted 27 December 2014

Academic Editor: Francesco Bellotti

Copyright © 2015 Sergio Canazza et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
