
A Cinematic Spatial Sound Display for Panorama Video Applications

Published online by Cambridge University Press: 25 October 2010

Jonas Braasch
Affiliation:
CA3RL, Graduate Program in Acoustics, School of Architecture, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA
Johannes Goebel
Affiliation:
Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC), Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA
Todd Vos
Affiliation:
Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC), Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA

Abstract

This paper describes a new sound spatialisation system which is an integral part of Rensselaer Polytechnic Institute’s new Experimental Media and Performing Arts Center (EMPAC). The Cinematic Spatial Sound Display (CSSD) was originally conceived for interactive panorama video installations, but the architecture goes beyond this particular application. The CSSD is characterised by its scalability to various loudspeaker configurations. It spatialises sound from dry sound files or live sources using control data that describe the spatial scenes. The time lines for source positions and other experimental parameters can be stored and edited in the CSSD, and the system can also process live user input to control selected parameters. The CSSD is more than just a sound positioning tool, and the underlying Virtual Microphone Control (ViMiC) technology was developed to support artists in designing new forms of spatial imagery. The software enables the user to create computer-generated rooms with virtual microphones and sound sources. The algorithm uses physical laws to auralise acoustic scenes – allowing realistic effects such as the Doppler shift and the simulation of various classical microphone techniques. Various parameters of ViMiC can be adjusted in real time, including the directivity patterns and orientations of both the microphones and sound sources as well as their precise locations. Surreal scenes can be created by assigning artificial directivity patterns to microphones or changing the laws of physics in the model. An algorithm to extract the sound-source positions in recordings using a microphone array is also part of the CSSD. The algorithm was specifically designed to operate in multiple sound-source scenarios and can also be used for telematic music applications.
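The abstract describes the core idea behind ViMiC: each virtual microphone receives a source signal with a gain and delay derived from physical laws (distance attenuation, propagation delay, directivity). The following is a minimal illustrative sketch of that kind of per-microphone computation, not the actual ViMiC implementation; the function name, parameters and the first-order directivity model are assumptions made for the example.

```python
import numpy as np

C = 343.0  # speed of sound in m/s (approximate, room temperature)

def mic_gain_delay(src_pos, mic_pos, mic_axis, pattern=0.5):
    """Illustrative gain and delay of one virtual microphone for one source.

    pattern: first-order directivity mix (0 = omni, 0.5 = cardioid, 1 = figure-8).
    Returns (gain, delay_in_seconds).
    """
    v = np.asarray(src_pos, float) - np.asarray(mic_pos, float)
    r = np.linalg.norm(v)                                  # source-microphone distance
    axis = np.asarray(mic_axis, float)
    cos_theta = v @ axis / (r * np.linalg.norm(axis))      # angle to microphone axis
    directivity = (1.0 - pattern) + pattern * cos_theta    # first-order pattern gain
    gain = directivity / max(r, 1e-3)                      # 1/r distance attenuation
    delay = r / C                                          # propagation delay; updating it
                                                           # as the source moves produces
                                                           # a Doppler shift
    return gain, delay

# Example: a cardioid virtual microphone facing +x, source at 45 degrees, ~4.2 m away
g, d = mic_gain_delay(src_pos=[3.0, 3.0, 0.0],
                      mic_pos=[0.0, 0.0, 0.0],
                      mic_axis=[1.0, 0.0, 0.0],
                      pattern=0.5)
```

In a renderer of this kind, a gain/delay pair would be computed for every virtual microphone and applied to the dry source signal, with each microphone's output routed to a loudspeaker; early reflections from the computer-generated room can be handled by treating image sources the same way.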

Copyright © Cambridge University Press 2010

