Spatialized Audio Rendering for Immersive Virtual Environments
We present a spatialized audio rendering system for use in immersive virtual environments. The system is optimized for rendering a sufficient number of dynamically moving sound sources in multi-speaker environments using off-the-shelf audio hardware. Based on simplified physics-based models, we achieve a good trade-off between audio quality, spatial precision, and performance. Convincing acoustic room simulation is accomplished by integrating standard hardware reverberation devices as used in the professional audio and broadcast community. We elaborate on important design principles for audio rendering as well as on practical implementation issues. Moreover, we describe the integration of the audio rendering pipeline into a scene graph-based virtual reality toolkit.
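To illustrate the kind of simplified physics-based model the abstract refers to, the sketch below computes a per-source amplitude gain and propagation delay from the source-listener distance. This is a minimal, hypothetical example using the standard inverse-distance law and speed-of-sound delay; the paper's actual models, function names, and parameters are not specified here.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius


def source_gain_and_delay(source_pos, listener_pos, ref_dist=1.0):
    """Inverse-distance attenuation and propagation delay for one source.

    Hypothetical helper: clamps the distance to a reference distance so
    the gain never exceeds 1.0 when the source is very close.
    """
    diff = [s - l for s, l in zip(source_pos, listener_pos)]
    dist = max(math.sqrt(sum(d * d for d in diff)), ref_dist)
    gain = ref_dist / dist           # 1/r amplitude falloff
    delay = dist / SPEED_OF_SOUND    # seconds until the wavefront arrives
    return gain, delay


# A source 3.43 m in front of the listener:
gain, delay = source_gain_and_delay((3.43, 0.0, 0.0), (0.0, 0.0, 0.0))
# delay = 3.43 / 343.0 = 0.01 s
```

In a multi-speaker setup, such per-source parameters would typically be recomputed each frame as sources move and fed to the mixing hardware, which matches the abstract's emphasis on cheap per-source updates over exact acoustic simulation.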