SP-1 | Spatial Audio | R3 | 2018-11-14 | 12:30 - 14:30
moderated by Thomas Lund
Investigations on the Impact of Distance Cues in Virtual Acoustic Environments
SP-1-2 | Start 13:00 | Duration 30 min. | Christoph Pörschmann |
Johannes M. Arend, Philipp Stade | AES Reviewed Paper (English)
In this paper, we analyze different auditory distance cues in dynamic binaural synthesis, comparing the contributions of sound intensity, direct-to-reverberant ratio (DRR), and near-field cues. For the auralization, we use the BinRIR method, which generates binaural room impulse responses (BRIRs) for dynamic binaural synthesis from a measured omnidirectional room impulse response (RIR). Because BinRIR is based on a simple geometric model, the listener position can be freely adjusted and the distance cues can be adapted separately. Furthermore, near-field head-related impulse responses (HRIRs) can be applied to the direct sound and early reflections when the listener is very close to the virtual sound source. In a listening experiment, we presented stimuli at different distances in four synthesized rooms. In one condition, the stimuli contained natural distance-dependent intensity cues; in another, the stimuli were loudness-normalized. The results show that even for loudness-normalized stimuli, adequate distance perception can be obtained by adapting the DRR. The influence of near-field HRIRs, which were also tested in the experiment, is very weak.
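The interplay of the two main cues the abstract names can be illustrated with a minimal sketch (not from the paper; the free-field 1/r law, the fixed reverberant energy, and all numeric values below are illustrative assumptions): when the stimulus is loudness-normalized, the intensity cue vanishes, but the DRR still falls as the simulated distance grows, because direct-sound energy shrinks relative to the roughly distance-independent diffuse reverberation.

```python
import math

def distance_gain(distance_m, ref_distance_m=1.0):
    """Inverse-distance (1/r) intensity cue: the direct sound drops
    about 6 dB per doubling of distance (free-field assumption)."""
    return ref_distance_m / max(distance_m, 1e-3)

def drr_db(direct_energy, reverberant_energy):
    """Direct-to-reverberant ratio in dB."""
    return 10.0 * math.log10(direct_energy / reverberant_energy)

# Moving a source from 1 m to 4 m: direct energy scales with gain**2,
# while the diffuse reverberant energy (0.1, hypothetical) stays constant.
g1, g4 = distance_gain(1.0), distance_gain(4.0)
drr_1m = drr_db(g1 ** 2, 0.1)   # DRR at 1 m
drr_4m = drr_db(g4 ** 2, 0.1)   # DRR at 4 m: lower by 20*log10(4) ~ 12 dB
```

Under these assumptions, adapting the DRR alone (e.g. rescaling the direct part of the BRIR) reproduces the distance-dependent cue even when overall loudness is held constant.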
PE-2 | Perception & Aesthetics | R3 | 2018-11-15 | 09:00 - 11:00
moderated by David Griesinger
On Human Perceptual Bandwidth and Slow Listening
PE-2-1 | Start 09:00 | Duration 30 min. | Thomas Lund |
Aki Mäkivirta | AES Reviewed Paper (English)
Seventy-four recent physiological and psychological studies revolving around human perception and its bandwidth were reviewed. The brain has only ever learned about the world through our five primary senses. Through them, we receive a fraction of the information actually available, and we perceive far less still. A fraction of a fraction: the perceptual bandwidth. Conscious perception is furthermore shaped by long-term experience and learning, to an extent that it might be more accurately understood and studied as primarily a reach-out phenomenon. With respect to hearing, time is found to be a determining factor on several levels. It is discussed how such observations could be taken into account in pro audio, for instance when conducting subjective tests, and the term "slow listening" is devised.
OB-2 | Object Based Production | R4 | 2018-11-17 | 09:30 - 11:30
moderated by Christoph Sladeczek
An Investigation into the Distribution of 3D Immersive Audio Renderer Processing to Speaker Endpoint Processors
OB-2-4 | Start 11:00 | Duration 30 min. | Sean Devonport |
Richard Foss | AES Reviewed Paper (English)
Immersive, object-based audio systems typically require rendering multiple audio tracks to multiple speakers. This rendering is processor-intensive and often must coexist with Digital Audio Workstation (DAW) processing. A common solution is to dedicate a separate unit with a powerful processor to the rendering. This paper describes an alternative solution: distributing the rendering to the speakers, where each speaker contains rendering capability appropriate to its position in the system configuration. This capability is enabled by incorporating an XMOS processor and a SHARC DSP into each speaker. A range of rendering algorithms has been implemented on the XMOS processor: Distance-Based Amplitude Panning, Vector Base Amplitude Panning, Wave Field Synthesis, and Ambisonic decoding. CPU processing load tests are described, highlighting the relative CPU load of centralized and distributed rendering for each algorithm.
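To give a sense of the per-speaker computation being distributed, here is a minimal 2D sketch of one of the listed algorithms, Vector Base Amplitude Panning (gains for a single speaker pair, solved from the loudspeaker base matrix and normalized to constant power). This is a generic textbook formulation, not code from the paper, and the speaker angles are illustrative assumptions.

```python
import math

def vbap_2d(source_az_deg, spk1_az_deg, spk2_az_deg):
    """2D VBAP sketch: solve g1*l1 + g2*l2 = p for a speaker pair,
    then normalize the gains to constant power (g1^2 + g2^2 = 1)."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.cos(a), math.sin(a))

    p = unit(source_az_deg)           # panning direction
    l1, l2 = unit(spk1_az_deg), unit(spk2_az_deg)  # speaker unit vectors

    # Invert the 2x2 loudspeaker base matrix by Cramer's rule.
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det

    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source midway between speakers at +/-30 deg gets equal gains.
g1, g2 = vbap_2d(0.0, 30.0, -30.0)
```

In the distributed scheme described in the paper, each speaker endpoint would only need to compute its own gain for each audio object, which is what makes spreading such algorithms across endpoint processors attractive.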