All sessions are grouped and labelled by topic, format, or individually. Each session may contain one or more contributions.
Click on a label to filter the session list (below) by group.
OB-1 | Object-Based Production | R5 | 2018-11-16 | 15:00 - 16:00
moderated by Peter Hirscher
Eurovision Song Contest 2018 - Immersive and Interactive
OB-1-1 | Start 15:00 | Duration 60 min. | Andreas Turnwald |
Ulli Scuda, Christian Simon | 3D Audio Workshop (German)
With 186 million television viewers, the Eurovision Song Contest (ESC) is one of the largest music events in the world. In 2018 it took place in front of more than 10,000 spectators at the Altice Arena in Lisbon. Its combination of music and audience reactions makes it ideally suited for a three-dimensional audio production. In parallel to the regular 2.0 and 5.1 broadcast sound, the Fraunhofer Institute for Integrated Circuits (IIS), in cooperation with the European Broadcasting Union (EBU), produced an immersive sound version on site that impressively conveys the atmosphere of this live event. An MPEG-H transmission allows several commentaries in different languages to be broadcast within a single transport signal. In addition, user interaction is possible, giving the listener access to further variants of the mix. The contribution reports on the technical realization of the recording, the sound design, the authoring, and the encoding as an MPEG-H data stream.
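As an illustration of what a language-selectable commentary inside a single object-based signal could look like at the metadata level, the following minimal Python sketch models an immersive bed plus several commentary objects and a simple presentation selection. All names (AudioObject, select_presentation) are hypothetical; this is not the actual MPEG-H authoring or decoder API.

```python
# Illustrative sketch only: hypothetical object metadata for a broadcast scene with
# multiple commentary languages; not the actual MPEG-H authoring or decoder API.
from dataclasses import dataclass, replace
from typing import List, Optional

@dataclass
class AudioObject:
    name: str                 # e.g. "immersive_bed", "commentary_de"
    language: Optional[str]   # ISO 639-1 code for commentary objects, None for the bed
    gain_db: float            # default gain; user interaction may adjust it within limits

SCENE = [
    AudioObject("immersive_bed", None, 0.0),
    AudioObject("commentary_de", "de", 0.0),
    AudioObject("commentary_en", "en", 0.0),
    AudioObject("commentary_pt", "pt", 0.0),
]

def select_presentation(scene: List[AudioObject], language: str,
                        commentary_gain_db: float = 0.0) -> List[AudioObject]:
    """Keep the bed plus the commentary in the requested language, with a user gain offset."""
    active = []
    for obj in scene:
        if obj.language is None:
            active.append(obj)
        elif obj.language == language:
            active.append(replace(obj, gain_db=obj.gain_db + commentary_gain_db))
    return active

# Example: an English-speaking viewer who wants the commentary 3 dB quieter.
print(select_presentation(SCENE, "en", commentary_gain_db=-3.0))
```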
OB-2 | Object-Based Production | R4 | 2018-11-17 | 09:30 - 11:30
moderated by Christoph Sladeczek
The Impact of Object-Based Audio on the Production of a Binaural Audio Play
OB-2-1 | Start 09:30 | Duration 30 min. | Yannik Grewe |
Julian Stuchlik, Massimo Ehrhard, Maurice Schill, Michael Wagner | Lecture (English)
Object-based audio offers advantages over channel-based approaches, such as format-agnostic reproduction, to improve the listening experience. An audio scene comprises a number of objects, each consisting of audio content and additional information attached in the form of metadata. This metadata is interpreted by a renderer, which creates the audio signals according to the target reproduction system.
In an artistic context, recent work has focused mainly on audio objects played back through loudspeaker systems, while the influence of an object-based approach on binaural reproduction has been understudied.
This paper presents aspects of an object-based production, using the example of an immersive binaural audio play. In an object-based context, the production process for binaural reproduction needs to be reconsidered. Audio plays usually consist of different elements such as dialogue, music and effects, which have typically been produced in channel-based multichannel formats. The results show that correlated multichannel signals are not suitable for binaural reproduction due to coloration, localization and externalization issues. Furthermore, technical aspects as well as creative challenges are discussed, and the developed production workflow is presented.
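To make the object/metadata/renderer split from the abstract concrete, here is a minimal Python sketch of a scene of positioned mono objects rendered to a 2.0 target layout with constant-power panning. The class and function names are illustrative assumptions, not part of the paper's production workflow.

```python
# Minimal sketch of the object/metadata/renderer split: mono audio essence plus
# positional metadata, interpreted by a renderer for a given target layout.
# All names are illustrative; this is not the workflow developed in the paper.
import numpy as np

class SceneObject:
    def __init__(self, samples: np.ndarray, azimuth_deg: float):
        self.samples = samples          # mono audio essence
        self.azimuth_deg = azimuth_deg  # metadata: position (azimuth only, for brevity)

def render_stereo(objects, num_samples):
    """Render a scene to a 2.0 target layout using constant-power panning."""
    out = np.zeros((2, num_samples))
    for obj in objects:
        # Map azimuth (-90..+90 degrees) onto a pan angle between 0 and pi/2
        # (negative azimuth pans left in this sketch).
        pan = (np.clip(obj.azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * np.pi / 2.0
        gains = np.array([np.cos(pan), np.sin(pan)])        # [left, right]
        out += gains[:, None] * obj.samples[None, :num_samples]
    return out

# A binaural renderer would instead convolve each object with HRTFs chosen from
# the same positional metadata; the scene description itself stays unchanged.
scene = [SceneObject(np.random.randn(48000), -30.0), SceneObject(np.random.randn(48000), 45.0)]
mix = render_stereo(scene, 48000)
```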
Open standards and open source software for object-based audio production
OB-2-2 | Start 10:00 | Duration 30 min. | Benjamin Weiss | Lecture (English)
The demand for personalised, interactive, scalable and immersive audio is on the rise and pushes the fixed channel-based approach to its limits. Object-based audio makes it possible to provide all of these features effectively. To do so, the individual tracks are not – as with channel-based audio – mixed down for a specific loudspeaker setup, but are instead transmitted to the consumer as distinct audio objects accompanied by metadata. On the consumer side, the individual audio objects are then rendered for the desired playback system. Of course, these additional possibilities come at the cost of increased complexity.
Preserving the creative intent of the audio engineer in this process involves new challenges. Only open standards such as the "Audio Definition Model" (ADM) and the "EBU ADM Renderer" (EAR) can ensure that a production can still be played back as the audio engineer intended, even after a long time. Where open standards leave room for interpretation, an open-source reference implementation can resolve even the remaining ambiguities.
This talk gives an overview of the current open standards for the production of object-based audio and their adoption in the market, and presents open-source projects that can already be used today.
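As a rough, schematic picture of the kind of hierarchy the Audio Definition Model describes (a programme containing objects whose tracks carry timed metadata blocks), the following Python sketch uses plain dataclasses. It is for illustration only and reflects neither the BS.2076 XML schema nor the API of the EAR reference implementation; all names are assumptions.

```python
# Schematic sketch of an ADM-like hierarchy: programme -> objects -> timed metadata
# blocks. Plain dataclasses for illustration only; not the BS.2076 schema or EAR API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BlockFormat:            # one metadata block, valid for a time span
    rtime_s: float            # start time of the block within the object
    duration_s: float
    azimuth_deg: float
    elevation_deg: float
    gain: float = 1.0

@dataclass
class ObjectDef:              # an object: reference to the audio essence plus metadata
    name: str
    track_index: int          # which track in the audio file carries the essence
    blocks: List[BlockFormat] = field(default_factory=list)

@dataclass
class Programme:              # a complete, renderable presentation
    name: str
    objects: List[ObjectDef] = field(default_factory=list)

programme = Programme("Radio play", [
    ObjectDef("Narrator", 0, [BlockFormat(0.0, 12.5, azimuth_deg=0.0, elevation_deg=0.0)]),
    ObjectDef("Rain", 1, [BlockFormat(0.0, 12.5, azimuth_deg=110.0, elevation_deg=30.0, gain=0.5)]),
])
```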
Immersive Object-based Mastering
OB-2-3 | Start 10:30 | Duration 30 min. | Simon Hestermann |
Mario Seideneck, Christoph Sladeczek | Lecture (English)
To date, object-based spatial audio has been the subject of much research, and a few production tools have entered the market. The main difference to the channel-based audio paradigm is that object-based spatial audio enables engineers to create three-dimensional audio scenes regardless of any particular loudspeaker setup. To play back audio scenes, a separate rendering process calculates all loudspeaker signals in real time, based on the metadata of the audio objects in a scene. While the authoring process of an audio scene can easily be connected to conventional audio recording and mixing processes, possibilities for mastering audio scenes are still very limited. The main limitation stems from the fact that, compared to channel-based mastering, no loudspeaker channels can be altered. However, instead of always altering audio objects individually, the metadata of audio objects can be used to provide engineers with convenient tools for object-based spatial mastering as well. Beyond solely mastering the audio data of a scene, new tools can also alter the metadata of audio objects on a global level for metadata mastering. These new object-based spatial mastering techniques are presented.
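A small Python sketch of the metadata-mastering idea described above: instead of touching rendered loudspeaker channels, a global transform is applied to the metadata of every object in the scene. The field and function names (ObjectMeta, master_scene, width, lift_deg) are assumptions for illustration, not the tools presented in the paper.

```python
# Illustrative sketch of metadata mastering: global transforms applied to the
# metadata of all objects in a scene rather than to rendered loudspeaker channels.
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class ObjectMeta:
    azimuth_deg: float
    elevation_deg: float
    gain_db: float

def master_scene(scene: List[ObjectMeta], trim_db: float = 0.0,
                 width: float = 1.0, lift_deg: float = 0.0) -> List[ObjectMeta]:
    """Apply a global gain trim, scale the scene width, and raise or lower the elevation."""
    return [
        replace(meta,
                gain_db=meta.gain_db + trim_db,
                azimuth_deg=meta.azimuth_deg * width,
                elevation_deg=meta.elevation_deg + lift_deg)
        for meta in scene
    ]

# Example: trim the whole scene by 1 dB and widen it by 20 %.
mastered = master_scene([ObjectMeta(30.0, 0.0, 0.0), ObjectMeta(-45.0, 10.0, -2.0)],
                        trim_db=-1.0, width=1.2)
```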
An Investigation into the Distribution of 3D Immersive Audio Renderer Processing to Speaker Endpoint Processors
OB-2-4 | Start 11:00 | Duration 30 min. | Sean Devonport |
Richard Foss | AES Reviewed Paper (English)
Immersive, object-based audio systems typically require the rendering of multiple audio tracks to multiple speakers. This rendering is processor-intensive and often has to coexist with Digital Audio Workstation (DAW) processing. A common solution is to dedicate a separate unit with a powerful processor to perform the rendering. This paper describes an alternative solution: distributing the rendering to the speakers, where each speaker contains rendering capability appropriate to its position in the system configuration. This capability is enabled by the incorporation of an XMOS processor and a SHARC DSP into each speaker. A range of rendering algorithms has been implemented on the XMOS processor – Distance-Based Amplitude Panning, Vector-Based Amplitude Panning, Wave Field Synthesis, as well as Ambisonic decoding. CPU processing load tests are described, in which the relative CPU load for centralized and distributed rendering is highlighted for each algorithm.
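To illustrate why rendering can be pushed out to the speaker endpoints, the following simplified Python sketch shows a distance-based amplitude panning (DBAP-style) gain computed locally by one endpoint from the object position and the shared speaker layout. The speaker layout, rolloff and spatial-blur parameters, and all names are assumptions, not the paper's XMOS/SHARC implementation.

```python
# Sketch of the distribution idea: with distance-based amplitude panning, each
# speaker endpoint can compute its own gain from the object position and the full
# list of speaker positions, so no central rendering unit is needed.
import math

SPEAKERS = [(-2.0, 2.0), (2.0, 2.0), (-2.0, -2.0), (2.0, -2.0)]  # (x, y) in metres

def dbap_gain(own_index: int, source_xy, speakers=SPEAKERS,
              rolloff: float = 1.0, spatial_blur: float = 0.1) -> float:
    """Gain of one endpoint: inverse-distance weighting, power-normalised over all speakers."""
    def dist(spk):
        return math.hypot(source_xy[0] - spk[0], source_xy[1] - spk[1]) + spatial_blur
    weights = [1.0 / dist(spk) ** rolloff for spk in speakers]
    norm = math.sqrt(sum(w * w for w in weights))   # keep the total radiated power constant
    return weights[own_index] / norm

# Endpoint 0 computes only its own gain for an object at (1.0, 0.5):
print(dbap_gain(0, (1.0, 0.5)))
```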