The inaugural International Conference on Audio for Virtual and Augmented Reality
has unveiled an expansive programme of technical papers, workshops and tutorials
for
the two-day event to be held on September 30 and October 1, 2016, co-located with
the 141st AES Convention at the Los Angeles Convention Center’s West Hall. The
conference and manufacturers’ Expo will spotlight research institutions, developers
and vendors that are providing immersive spatial audio for virtual-reality and
augmented-reality media, demonstrably one of the fastest-growing segments of the
entertainment-audio industry.
Targeting content developers, researchers, manufacturers, consultants and students
looking to expand their knowledge of sound production for VR and AR, the two-day
programme will offer a total of 27 technical papers, in addition to multiple
practical workshops and tutorial sessions. Programme content will focus on the AR/VR
creative process, applications workflow and product development. The companion
Expo will feature displays from leading-edge manufacturers and service providers.
Chris Pike, Richard Taylor, Tom Parnell and Frank Melchior from BBC Research &
Development will present a paper entitled “Object-based 3D Audio Production for
Virtual Reality Using The Audio Definition Model,” in which they will describe
the development of a virtual-reality experience with object-based 3D audio
rendering, and the eventual production of an audio mix in the form of a single WAV file.
In a paper entitled “Augmented Reality Headphone Environment Rendering,” Keun
Sup Lee and Jean-Marc Jot from DTS will address binaural artificial reverberation
processing that matches local environment acoustics, so that synthetic audio objects
are not distinguishable from sounds occurring naturally or reproduced over
loudspeakers. A team from Qualcomm led by Shankar Shivappa and Martin
Morrell, in a paper entitled “Efficient, Compelling and Immersive VR Audio
Experience Using Scene Based Audio/Higher Order Ambisonics,” will focus on
mechanisms for acquiring and delivering live soundfields for VR productions.
Durand Begault from NASA ARC, in a paper entitled “Spatial Auditory Feedback in
Response to Tracked Eye Position,” will focus on providing spatial auditory feedback
about whether the user’s gaze is directed towards a desired position. Tim Gedemer
and Charles Deenen from Source Sound will present a paper entitled “Mars 2030: A
Virtual Reality Manned Mars Mission as Predicted by NASA Scientists in 2016,”
which will describe the challenges of creating a virtual-reality simulation for
NASA’s planned manned mission to the Red Planet. Johannes Kares and Veronique
Larcher from Sennheiser, in a paper entitled “Streaming Immersive Audio Content,”
will describe ways in which engineers can create immersive audio content for live
streaming.
Workshop sessions include presentations from Fraunhofer, Sennheiser, Dolby, Sound
Particles, Technicolor and other companies, addressing such topics as “Audio
Production, Mixing and Delivery for VR Audio,” “Immersive Sound Capture for
Cinematic Virtual Reality,” “Object-based Audio Mixing for AR/VR Applications,”
“Immersive Sound Design with Particle Systems,” and “3D Audio Post-Production
Workflows for VR.”
For further information on the 2016 International Conference on Audio for Virtual
and Augmented Reality, visit http://www.aes.org/avar.