OnQ Blog

MPEG-H audio takes immersive and interactive sound experiences to a new dimension

Apr 11, 2015

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

Recent advances in multimedia technology have made modern visual entertainment a truly immersive and engrossing experience. Can audio—our music, along with the sound for our TV shows, movies, and live sports—be just as immersive and interactive?

It can with MPEG-H audio, a technology designed to help content creators craft digital 3D soundscapes with greater depth and realism. MPEG-H audio is engineered to go beyond 5.1 or even 7.1 surround sound and paint a more vivid soundscape than has ever been possible before. With MPEG-H audio, it can feel as if the source of the sound, whether musician or actor, is right in the same room as the listener.

Qualcomm Technologies, Fraunhofer IIS, and Technicolor have developed the key components of MPEG-H audio. What sets MPEG-H audio apart from other codecs is its support for the new scene-based audio representation (also known as Higher Order Ambisonics, or HOA), in addition to the traditional channel-based and object-based formats. As the image below shows, rather than preserving each individual object within the recording, scene-based audio creates a compact representation of the entire audio scene, and then optimally recreates that scene at the point of playback.
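
To make the idea concrete, here is a minimal Python sketch, not MPEG-H code and with hypothetical function names, of how a mono source can be folded into a first-order Ambisonic scene. MPEG-H supports much higher orders, but even at first order the key property is visible: the channel count of the scene stays fixed no matter how many sources are mixed in.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into a first-order Ambisonic scene
    (traditional B-format W/X/Y/Z; weights simplified for illustration).

    mono      : 1-D array of samples
    azimuth   : source direction in radians (0 = front, positive = left)
    elevation : source elevation in radians (0 = horizon)
    """
    w = mono * (1.0 / np.sqrt(2.0))                 # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front/back
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left/right
    z = mono * np.sin(elevation)                    # up/down
    return np.stack([w, x, y, z])                   # 4 channels describe the scene

# Two sources mixed into one compact scene: still only 4 channels.
fs = 48000
t = np.arange(fs) / fs
violin = np.sin(2 * np.pi * 440 * t)                # source at 30 degrees left
crowd  = 0.3 * np.random.randn(fs)                  # diffuse-ish source behind right
scene = encode_first_order(violin, np.radians(30), 0.0) \
      + encode_first_order(crowd, np.radians(-110), np.radians(10))
```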

With scene-based audio representation, content production in multiple formats (5.1, 7.1, etc.) is no longer necessary. Bandwidth requirements and implementation complexity do not scale up as the number of playback channels or dynamic objects in the scene increases. Also, the single HOA representation can be used to generate optimal output for any loudspeaker geometry and acoustic environment, delivering the same experience on anything from a 22-speaker layout down to ordinary headphones.
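
As a rough illustration of that "one representation, any layout" idea, the hypothetical sketch below renders a first-order scene to two very different speaker setups using a simple projection-style decoder. A real MPEG-H renderer is considerably more sophisticated, but the principle is the same: the decoder, not the content, adapts to the playback geometry.

```python
import numpy as np

def decode_first_order(scene, speaker_dirs):
    """Render a first-order B-format scene (W, X, Y, Z) to an arbitrary
    loudspeaker layout with a simple sampling ("projection") decoder.

    scene        : array of shape (4, n_samples)
    speaker_dirs : list of (azimuth, elevation) in radians, one per speaker
    """
    rows = []
    for az, el in speaker_dirs:
        # Evaluate the same directional functions used at encode time,
        # but in the direction of each loudspeaker.
        rows.append([1.0 / np.sqrt(2.0),
                     np.cos(az) * np.cos(el),
                     np.sin(az) * np.cos(el),
                     np.sin(el)])
    decode_matrix = np.array(rows) / len(speaker_dirs)
    return decode_matrix @ scene        # shape: (n_speakers, n_samples)

# The same 4-channel scene feeds very different layouts.
scene  = np.random.randn(4, 480)        # stand-in for an encoded scene
quad   = [(np.radians(a), 0.0) for a in (45, 135, -135, -45)]
stereo = [(np.radians(30), 0.0), (np.radians(-30), 0.0)]
on_quad   = decode_first_order(scene, quad)     # 4 speaker feeds
on_stereo = decode_first_order(scene, stereo)   # 2 speaker feeds
```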

Scene-based audio can be used for both live capture and recorded content, and it fits into existing infrastructure for audio broadcast and streaming. MPEG-H allows scene-based audio to be transmitted alongside additional audio objects, so people watching a sporting event could choose commentary in their preferred language, select a preferred commentator, or mute commentary altogether. Scene-based audio also lets the end user focus on a specific direction in the sound field or rotate the sound field itself.
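
Sound-field rotation is a good example of what the compact scene representation makes inexpensive: rotating a first-order scene amounts to a small matrix applied to its channels. The sketch below, again hypothetical and limited to first order, shows the kind of yaw rotation a head-tracked or interactive renderer might apply.

```python
import numpy as np

def rotate_yaw(scene, angle):
    """Rotate a first-order B-format scene (W, X, Y, Z) about the vertical
    axis by `angle` radians, e.g. to follow the listener's head orientation."""
    w, x, y, z = scene
    c, s = np.cos(angle), np.sin(angle)
    return np.stack([w,
                     c * x - s * y,   # rotated front/back component
                     s * x + c * y,   # rotated left/right component
                     z])              # height is unaffected by a yaw rotation

# Example: turn the entire sound field 90 degrees to the left.
scene = np.random.randn(4, 480)       # stand-in for an encoded first-order scene
rotated = rotate_yaw(scene, np.radians(90))
```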

Implementing MPEG-H would have many benefits for broadcasters, movie and music producers, and audio engineers. And of course, the technology inherently provides a significantly enhanced experience for audiences. To learn more about how Qualcomm Technologies is revolutionizing the future of scene-based audio, visit the MPEG-H web page and download our recent white paper.
