May 19, 2021
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
Think forward to when the world has returned to normal, and you’re heading out for a business trip.
The taxi picks you up early in the morning, way before your family is awake, and you sit in the back as the driver takes the freeway, maybe catching up on sleep, maybe just watching the world go by.
You arrive at the airport and check in. Busy travellers are rushing about with luggage, tired parents are rounding up hysterical children, and a large group of friends cheer and laugh loudly as they head to Las Vegas.
You make your way through security, grab a coffee and some breakfast in the departure lounge, and head to the gate early to escape the chaos and catch up on email before your flight leaves. You are the first person at the departure gate, and it is far calmer. However, as boarding time approaches, the gate gets busier and the acoustic environment changes radically.
This journey is familiar to all of us. From repetitive freeway traffic noise and the chatter at airport check-in to the clamour of the departure terminal and the relative calm inside the gate, acoustic environments are dynamic and constantly changing. Sometimes it is not us moving through different soundscapes; the acoustic environment changes around us even as we sit still.
Acoustic environments are affected by many different factors, such as the time of day or year, your location, and the structure around you. As a result, you cannot simply use location-based information such as GPS to guide a smartphone’s behaviour and user interface. You need to adapt to the local context in real time, which is why sound recognition is such an exciting AI technology.
Thanks to Audio Analytic’s Acoustic Scene Recognition technology and the Qualcomm Platform Solutions Ecosystem program, this innovative capability is pre-certified to run on the Qualcomm Snapdragon 888 Mobile Platform with the 2nd Generation Sensing Hub. This means that smartphones can adapt to this shifting contextual information and adjust UI, notification, alert, and call settings, making sure you don’t miss important messages and calls from your family before you board your flight.
However, Acoustic Scene Recognition does more than just apply acoustic contextual information to notification settings. As it runs in always-on, low-power mode, it is an enabling technology that can empower a wide range of new, useful, and entertaining features on next-generation devices.
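As a rough sketch of how an application might act on such scene labels, consider the snippet below. All names here are hypothetical for illustration; the post does not describe Audio Analytic's or Qualcomm Technologies' actual APIs, scene labels, or settings model.

```python
# Hypothetical sketch: mapping recognized acoustic scenes to notification
# profiles. The scene labels and NotificationProfile fields are invented
# for illustration and do not reflect any real Audio Analytic API.
from dataclasses import dataclass

@dataclass
class NotificationProfile:
    ringer_volume: float   # 0.0 (silent) to 1.0 (maximum)
    vibrate: bool
    show_heads_up: bool    # pop notifications over the current screen

# One profile per recognized scene.
SCENE_PROFILES = {
    "in_vehicle": NotificationProfile(0.3, True, False),
    "busy_street": NotificationProfile(1.0, True, True),     # loud: raise ringer
    "quiet_indoors": NotificationProfile(0.1, False, True),  # calm: stay discreet
}

DEFAULT_PROFILE = NotificationProfile(0.5, True, True)

def profile_for_scene(scene: str) -> NotificationProfile:
    """Pick the notification profile for the most recent scene label,
    falling back to a sensible default for unrecognized scenes."""
    return SCENE_PROFILES.get(scene, DEFAULT_PROFILE)
```

In this sketch, a noisy departure gate would map to a loud-environment profile that raises the ringer volume, so an important call is not drowned out.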
You can see Acoustic Scene Recognition in action in a video from Audio Analytic.
Audio Analytic’s Acoustic Scene Recognition is one capability of its incredibly compact, edge-based sound recognition technology, which also supports smartphone tasks such as audio event detection and content tagging, in addition to scene recognition.
For always-on applications such as Acoustic Scene Recognition, Audio Analytic has developed an ultra-compact version of its ai3 inference engine called ai3-nano, which occupies just 40 kB of ROM and draws around 1 mA of current. ai3-nano is compact enough to run concurrently with a wake word module on the Sensing Hub, so consumers can benefit from sound recognition and voice recognition at the same time.
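To illustrate the idea of two always-on modules sharing one low-power audio stream, here is a minimal sketch. The `AudioPipeline` class and both consumers are invented for illustration; the real Sensing Hub firmware architecture is not described in this post.

```python
# Hypothetical sketch: a sensing hub fanning each low-power audio frame
# out to two concurrent always-on consumers (a wake word spotter and a
# sound recognition engine). All classes here are illustrative only.
from typing import Callable, List

class AudioPipeline:
    """Delivers each incoming audio frame to every registered consumer."""

    def __init__(self) -> None:
        self.consumers: List[Callable[[bytes], None]] = []

    def register(self, consumer: Callable[[bytes], None]) -> None:
        self.consumers.append(consumer)

    def push_frame(self, frame: bytes) -> None:
        # Both modules see the same frame, so neither blocks the other
        # from the shared microphone stream.
        for consumer in self.consumers:
            consumer(frame)

events: List[str] = []
pipeline = AudioPipeline()
pipeline.register(lambda frame: events.append("wake_word"))  # wake word module
pipeline.register(lambda frame: events.append("scene"))      # scene recognition module
pipeline.push_frame(b"\x00" * 320)  # one 10 ms frame at 16 kHz, 16-bit mono
```

The design point being sketched is that the sound recognition module never has exclusive ownership of the microphone: both consumers process the same frames, which is what makes concurrent wake word and scene recognition possible.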
The optimization of Audio Analytic’s ai3-nano and Acoustic Scene Recognition technology enables a new wave of contextual smartphone applications and experiences, and it was made possible through close collaboration between Audio Analytic and Qualcomm Technologies.