Jul 18, 2018
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
Artificial intelligence isn’t just a buzzword; it’s the next industrial revolution, and it’s already changing the world as we know it. By building machines that can actually learn and improve, we can truly harness their power to compute. But how does a machine learn?
To get a better idea, we’re digging into AI-driven technology across industries for a new OnQ series that attempts to answer that question. Each piece will feature an interview with a different company using Qualcomm Technologies’ AI solutions to compete, innovate, and transform the world. We’ll look at how the company is using AI and what that means today and for the future of our customers. We’re kicking things off with one of the most rapidly advancing sectors: the Internet of Things.
Here we talk with Lighthouse co-founder and CTO Hendrik Dahlkamp about his company’s technology and how it uses 3D sensing and AI to create innovation in the home camera category.
This interview has been edited for clarity and length.
Tell us about Lighthouse: the company, the camera, and the services you offer.
At our core, we’re an AI services company with a mission to bring accessible and useful intelligence to the world’s physical spaces. We’re starting with the home, but our long-term vision is much broader.
We were founded just over three years ago and have been working mostly on our flagship home camera, which launched in February. It’s a different kind of home camera that combines computer vision and 3D sensing with natural language understanding. Other cameras just ship you pixels, but we ship you intelligence.
Most cameras can only show you what’s happening now or what happened in the past. But if your camera understood what it was seeing, it could tell you specifics — like what your pet is doing, if your kids have come home, or if the house cleaner is on time. Lighthouse can do this. What’s more, you can tell it what you care about, and then Lighthouse will tell you when those specific things happen.
How does Lighthouse use AI?
Basically, we use it everywhere. We use the more classical forms of AI to segment and track objects, like telling that something in an image is a distinct object and following where it is as it moves. We use deep learning for object classification — so for example, knowing what is an adult, a kid, or a cat. We also use deep learning for basic action recognition. Your kids can wave at the camera, and Lighthouse can recognize the action of waving and send a push notification to your phone so you can see they’re communicating. We use AI for natural language understanding, so that you can just ask the camera questions, and it will understand what you’re looking for. And finally, we use AI for facial recognition to understand who’s who.
How is Lighthouse able to recognize who is who in the home?
It’s a very interesting combination of Qualcomm Technologies’ on-device AI compute and our own. Qualcomm Technologies’ solutions are optimized and accurate for detecting faces using minimal on-device compute resources. So we run that code on the Qualcomm Snapdragon 410 processor and detect faces on the camera itself. We have very high-quality images and high resolution without any kind of video artifacts. So if we see a face, we crop it and send it into the Lighthouse cloud, where our deep learning-based network identifies who it is.
This is very easy to use for the user, too, with the app showing you what unknown faces it saw. Then you can identify people, and going forward, Lighthouse immediately knows who they are.
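The detect-on-device, identify-in-cloud split described above can be sketched roughly as follows. This is an illustrative Python sketch with invented names and stubbed-out components, not Lighthouse’s actual code: a lightweight on-device detector produces face crops, a cloud-side identifier maps each crop to a known person or "unknown", and a user label in the app turns an unknown face into a known one going forward.

```python
# Illustrative sketch of the described pipeline (hypothetical, not
# Lighthouse's code): detect and crop faces on the device, identify
# them in the cloud, and let the user label unknown faces in the app.

from dataclasses import dataclass


@dataclass
class FaceCrop:
    frame_id: int
    pixels: bytes  # cropped face region, e.g. encoded image bytes


class OnDeviceDetector:
    """Stands in for the hardware-accelerated face detector on the camera."""

    def detect(self, frame_id: int, frame: bytes) -> list:
        # A real detector returns bounding boxes; here we pretend the
        # whole (non-empty) frame is a single face for illustration.
        return [FaceCrop(frame_id, frame)] if frame else []


class CloudIdentifier:
    """Stands in for the cloud-side deep-learning identification network."""

    def __init__(self):
        self.known = {}  # face data -> person name

    def identify(self, crop: FaceCrop) -> str:
        return self.known.get(crop.pixels, "unknown")

    def label(self, crop: FaceCrop, name: str) -> None:
        # The user labels an unknown face in the app; future sightings match.
        self.known[crop.pixels] = name


detector = OnDeviceDetector()
cloud = CloudIdentifier()

# First sighting: the face is cropped on-device and is unknown in the cloud.
crops = detector.detect(1, b"julie-face")
names = [cloud.identify(c) for c in crops]

# The user labels the face in the app; the next sighting is recognized.
cloud.label(crops[0], "Julie")
names_later = [cloud.identify(c) for c in detector.detect(2, b"julie-face")]
```

The point of the split, as described in the interview, is that detection is cheap enough to run on the Snapdragon 410, while the heavier identification network stays in the cloud.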
Lighthouse is able to eliminate false security alerts. How does AI enable this?
The typical false alerts you see in traditional cameras are caused by things like moving shadows from trees and car headlights. Thanks to our 3D sensor, Lighthouse knows this isn’t true movement in the home and disregards it. Other cameras may also be ineffective if you have a pet because they look for motion without differentiation, but with our AI-powered object classification, we know to ignore dogs and cats even as they roam around the house. We incorporate a lot of data about what goes on in the home.
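The two filters described above can be condensed into a tiny decision rule. This is hypothetical logic for illustration only, not Lighthouse’s implementation: a shadow or headlight changes pixels but not depth, so depth-aware motion rejects it, and object classification suppresses alerts for pets even when their motion is real.

```python
# Hypothetical sketch of the alert filtering described above (not
# Lighthouse's code): require real 3D motion, then ignore known pets.

def should_alert(pixel_change: bool, depth_change: bool, label: str) -> bool:
    """Decide whether a motion event should raise a security alert."""
    if not (pixel_change and depth_change):
        # Shadows and car headlights change pixels, not depth.
        return False
    if label in {"dog", "cat"}:
        # Pets roaming the house are expected, not intruders.
        return False
    return True
```

In this toy form, a moving shadow (`pixel_change` without `depth_change`) and a roaming cat both fail the test, while a person moving through the room passes it.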
What was it like building AI into your product? What were some of the challenges?
It’s been quite a journey from when we were just two guys working in a garage. We had to learn how to work with contract manufacturers, get hardware made in large numbers, grow the AI team, build the application around it, and deliver our intelligence to users. What’s been really hard is achieving the last one percent of accuracy. It takes a while, and we need to test for a long time to see all of the things that can happen and get those events classified correctly.
Was there any part that was easier than expected?
There never is!
Is this all done locally on device, in the cloud, or a combination of both?
Both. We’re very happy with the Snapdragon 410 platform that we’re using. It’s quite powerful and allows us to do a lot on the device, including integrated connectivity — much more than traditional color cameras do.
For example, it computes the full 3D scene of what’s going on, and segments objects and tracks them as they move. We use this for very accurate activity detection, completely on the device. This allows us to go into sleep mode if there is no activity. Then when real activity is detected, Lighthouse sends it to the cloud to be processed. As a plus, if you have a security incident, this means the data is already stored on the secure Lighthouse servers, and a burglar can’t just take the evidence along with the camera.
The activity detection and face detection are both run on the device. We also run wave detection on the device, so if you wave hello to Lighthouse, it gets that right away and it gives you feedback on the device. The identification — who’s who and what they’re doing — is done in the cloud. We keep the data in the cloud for up to 30 days so that it can be viewed everywhere.
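The on-device/cloud split described in these answers can be sketched as a simple gate: the camera sleeps until on-device activity detection fires, then pushes the event to cloud storage, where clips remain viewable for up to 30 days. This is an illustrative Python sketch with made-up class names, not Lighthouse’s actual architecture.

```python
# Hypothetical sketch (not Lighthouse's code) of the edge/cloud split:
# on-device activity detection gates cloud upload, and uploaded clips
# stay viewable for up to 30 days.

from datetime import datetime, timedelta

RETENTION = timedelta(days=30)


class Cloud:
    """Stands in for Lighthouse's cloud storage and processing."""

    def __init__(self):
        self.clips = []  # list of (clip, upload timestamp) pairs

    def store(self, clip: str, ts: datetime) -> None:
        self.clips.append((clip, ts))

    def visible(self, now: datetime) -> list:
        # Clips are viewable for up to 30 days after upload.
        return [clip for clip, ts in self.clips if now - ts <= RETENTION]


class Camera:
    """Stands in for the on-device side running on the Snapdragon 410."""

    def __init__(self, cloud: Cloud):
        self.cloud = cloud
        self.sleeping = True

    def on_frame(self, has_activity: bool, clip: str, now: datetime) -> None:
        if not has_activity:
            self.sleeping = True  # no activity: stay in low-power sleep mode
            return
        self.sleeping = False
        self.cloud.store(clip, now)  # evidence leaves the device immediately


cloud = Cloud()
camera = Camera(cloud)
t0 = datetime(2018, 7, 18)

camera.on_frame(False, "", t0)           # quiet room: camera sleeps
camera.on_frame(True, "clip-1", t0)      # real activity: clip goes to cloud
```

The security property from the interview falls out of the gate’s ordering: the clip is stored in the cloud as soon as activity is detected, so taking the camera doesn’t take the evidence.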
What attracted Lighthouse to the Qualcomm AI solutions?
We like that Snapdragon has so many hardware-enabled features, and we’ve been using almost all of them. It’s like a showcase for all of the Qualcomm technologies. It’s got four ARM cores for lots of custom compute. It has 64-bit instructions, which we use to optimize our 3D computation pipeline. It comes with a very high-quality, optimized face detector that we use. We’ve been tuning the ISP to get perfect image quality in all situations. The camera has a night-vision mode, a day mode, and a low-light mode. We’re using the hardware video codec to encode our videos. And we’ve been doing some audio intelligence running on the CPU, which we haven’t launched as a feature yet, but is coming soon.
What else is next for Lighthouse — the product and the company?
We think there’s a lot more we can do to turn your home into a truly smart one, with Lighthouse acting as a personal assistant for the home. For instance, we envision Lighthouse one day enabling you to leave messages for household members: “Hey Lighthouse, if you see Julie, tell her I went to the library.” Then, when Julie comes home, she’ll get the message right then. Another example would be telling your Lighthouse you want to chat with Mom, who also has one in her home, when she’s free. It could connect to her Lighthouse, ask — or perhaps even just check — if she’s available, and automatically figure out a good time to talk.
Our vision is to keep reinventing the traditional camera, whether that means becoming the eyes of the smart home or bringing intelligence to various physical spaces.
Read more about Qualcomm AI technology solutions.