How can we transform the hardware that we already have around us? Qualcomm® Developer of the Month Nikos Fragoulis has some interesting answers.
We can connect more devices than ever before. Perhaps the defining characteristic of the Internet of Things boom will be the new kinds of ‘intelligence’ we can give our Things. Embedded and in the cloud, machine learning techniques are bringing new possibilities to our mobile devices.
Nikos Fragoulis knows this well, and is using heterogeneous computing, computer vision and machine learning with his company IRIDA Labs. The aim? To give connected cameras new ways to produce images.
We caught up with Nikos to talk computational photography and coffee.
Tell us about your company.
IRIDA Labs is bridging the gap between a camera and the human eye by bringing visual perception to any device. We develop computer vision software, using image processing and machine learning techniques, for any CPU, GPU or DSP/ASP platform (or a combination of them) via heterogeneous programming techniques.
Our product and technology portfolio includes applications in Computational Photography and Visual Perception/Analytics addressing various markets such as mobile devices, action cameras, drones, surveillance, automotive, industrial and robot vision.
How was your company started?
We were three colleagues doing post-doc research at the local university. We decided to try our luck and make money from our ideas instead of just writing papers!
We founded the company in 2009, and our portfolio now addresses the challenge of delivering innovative computer vision solutions while keeping system requirements optimal in terms of power consumption, memory and processing speed.
What advice would you give to other developers?
Entrepreneurs - even the most successful ones - are only human. So go for it; you never know! The voyage is just as rewarding as the final success.
Share a fun fact about the company.
Lots of us play a musical instrument. So when we hire a new employee who also happens to play, we always say, “I’m putting the band back together…”, like Jake and Elwood in The Blues Brothers.
What do you love about embedded and IoT development?
The potential to take an affordable hardware system and build useful software on it that can affect the lives of millions of people.
Where do you and your team get inspiration for your work?
Most of our team members (8 out of 14 people!) hold a PhD. In our academic years, many of us regarded computer vision and machine learning purely as scientific fields, rather than as technologies with business opportunities. Implementing this technology and making it available to the masses inspires us to do what we do.
Who is your technology hero?
Mike Lazaridis, co-founder of RIM, the company behind the BlackBerry. He started with zero capital and became a successful businessman.
When enduring a long day, how do you and your team stay energized? (e.g. energy drinks, chocolate chip cookies, power naps, etc.)
Inspiration and a pleasant working environment are an endless source of energy! But our break room is not short of coffee and treats...
Where do you see the IoT industry in 10 years?
It is a very dynamic market that cuts horizontally across many other end markets. It might not reach the predicted 50bn units, but over the next ten years tens of billions of devices will find their way into homes, factories, cars, and so on.
What projects are you working on using Qualcomm technologies?
IRIDA Labs’ business model is to offer computer vision apps business-to-business rather than at retail through app stores. We have implemented:
1. A couple of astonishing computational photography apps aiming to turn a smartphone’s camera into a DSLR: video stabilization (IRIS-ViSTA), low-light enhancement (IRIS-EnLight) and super-resolution (IRIS-HyperView) are just some examples of software featuring this kind of functionality. How can you optimally process images and videos without being able to perceive and understand them as a human does?
Visual perception through machine learning is an obligatory feature of any software that we make. In every application, computational efficiency and low power consumption are of paramount importance. To this end, we rely on the code optimization features of the Snapdragon™ LLVM Compiler to generate optimal code. We also occasionally use FastCV™ and the Snapdragon SDK for Android to rapidly prototype key CV functionality and compare performance. In our applications, we employ heterogeneous computing techniques, which involves off-loading the computationally intensive parts of our systems to the Adreno™ GPU and Hexagon™ DSP. We use the Adreno SDK and Hexagon SDK to develop code for these units and to manage the overall partitioning of the code.
Another valuable tool for this task is the Symphony™ SDK, which allows easy integration of heterogeneous system elements and shortens development time. Power consumption is crucial for mobile and IoT applications, and we use Symphony’s power management API to optimize the power consumption of our code.
We use Trepn™ Profiler to analyze the computational load and power consumption of key individual units (CPU cores, GPU) and to optimize overall performance. Real-time implementation and continuous testing are other key elements of our development procedure. We find the Snapdragon MDP and DragonBoard™ 410c valuable tools for these tasks.
2. Apart from computational photography apps, autonomous visual perception apps form another major product line, featuring machine learning as well as deep learning technology. These apps include video face tagging (IRIS-FaceTag), automatic photo annotation, and object detection. They are not available on Google Play, but if you are interested, send us an email and we will be happy to send you a demo!
[Image: low-light correction example]
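IRIS-EnLight’s actual algorithm is proprietary and not described here. As a rough, hypothetical sketch of the general idea behind low-light enhancement, the snippet below applies simple gamma correction to a grayscale tile (all names and values are illustrative, not IRIDA’s code):

```python
# Illustrative only: naive gamma-based low-light enhancement.
# This is NOT IRIDA Labs' algorithm; IRIS-EnLight is proprietary.
def enhance_low_light(pixels, gamma=0.5):
    """Brighten an 8-bit grayscale image via gamma correction.

    pixels: list of rows of ints in [0, 255].
    gamma < 1 lifts shadows while compressing highlights.
    """
    return [
        [round(255 * (p / 255) ** gamma) for p in row]
        for row in pixels
    ]

# A tiny 2x2 "dark" tile; every output value stays within [0, 255].
dark = [[10, 40], [90, 160]]
bright = enhance_low_light(dark, gamma=0.5)
```

Real products combine far more than a tonal curve (denoising, local contrast, multi-frame fusion), but the gamma step shows why such per-pixel work is a natural candidate for offloading to a GPU or DSP.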
What Qualcomm technologies are featured in your projects?
Snapdragon LLVM Compiler
Snapdragon Mobile Development Platform (MDP)
Snapdragon SDK for Android
We occasionally use all of these products in our development; exactly how we integrate each into our products varies from product to product. However, since our main goal is to build heterogeneous processing code, we find Symphony a valuable and powerful tool for carrying out this task. Through properly devised interfaces, Symphony facilitates offloading critical code parts to the various computing units.
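The interview does not include Symphony code, and Symphony’s real API is not reproduced here. As a rough sketch of the partitioning idea only, the hypothetical snippet below splits a frame into tiles and dispatches them in parallel, with thread-pool workers standing in for compute units (CPU/GPU/DSP):

```python
# Illustration of the partitioning idea only; Symphony's actual API differs.
# Each submitted task stands in for work dispatched to a compute unit.
from concurrent.futures import ThreadPoolExecutor

def cpu_kernel(tile):
    # Stand-in for lightweight control code kept on the CPU.
    return [p + 1 for p in tile]

def dsp_kernel(tile):
    # Stand-in for a compute-intensive filter offloaded to a DSP.
    return [p * 2 for p in tile]

def process_frame(frame, tile_size=4):
    # Partition the frame into tiles, then dispatch tiles to different
    # "compute units" in parallel, as a heterogeneous runtime would.
    tiles = [frame[i:i + tile_size] for i in range(0, len(frame), tile_size)]
    with ThreadPoolExecutor() as pool:
        # Alternate tiles between the two kernels to mimic load balancing.
        futures = [
            pool.submit(dsp_kernel if i % 2 else cpu_kernel, tile)
            for i, tile in enumerate(tiles)
        ]
        out = []
        for f in futures:
            out.extend(f.result())
    return out

result = process_frame(list(range(8)))
```

The design point this mirrors is that the partitioning logic (tiling, dispatch, result gathering) is separate from the kernels themselves, so a kernel can be retargeted to a different unit without touching the surrounding code.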
How do Qualcomm products assist in the development of your projects?
A key aspect of our technology is the super-optimization of our code. This way, we can provide real-world, functional computer vision software that doesn’t drain the battery or hit the thermal limit of any device. To that end, software that contributes to this code optimization and helps us analyze the computational load is very important to us. In addition, development boards such as the DragonBoard or Snapdragon MDP help us prototype our software more efficiently, since they guarantee access to the various computing resources (for example, Hexagon) and feature pre-installed analysis software.
As a follow-up question, did the use of this Qualcomm technology help to overcome any specific problems your team was facing during development?
In the initial stages of our development efforts, before we became Qualcomm Snapdragon gurus, we enjoyed using MARE (now Symphony) for partitioning our code and parallelizing it across the various CPU cores of the Snapdragon processor. We then used this software to manage power consumption and keep it within the limits of our specifications. We also find the Hexagon DSP a valuable ally in the battle for low power consumption! By using the Hexagon SDK, we managed to develop super-efficient software.
Did using this Qualcomm technology speed up your development process?
We are power users of software tools from the Qualcomm Developer Network in general, but we have especially enjoyed using the Symphony SDK for some time now. By using this software, we could efficiently partition and optimize our code for Snapdragon processors, and improve power efficiency via the power management API.