Developer Blog

Developer of the month: What can machine learning bring to photography? Find out with Nikos Fragoulis of IRIDA Labs

Jan 11, 2017

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

How can we transform the hardware that we already have around us? Qualcomm® Developer of the Month Nikos Fragoulis has some interesting answers.

We can connect more devices than ever. Perhaps the defining character of the Internet of Things boom will come from new kinds of ‘intelligence’ we can give our Things. Embedded and in the cloud, machine learning techniques are bringing new possibilities to our mobile devices.

Nikos Fragoulis knows this well, and is using heterogeneous computing, computer vision and machine learning with his company IRIDA Labs. The aim? To give connected cameras new ways to produce images.

We caught up with Nikos to talk computational photography and coffee.

Tell us about your company.
IRIDA Labs is bridging the gap between a camera and the human eye by bringing visual perception to any device. We develop Computer Vision software, using Image Processing and Machine Learning techniques, for any CPU, GPU or DSP/ASP platform (or a combination of them) via heterogeneous programming.

Our product and technology portfolio includes applications in Computational Photography and Visual Perception/Analytics addressing various markets such as mobile devices, action cameras, drones, surveillance, automotive, industrial and robot vision.

How was your company started?
We were three colleagues doing post-doc research at the local University. We decided to challenge our luck and try to make money out of our ideas instead of just writing papers!

We founded the company in 2009, and our portfolio now addresses the challenge of delivering innovative computer vision solutions while keeping optimal system requirements in terms of power consumption, memory and processing speed.

What advice would you give to other developers?
Entrepreneurs - even the more successful ones - are just humans. So, go for it, you never know! The voyage is just as rewarding as the final success.

Share a fun fact about the company.
Lots of us play a musical instrument. So when we hire a new employee, and they also happen to play, we always say: “I’m putting the band back together…”, as Jake and Elwood do in the Blues Brothers movie.

Face detection

What do you love about embedded and IoT development?
The potential of using an affordable hardware system, and building useful software on it which can affect the lives of millions of people.

Where do you and your team get inspiration for your work?
Most of our team members (8 out of 14 people!) hold a PhD degree. A lot of us, during our academic years, considered computer vision and machine learning only as scientific fields, rather than technological fields with business opportunities. Implementing this technology and making it available to the masses inspires us to do what we do.

Who is your technology hero?
Mike Lazaridis, one of the co-founders of Research In Motion (RIM), the maker of the BlackBerry. He started with zero capital and became a successful businessman.

When enduring a long day, how do you and your team stay energized? (e.g. energy drinks, chocolate chip cookies, power naps, etc.)
Inspiration and a pleasant working environment are an endless source of energy! But our break room is not short of coffee and treats...

Where do you see the IoT industry in 10 years?
It is a very dynamic market, horizontally affecting many other end markets. It might not reach the predicted 50bn units, but in the next ten years tens of billions of devices will find their way to a home, a factory facility, a car, and so on.

What projects are you working on using Qualcomm technologies?
IRIDA Labs’ business model is to offer computer vision apps in a business-to-business fashion rather than in retail through app stores. We have implemented:

1. A couple of astonishing computational photography apps, aiming to turn a smartphone’s camera into a DSLR: Video stabilization (IRIS-ViSTA), Low-Light enhancement (IRIS-EnLight) and super-resolution (IRIS-HyperView) are just some examples of the software featuring this kind of functionality. How can you optimally process images and videos without being able to perceive and understand them as a human does?

Visual perception through machine learning is an essential feature of any software that we make. In every application, computational efficiency and low power consumption are of paramount importance. To this end, we rely on the code optimization features of the Snapdragon™ LLVM Compiler to generate optimal code. We also occasionally use FastCV™ and the Snapdragon SDK for Android to quickly prototype key CV functionality and compare performance. In our applications, we employ heterogeneous computing techniques, which involve offloading computationally intensive parts of our systems to the Adreno™ GPU and Hexagon™ DSP. We use the Adreno SDK and Hexagon SDK to develop code for these units and manage the overall partitioning of the code.

Another valuable tool in this task is the Symphony™ SDK, which makes it easy to integrate heterogeneous system elements and shortens development time. Power consumption is crucial for mobile and IoT applications, so we use Symphony’s power management API to optimize the power consumption of our code.

We use Trepn™ Profiler to analyze the computational load and power consumption of key individual units (CPU cores, GPU) and optimize overall performance. Real-time implementation and continuous testing are another key element of our development procedure, and we find the Snapdragon MDP and DragonBoard™ 410c valuable tools for these tasks.

2. Apart from computational photography apps, autonomous visual perception apps form another major product line. This features machine learning as well as deep learning technology. These apps include Video Face Tagging (IRIS-FaceTag), Automatic Photo Annotation, and Object Detection. They are not available on Google Play, but if you are interested, throw us an email and we will be happy to send you a demo!

Low light correction

What Qualcomm technologies are featured in your projects?

Adreno Profiler
Adreno SDK
Hexagon SDK
Trepn Profiler
Snapdragon LLVM Compiler
Snapdragon Mobile Development Platform (MDP)
Snapdragon SDK for Android
Symphony SDK

We use all of these products at various points in our development, and the exact way we integrate them varies from product to product. Since our main goal is to build heterogeneous processing code, we find Symphony a valuable and powerful tool for the task: through properly devised interfaces, Symphony facilitates offloading critical code parts to the various computing units.
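The core idea behind this kind of heterogeneous partitioning can be sketched in a few lines. The following is an illustrative pure-Python sketch, not the Symphony API: stage names and the per-unit cost table are entirely hypothetical, and a real scheduler would measure costs rather than hard-code them.

```python
# Illustrative sketch of heterogeneous task partitioning (NOT the Symphony API):
# each pipeline stage is dispatched to whichever compute unit is cheapest for it,
# according to a hypothetical relative-cost table.

COSTS = {
    # stage name      relative cost per compute unit (made-up numbers)
    "denoise":       {"cpu": 9.0, "gpu": 2.0, "dsp": 1.5},
    "optical_flow":  {"cpu": 8.0, "gpu": 1.0, "dsp": 3.0},
    "encode_output": {"cpu": 1.0, "gpu": 4.0, "dsp": 2.5},
}

def assign_units(stages, costs):
    """Map each stage to the compute unit with the lowest estimated cost."""
    plan = {}
    for stage in stages:
        unit_costs = costs[stage]
        plan[stage] = min(unit_costs, key=unit_costs.get)
    return plan

plan = assign_units(["denoise", "optical_flow", "encode_output"], COSTS)
print(plan)  # {'denoise': 'dsp', 'optical_flow': 'gpu', 'encode_output': 'cpu'}
```

The design point is simply that the partitioning decision is data-driven: swap in measured per-unit costs (or power figures) and the same loop yields a power-optimized plan instead of a speed-optimized one.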

How do Qualcomm products assist in the development of your projects?
A key aspect of our technology is the super-optimization of our code. In this way, we can provide real-world functional computer vision software that doesn’t drain the battery, or trigger the thermal limit of any device. To that end, software that contributes to this code optimization and helps us analyze the computational code is very important to us. In addition, development boards such as Dragonboard or Snapdragon MDP help us to prototype our software more efficiently, since they guarantee access to the various computing resources (for example Hexagon), and feature pre-installed analysis software.

As a follow-up question, did the use of this Qualcomm technology help to overcome any specific problems your team was facing during development?
In the initial stages of our development efforts – before we became Qualcomm Snapdragon gurus – we enjoyed using MARE (now Symphony) to partition our code and parallelize it across the various CPU cores of the Snapdragon processor. We then used this software to manage power consumption and keep it within the limits of our specifications. We find the Hexagon DSP a valuable ally in the battle for low power consumption! By using the Hexagon SDK, we managed to develop super-efficient software.

Did using this Qualcomm technology speed up your development process?
We are power users of Qualcomm Developer Network software tools in general, and we have particularly enjoyed using the Symphony SDK for some time now. With it, we can efficiently partition and optimize our code for Snapdragon processors and improve power efficiency via the power management API.

Mike Roberts

Senior Director of Global Product Marketing


Related News


3GPP starts study on 5G NR spectrum sharing

The second week of March 2017 was a momentous week for the global standardization of 5G, known as 5G New Radio or 5G NR. The big news was that 3GPP agreed on an accelerated 5G schedule that will enable 3GPP-based large-scale trials and deployments as early as 2019. This development is truly exciting and shows that the industry has come together and is working collaboratively toward the common goal of enabling early enhanced 5G mobile broadband deployments, while still ensuring forward compatibility, to enable the broader 5G vision.

But there were many other important outcomes of the 3GPP meeting, including one in particular that I want to expand on: the new study on 5G NR operating in unlicensed spectrum, both licensed-assisted and stand-alone. A study item is the first step in the 3GPP process of standardizing key technologies, and what makes this study item noteworthy is that this is the first time 3GPP will be studying the development of a cellular technology operating solely in unlicensed spectrum. It is also significant that the 3GPP-approved study includes a wide array of unlicensed spectrum ranges, all the way to 60 GHz also known as mmWave. The study will be led by Qualcomm together with other partners and will run through the beginning of 2018.

You may ask why this is such a big deal. It’s because 5G NR will proliferate around the world more broadly and more rapidly if all spectrum types can be used, especially unlicensed spectrum. Doing so will allow 5G to support more uses and deployment models so that many more entities will be able to enjoy the benefits of 5G in a much broader 5G ecosystem. Using unlicensed spectrum on a stand-alone basis enables a wider variety of new deployment scenarios, such as local area networks in dense deployments, so-called private IoT networks for enterprises or Industrial IoT (explicitly called out in the project descriptions in 3GPP), neighborhood networks, and neutral host deployments (where one deployment serves multiple operators). Examples of where such private IoT networks can be deployed range from factories, ports, and mines to warehouses and smart buildings. Enabling the use of unlicensed spectrum assisted by licensed spectrum will allow mobile operators to aggregate more spectrum to provide extreme bandwidths and more capacity (Figure 1). In other words, consumers will enjoy faster, better broadband if 5G uses unlicensed spectrum.

Apr 26, 2017


Snapdragon Wear 2100 powers high-end fashion smartwatches at Baselworld

Silicon Valley met Switzerland at this year’s Baselworld, the world’s premier event for the watch and jewelry industry, which celebrated its 100th anniversary this year. Several impressive smartwatches made their debut, all touting the Qualcomm Snapdragon Wear 2100 Platform and all powered by Android Wear 2.0. With this reliable platform and OS developed specifically for wearables, it’s no wonder high-end brands are looking beyond basic wearable functions, and combining style with technology to develop chic smartwatches fit for any lifestyle.

The superior SoC for smartwatches, Snapdragon Wear 2100, is an integrated, ultra-low power sensor hub. It’s 30 percent smaller than previous-generation wearable SoCs, allowing OEMs the freedom to develop thinner, sleeker product designs. And because it uses 25 percent less power than its older sibling (the Snapdragon 400), watchmakers can offer even more features and better designs.

The Snapdragon Wear 2100 comes in both tethered (Bluetooth and Wi-Fi) and connected (3G and 4G LTE) versions. The latter allows wearers to do more with their wearables, from streaming music to sending messages to calling a cab, in tandem with — or even without — having to bring their smartphones along.

Each of the touchscreen smartwatches included in this roundup runs Android Wear 2.0, Google’s latest wearable operating system, and can pair with both iOS and Android phones. With Android Wear 2.0, users can personalize their watch faces with chronometer-style complications and create shortcuts to their favorite applications. In addition to the pre-installed Google Fit and calendar apps, more apps can be downloaded directly through the on-watch Google Play store, so wearers can customize their device to their lifestyle.

Android Wear 2.0 brings the Google Assistant to your wrist. Find answers and get things done even when your hands are full. Reply to a friend, set a reminder, or ask for directions. Just hold the power button or say “OK Google”.

Check out some of the Snapdragon Wear powered smartwatches that made a splash at this year’s Baselworld:

Apr 18, 2017


Caffe2 and Snapdragon usher in the next chapter of mobile machine learning

Machine learning, at its core, is a method by which we can turn huge amounts of data into useful actions. Most of the attention around machine learning technology has involved super-fast data processing applications, server farms, and supercomputers. However, far-flung servers don’t help when you’re looking to magically perfect a photo on your smartphone, or to translate a Chinese menu on the fly. Making machine learning mobile — putting it on the device itself — can help unlock everyday use cases for most people.

Qualcomm Technologies’ engineers have been working on the machine learning challenge for years, and the fruits of that work are evident in Qualcomm Snapdragon mobile platforms, which have become leaders in on-device mobile machine learning. It’s a core component of the Snapdragon product line, and you’ll see machine learning technologies both in our SoCs (820, 835, and some 600-tier chipsets) and in adjacent platforms for IoT and automotive.

And we aren’t pushing this technology forward by ourselves. We’re working with a whole ecosystem of tools, savvy OEMs, and software innovators to proliferate new experiences for consumers. These experiences use on-device machine learning, and we could not have conceived of them all by ourselves.

An exciting development in this field is Facebook’s stepped up investment in Caffe2, the evolution of the open source Caffe framework. At this year’s F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook’s open source deep learning framework, and the Qualcomm Snapdragon neural processing engine (NPE) framework. The NPE is designed to do the heavy lifting needed to run neural networks efficiently on Snapdragon, leaving developers with more time and resources to focus on creating their innovative user experiences. With Caffe2’s modern computation graph design, minimalist modularity, and flexibility to port to multiple platforms, developers can have greater flexibility to design a range of deep learning tasks including computer vision, natural language processing, augmented reality, and event prediction, among others.
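The "computation graph" idea mentioned above can be illustrated with a toy evaluator. This is a minimal sketch in the spirit of frameworks like Caffe2, not its actual API: operators are graph nodes, and inference is just walking the graph in dependency order. All names here are hypothetical.

```python
# Toy computation graph: each node is (output_name, op, input_names).
# The list is already topologically sorted, so one forward pass suffices.

graph = [
    ("scaled",  "mul",  ["x", "w"]),       # scaled  = x * w
    ("shifted", "add",  ["scaled", "b"]),  # shifted = scaled + b
    ("out",     "relu", ["shifted"]),      # out     = max(0, shifted)
]

OPS = {
    "mul":  lambda a, b: a * b,
    "add":  lambda a, b: a + b,
    "relu": lambda a: max(0.0, a),
}

def run_graph(graph, feeds):
    """Evaluate each node once its inputs are available."""
    values = dict(feeds)  # start from the externally fed tensors
    for out, op, inputs in graph:
        values[out] = OPS[op](*(values[i] for i in inputs))
    return values

result = run_graph(graph, {"x": 3.0, "w": -2.0, "b": 1.0})
print(result["out"])  # relu(3 * -2 + 1) = relu(-5) = 0.0
```

Because the graph is a static data structure separate from the execution loop, a runtime is free to port it across platforms or hand different subgraphs to different compute units, which is the flexibility the collaboration above is designed to exploit.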

Caffe2 is deployed at Facebook to help developers and researchers train machine learning models and deliver artificial intelligence (AI)-powered experiences in various mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile.

One of the benefits of Snapdragon and the NPE is that a developer can target individual heterogeneous compute cores within Snapdragon for optimal performance, depending on the power and performance demands of their applications. The Snapdragon 835 is designed to deliver up to 5x better performance when processing Caffe2 workloads on our embedded Qualcomm Adreno 540 GPU (compared to CPU). The Hexagon Vector eXtensions (HVX) in the Qualcomm Hexagon DSP are also engineered to offer even greater performance and energy efficiency. The NPE includes runtime software, libraries, APIs, offline model conversion tools, debugging and benchmarking tools, sample code, and documentation. It is expected to be available later this summer to the broader developer community.

Qualcomm Technologies continues to support developers and customers with a variety of cognitive capabilities and deep learning tools alongside the Snapdragon platform. We anticipate that developers will be able to participate in a wider and more diverse ecosystem of powerful machine learning workloads, allowing more devices to operate with greater security and efficiency.

We don’t yet know the full range of applications for the technology, but we can’t wait to see how it’s used by innovative developers around the world.

Sign up to be notified when the Snapdragon neural processing engine SDK is available later this summer.

Apr 18, 2017


Hardware-software convergence: Key skills to consider

Hardware-software convergence, or how hardware and software systems are working more closely together, illustrates how each are empowering (and sometimes literally powering) the other. And in our current development environment, this is happening more than ever. Of course, deep technical skills will be of the utmost importance to navigate this technological trend, but it is also the soft skills we apply to our engineering practices that are as important in determining our success.

What skills do developers need to nurture, and how do you put them to good use? In this piece, we’ll cover three soft skills developers can use to stay ahead of the hardware-software convergence, and share resources to help you grow and maintain those skills.

Creative inspiration

First off: Creative Inspiration. While it’s easy to identify your technical shortcomings and fill those gaps with training and practice, knowing which soft skills to hone can be a lot more complicated. In fact, you could even think of these soft skills as “mindsets,” since they’re more about how you approach a problem instead of just being a tool you use to solve it. For this first skill, it will be important to start approaching challenges antidisciplinarily, rather than relying on existing mental frameworks. That’s what being creative is all about – finding new ways of doing things.

So where do you start? Ask yourself this question: What is the dent you want to make in the universe? Begin from a place of passion – think about what problems and projects keep you up at night, and what issues big or small you want to solve.

Then, understand that creative inspiration is a process. What seems like overnight genius is often the result of many failed attempts (e.g., Thomas Edison’s 1,000 or so attempts at creating the lightbulb), followed by the fortitude to gain a deeper understanding of an issue and then apply your imagination. We particularly like the design thinking method, which encourages starting from a place of inspired empathy and developing knowledge through lean prototyping and iteration. The Stanford d.school has a Bootcamp Bootleg that you can download for a quick-start guide to this design framework.

Apr 17, 2017


Artificial intelligence tech in Snapdragon 835: personalized experiences created by machine learning

As our mobile devices have matured, gaining the ability to connect to the Web, we’ve labeled them as “smart.” But why settle for just smart? Harnessing the power of the Qualcomm Snapdragon 835 processor, developers and OEMs are taking our devices to the next level, creating new experiences with the aid of machine learning. From superior video and security to your own personal assistant, your Snapdragon device has the ability to operate intelligently — outside of the cloud or a Web connection — allowing you to experience your smarter phone in an entirely new way.

Application developers and device manufacturers understand what their users want. They can create a feature or an application that uses machine learning (more specifically, deep neural networks) to improve the performance of a particular task, such as detecting or recognizing objects, filtering out background noise, or recognizing voices or languages. These applications are usually run in the cloud, and depending on the device they’re in, this could be sub-optimal.

The Snapdragon Neural Processing Engine SDK was created to help developers determine where to run their neural network-powered applications on the processor. For example, an audio/speech detection application might run on the Qualcomm Hexagon DSP and an object detection or style transfer application on the Qualcomm Adreno GPU. With the help of the SDK, developers have the flexibility to target the core that best matches the power and performance profile of the intended user experience. The SDK supports convolutional neural networks and LSTMs (Long Short-Term Memory networks) expressed in Caffe and TensorFlow, as well as conversion tools designed to ensure optimal performance on Snapdragon heterogeneous cores.
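The core-selection idea described above can be sketched as a small decision function. This is a hypothetical illustration of the kind of choice the SDK enables, not the actual NPE API; the workload names and selection rules are assumptions made up for the example.

```python
# Hypothetical sketch of choosing a compute target for a neural-network workload.
# The rules mirror the examples in the text: always-on audio favors the DSP,
# large parallel vision networks favor the GPU, everything else falls back to CPU.

def choose_target(workload: str, battery_sensitive: bool) -> str:
    """Return 'dsp', 'gpu', or 'cpu' from a coarse workload description."""
    if battery_sensitive and workload in ("keyword_spotting", "speech_detection"):
        return "dsp"   # low-power DSP suits always-listening audio tasks
    if workload in ("object_detection", "style_transfer"):
        return "gpu"   # highly parallel vision networks suit the GPU
    return "cpu"       # safe fallback for small or unusual workloads

print(choose_target("speech_detection", battery_sensitive=True))  # dsp
print(choose_target("style_transfer", battery_sensitive=False))   # gpu
```

In practice a developer would make this choice from measured power and latency profiles rather than hard-coded categories, but the shape of the decision is the same.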

The Hexagon DSP and its wide vector extensions (HVX) offer an impressive power and performance mix for running neural networks on device. Performance is up to 8X faster and 25X more power efficient than using the CPU, which translates to lower battery consumption overall. In addition to support via the Snapdragon Neural Processing Engine, TensorFlow is directly supported on the Hexagon DSP, giving developers multiple options to run their chosen neural network-powered apps.

Here are a few applications that could be facilitated by Snapdragon 835 on-device machine learning tech:

Photography: Machine learning can aid in scene classification, real-time noise reduction, and object tracking, making it easier to take the perfect shot, or capture video regardless of the conditions.

VR/AR: With machine learning on your device, VR/AR features can operate faster and with less lag, so everything from gesture and facial recognition to object tracking and depth perception feels more immersive.

Voice detection: Your phone’s on-device AI can listen for commands and keywords to help you navigate the data and apps on your device more efficiently, and save power doing so.

Security: With facial recognition software and iris scanning, all operating independently from the cloud, your device can learn to identify, and help protect, you.

Connections: Your Snapdragon device has the ability to filter out distracting background noise during calls for clearer conversations with friends and family.

Qualcomm Technologies’ unique machine learning platform is engineered so devices powered by the Snapdragon 835 can run trained neural networks on your devices without relying on a connection to the cloud. Pretty innovative, right?

Take a look at our previous deep dives into each of the Snapdragon 835 key components — battery, immersive AR and VR, photos and video, connectivity, and security — all of which combine to make the Snapdragon 835 mobile platform truly groundbreaking.

And sign up to receive the latest Snapdragon news.

Apr 13, 2017
