Jun 14, 2018
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
LTE networks contributed to a rise in mobile video usage with more fluid video experiences and new ways to interact with and consume video. Now, video is finding its way into Industrial IoT (IIoT), one of today’s hottest areas of growth. IIoT brings together vision processing, sensor data, machine-to-machine communication, artificial intelligence, machine learning, and other technologies to automate tasks in commercial and industrial settings. Through this ecosystem of components, video plays an important role, allowing both humans and machines to interact through video data in a myriad of ways.
In my previous blog post on developing expansive video experiences, we looked at some of the key components for handling video on mobile devices. In this blog post, we’ll explore three key elements that developers should consider for smart interactive IIoT video experiences: vision capture and processing, the role of sensors, and feedback.
Vision capture and processing
In the context of IIoT, video capture and processing play a key role in automating tasks. For example, this Smart Airport Demo from Chordant uses a Sony SNC-XM631 camera and the DragonBoard™ 410c development board from Arrow Electronics to identify luggage and to perform facial recognition.
In such IIoT systems, the camera typically sits at the edge of the network (i.e., acting as or connected to a client device) capturing imagery that forms the basis for accurate vision processing. Thus, selecting the right camera is critical when developing an IIoT device. Aside from quality, look for a camera that has the right form factor and reliability for the intended environment, as well as the necessary frame rate and resolution support.
Another key element is an actuator with a camera mount that can control the camera’s position and orientation. Actuators come in a variety of sizes and motor ratings and can be controlled directly from an IoT device.
Bandwidth is another key consideration and often determines whether image data will be processed at the edge or in the cloud. If high bandwidth is available (e.g., through a hard-wired or Wi-Fi connection), then sending large amounts of image data (e.g., every frame of captured video) to the cloud for processing may be practical. If bandwidth is limited or unknown, or the data is sensitive, then captured data should probably be processed at the edge to minimize what is sent to the cloud. For example, our Qualcomm Spectra 280 ISP in the Snapdragon 845 was designed for image processing at the edge, supporting up to seven cameras and Ultra HD capture at 60 fps. Edge processing can be further enhanced through the platform’s support for Caffe2 and TensorFlow.
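The edge-versus-cloud decision above can be sketched as a simple per-frame routing policy. This is a minimal illustration, not a Qualcomm API; the function name, thresholds, and frame-time budget are all assumptions made for the example.

```python
# Hypothetical sketch: decide per frame whether to process at the edge
# or upload to the cloud, based on link bandwidth and data sensitivity.
# All names and thresholds here are illustrative.

def route_frame(bandwidth_mbps, sensitive, frame_bytes):
    """Return 'edge' or 'cloud' for a captured frame."""
    if sensitive:                 # sensitive data stays on-device
        return "edge"
    if bandwidth_mbps is None:    # unknown link quality: be conservative
        return "edge"
    # Estimate upload time in ms; Mbps == kbit per ms.
    upload_ms = (frame_bytes * 8) / (bandwidth_mbps * 1000)
    return "cloud" if upload_ms < 16 else "edge"  # ~60 fps frame budget

print(route_frame(100, False, 100_000))  # fast link, non-sensitive -> cloud
print(route_frame(1, False, 100_000))    # slow link -> edge
print(route_frame(100, True, 100_000))   # sensitive -> edge
```

In practice the policy would also weigh per-byte transfer cost and the latency tolerance of the application, but the shape of the decision is the same.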
The role of sensors
With much of the focus on cool features like high-resolution graphics and fast processors, it’s easy to overlook the sensors that bring data from the real world into the digital world. Sensors measure events from both human interaction and the surrounding environment. They can be hard-wired through interfaces such as GPIO or operate wirelessly through standards such as Bluetooth Low Energy (BLE).
Examples include gyroscopes, accelerometers, and touch screens, which are often used to capture human input, while other sensors detect the state of the environment: temperature, air pressure, sound, light, motion, speed/acceleration, and so on. Our Wine Demo, powered by the DragonBoard 410c, demonstrates how sensors can be used for various measurements in agriculture, manufacturing, and logistics IIoT applications.
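As a minimal sketch of the kind of environmental monitoring the Wine Demo performs, the snippet below checks temperature samples against a storage band. The sample values and thresholds are made up for illustration; a real deployment would read them from a wired (GPIO/I2C) or BLE probe.

```python
# Hypothetical sketch: flag environmental sensor readings that fall
# outside an acceptable band (e.g., wine-storage temperature).
# The samples below are hard-coded stand-ins for real probe readings.

def check(reading_c, low=12.0, high=16.0):
    """Return an alert string if the reading is out of band, else None."""
    if reading_c < low:
        return f"too cold: {reading_c} C"
    if reading_c > high:
        return f"too warm: {reading_c} C"
    return None

samples = [13.5, 11.8, 16.4]   # simulated temperature readings in Celsius
alerts = [a for s in samples if (a := check(s))]
print(alerts)  # ['too cold: 11.8 C', 'too warm: 16.4 C']
```

The same pattern extends to humidity, light, or vibration: sample, compare against the band the deployment requires, and emit an alert only on excursions.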
Sensors can also play a critical role in facilitating interactive IIoT video experiences. For example, a machine could look for data from a motion sensor and then turn on a camera overlooking that area to start video capture. A human operator may also receive an alert and manually control both the position and zoom of that camera.
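The motion-sensor scenario above can be sketched as a small event handler. Everything here is simulated and hypothetical; on real hardware the motion callback would arrive via a GPIO interrupt or a BLE notification rather than a direct function call, and the alert would go to an operator console.

```python
# Hypothetical sketch: a motion event turns on the camera covering that
# zone and queues an alert for a human operator. Camera control and the
# sensor event are simulated for the example.

class Camera:
    def __init__(self, name):
        self.name = name
        self.recording = False

    def start_capture(self):
        self.recording = True
        print(f"{self.name}: capture started")

alerts = []

def on_motion(zone, cameras):
    """Handle a motion event: start the zone's camera, queue an alert."""
    cam = cameras.get(zone)
    if cam and not cam.recording:
        cam.start_capture()
        alerts.append(f"motion in {zone}, {cam.name} recording")

cameras = {"loading-dock": Camera("cam-1")}
on_motion("loading-dock", cameras)   # simulated sensor event
print(alerts[0])
```

A human operator receiving the alert could then take over, repositioning the camera via its actuator mount and adjusting zoom manually.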
When choosing a sensor, start by analyzing the environment it will operate in. Be sure to choose sensors that meet or exceed the demands of that environment, such as extreme temperatures, vibration, or excessive moisture and submersion, and that offer a lifespan matching the expected life of the deployment.
Feedback
Once data is collected and processed, an IIoT application may provide feedback, which can range from alerts for human operators to callbacks and webhooks for other applications.
Feedback can occur through on-screen elements (e.g., alerts, interactive and engaging UIs, vibration in response to a menu selection) and can also include sound, actuators, and external indicators such as LEDs. In Chordant’s Smart Airport Demo, luggage that has not gone through security, or that has been lost after passing through security, triggers events that are transformed into digital alerts and visual LED notifications. This demo could, for example, be extended so that the device handles the alert by analyzing video coverage from other cameras to try to determine the cause.
As this example shows, feedback is based on data accumulated from sensors and/or vision processing and involves both application logic and the right type of feedback mechanism. Here you will want to focus on good UX design for feedback intended for human operators, and on well-architected event handling for feedback consumed by other systems.
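One simple way to architect such event handling is a registry that fans each event out to its feedback channels (LEDs, operator alerts, webhooks). This is a hypothetical sketch; the handler names, event type, and payload fields are invented for the example, and a real webhook handler would POST to a registered URL instead of appending to a list.

```python
# Hypothetical sketch: fan processed events out to registered feedback
# handlers (callback style). Event and payload names are illustrative.

handlers = {}

def register(event_type, fn):
    """Subscribe a feedback handler to an event type."""
    handlers.setdefault(event_type, []).append(fn)

def emit(event_type, payload):
    """Deliver an event to every handler registered for its type."""
    for fn in handlers.get(event_type, []):
        fn(payload)

log = []
register("unscreened_bag", lambda p: log.append(f"LED red at {p['gate']}"))
register("unscreened_bag", lambda p: log.append(f"alert operator: {p['tag']}"))

emit("unscreened_bag", {"gate": "B4", "tag": "LX1234"})
print(log)
```

Decoupling event producers from feedback channels this way lets you add a new mechanism (say, a webhook to a baggage-handling system) without touching the vision or sensor code that raises the event.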
As we’ve seen, interactive video for IIoT involves a number of key elements, including video capture and processing, sensor data, and feedback. Products like the DragonBoard 410c and the Snapdragon 845 mobile platform are designed to provide you with a platform for IIoT applications, but you’ll need to combine them with cameras, sensor hardware, and good design to deliver quality interactive video experiences. We’d love to hear about some of your interactive video projects and any lessons you can share with our QDN community.