Embedded Vision: A New Era of Opportunity and Innovation

Christos Kyrkou
5 min read · Jan 18, 2017

Most of us have smartphones and tablets with front- and rear-facing cameras capable of capturing high-resolution still images and high-definition video clips.

Many of us have enjoyed the fresh gaming experiences offered by Microsoft’s Kinect for the Xbox and by Sony’s PlayStation Camera for the PlayStation 4. Some of us may even have a car with a rear-view parking camera, or a more advanced driver assistance system capable of detecting pedestrians and lanes, or even classifying traffic signs such as speed limit signs. What we may not realize is that all of these devices, which have recently become an essential part of our everyday lives, have something in common: embedded vision.

Embedded vision is a technology at the intersection of two well-established fields: Embedded Systems and Computer Vision. This emerging technology aims to incorporate automated image analysis and vision capabilities into any kind of computer-based system that is not a general-purpose computer, but is instead designed to perform specific tasks. Using digital processing and intelligent algorithms, an embedded vision system can extract and interpret meaning from images or video, enabling it to understand the surrounding world and interact with its host environment [1]. Embedded vision can lead to the development of safer, smarter, and more responsive machines that, like humans, see and understand. To put it simply, embedded vision refers to devices or machines that are empowered with the gift of sight and are able to see and understand their environment!

Although computer vision algorithms have been extensively studied in academic research over the last few decades, they could for a long time only be implemented on large, heavy, expensive, and power-hungry computers, restricting their use to a short list of applications such as factory automation and assembly-line inspection, optical character recognition, and military systems [2]. In recent years, however, the emergence of powerful, low-cost, and energy-efficient processors, image sensors, memories, and other semiconductor devices, along with robust computer vision algorithms, has made embedded vision far more accessible and feasible [3]. Nowadays, even inexpensive smartphones and tablets are capable of supplying formidable processing power, including multi-core high-frequency CPUs and embedded graphics processors, on-chip DSPs and imaging coprocessors, and multiple gigabytes of memory. They are also equipped with front- and rear-facing camera sensors that support high image resolutions and frame rates. A major transformation is therefore underway, aiming to integrate vision capabilities into a wide variety of embedded systems and electronic products to make them more intelligent and responsive than before, and thus more valuable to their users.

An embedded vision system comprises four major elements [4], which are illustrated in Figure 1. The image sensor outputs images at some pixel resolution and at a specific frame rate, i.e., the number of image frames the sensor produces per second. These images are processed by an embedded processing device: either a specialized processor implementing a unique architecture or a dedicated accelerator built for image and video processing. Because image sensors generate image/video data in a streaming fashion, processing this output usually requires storing all or parts of each image in memory. Finally, specialized vision algorithms are needed to manipulate and analyze the vast amount of incoming video data and extract visual meaning about the surrounding 3D world.

Figure 1: Elements of an embedded vision system.
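To make the interplay of these four elements concrete, here is a minimal sketch of a complete pipeline in Python with OpenCV. The camera index, blur and threshold parameters, and the simple frame-differencing "algorithm" are illustrative assumptions rather than details from the article; on a real embedded platform the processing stage would typically run on a DSP, GPU, or dedicated accelerator instead of a general-purpose CPU.

```python
import cv2

# Hypothetical minimal pipeline; camera index 0 and all thresholds
# below are illustrative assumptions, not values from the article.
capture = cv2.VideoCapture(0)   # 1. image sensor: streams frames at a set resolution/rate
background = None               # 3. memory: keep a reference frame for comparison

while capture.isOpened():
    ok, frame = capture.read()  # pull the next frame from the sensor stream
    if not ok:
        break

    # 2. processing: reduce and condition the data before analysis
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    if background is None:
        background = gray
        continue

    # 4. vision algorithm: flag pixels that changed since the stored frame
    diff = cv2.absdiff(background, gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(motion_mask) > 500:
        print("Motion detected")  # extracted "meaning" handed to the host system

capture.release()
```

Even this toy loop exhibits all four elements: the sensor streams frames, the processor transforms them, memory holds a reference frame, and the algorithm turns raw pixels into a meaningful event.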

Thanks to continuous technological advances in sensors, processors, memory, and algorithms, embedded vision systems now have the potential to revolutionize a multitude of industries, including medicine, advertising, security, personal health, entertainment, automotive, and more [5].

Embedded vision has high potential in medicine, where it can be incorporated, for example, into medical electronic devices such as intelligent x-ray and MRI systems to help radiologists rapidly and accurately identify image irregularities, eliminating degrading factors like fatigue and distraction that affect human image analysis [6]. Another medical application involves detecting signs of skin cancer in moles on the human body, using a smartphone to capture images of a mole and process them with a complex vision algorithm developed with dermatologists [6]. Other revolutionary medical applications aim to assist blind people by using a camera to recognize real-world objects and communicate them to the user as auditory cues [7].

In automotive, vision-based systems use gesture and face recognition for car safety; the driver, for example, can use a wink to turn the radio on and off, or a movement of the head to change the volume, thus reducing distractions while driving [7]. Moreover, the ability of such systems to extract meaning from images of the road ahead of the car can be used to issue warnings if, for instance, the car begins to leave its lane, approaches another vehicle too closely, or if a bicycle or a pedestrian appears in its path (a simple illustration of lane detection follows below).

Furthermore, active research in the field of embedded vision aims to apply face recognition to advertising, tracking the facial responses of internet users while they view online advertisements [7]. In general, the era of embedded vision has just started. We would need several pages to list the abundant applications that could benefit from this emerging technology, as its potential is fundamentally limited only by our imagination. Moreover, there are great expectations that within the next ten years, embedded vision will broaden and accelerate its penetration into numerous new markets, creating exciting products for a range of applications [2].
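To give a flavor of the algorithms behind a lane departure warning, the sketch below finds candidate lane lines with a classical Canny edge detector followed by a probabilistic Hough transform, again in Python with OpenCV. The input file name, the region-of-interest mask, and all thresholds are illustrative assumptions; production driver-assistance systems use far more robust, temporally filtered pipelines.

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return candidate lane-line segments in a road image.

    A classical Canny + Hough sketch; parameter values are
    illustrative assumptions, not taken from the article.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # find strong intensity edges

    # Keep only the lower half of the image, where the road surface appears
    mask = np.zeros_like(edges)
    h, w = edges.shape
    mask[h // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Fit straight line segments to the remaining edge pixels
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return [] if lines is None else lines

frame = cv2.imread("road.jpg")  # placeholder input image (assumed to exist)
for line in detect_lane_lines(frame):
    x1, y1, x2, y2 = line[0]
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw detected segments
cv2.imwrite("lanes.jpg", frame)
```

A real system would additionally track these segments over time and compare their position against the vehicle’s heading before raising a warning.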

References

1. Berkeley Design Technology, Inc. (2011) Implementing Vision Capabilities in Embedded Systems. [Online]. http://www.bdti.com/private/pubs/BDTI_ESC_Embedded_Vision.pdf

2. ALTERA. (2012) Processing Options For Implementing Vision Capabilities in Embedded Systems. [Online]. http://www.altera.com/technology/system-design/articles/2012/vision-capabilities-in-embedded-systems.html

3. T. Wilson and B. Dipert, “Embedded Vision on Mobile Devices,” Journal of Electronic Engineering, July 2013.

4. AVNET. (2013, June) EMBEDDED VISION: Creating a Next-Generation of Machines that “See”. [Online]. http://www.em.avnet.com/en-us/design/publications/Documents/AXIOM_Embedded%20Vision.pdf

5. Embedded Vision Alliance. (2014) Applications for Embedded Vision. [Online]. http://www.embedded-vision.com/applications/medical

6. Jamie Hartford. (2013, April) The Embedded Vision Revolution. [Online]. http://www.mddionline.com/article/embedded-vision-revolution

7. Argon Design. (2014, January) Embedded vision systems set to revolutionise electronics. [Online]. http://www.argondesign.com/news/2014/jan/22/embedded-vision-systems/


Christos Kyrkou

Research Lecturer at the KIOS Research & Innovation Center of Excellence at the University of Cyprus. Research in the areas of Computer Vision and Deep Learning.