5 February 2021
by Andrea Gaini

Optical sensor mimics human eye for self-driving cars

A retinomorphic optical sensor that mimics the human eye’s ability to capture changes in the visual field is the focus of research at Oregon State University (OSU), USA.


The device could be used in real-time processing of visual information in autonomous vehicles and robotics, or for projectile tracking.

The sensor uses ultra-thin layers of perovskite semiconductors – materials that have been widely studied for their solar energy potential. The semiconductors can change from strong electrical insulators to strong conductors when placed in light.

‘Most conventional detectors operate by outputting a signal (e.g., a voltage) which is roughly proportional to the intensity of the light that falls upon it,’ says Assistant Professor John Labram, one of the researchers working on the project at OSU College of Engineering.

‘Our device operates on a different principle. It will only output a high voltage if the intensity of light is changing, regardless of intensity level. For example, if a light is switched on and left on over the detector, it will output a brief voltage spike followed by a quick decay back to zero volts. If the sensor is under a bright but constant light, it will output zero volts regardless of the light intensity.’
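The spike-and-decay behaviour Labram describes can be approximated with a simple model. The sketch below is illustrative only, not the researchers’ own model: it assumes the output voltage jumps in proportion to the change in light intensity and then decays exponentially back to zero, with a placeholder gain `k` and decay constant `tau`.

```python
# Minimal sketch of one retinomorphic pixel (illustrative assumptions only):
# the output is assumed to spike in proportion to the change in light
# intensity and then decay exponentially back to zero volts.
import numpy as np

def retinomorphic_response(intensity, dt=1e-3, tau=0.05, k=1.0):
    """Simulate the voltage output of a single retinomorphic pixel.

    intensity : 1D array of light-intensity samples over time
    dt        : sample interval in seconds (assumed)
    tau       : decay time constant in seconds (assumed)
    k         : gain relating intensity change to voltage (assumed)
    """
    v = np.zeros_like(intensity, dtype=float)
    for i in range(1, len(intensity)):
        dI = intensity[i] - intensity[i - 1]          # change in light level
        v[i] = v[i - 1] * np.exp(-dt / tau) + k * dI  # spike on change, then decay
    return v

# A light switched on at t = 0.1 s and left on: the model gives a brief
# voltage spike that decays back to zero, and it stays at zero under the
# constant bright light that follows.
t = np.arange(0, 0.5, 1e-3)
light = np.where(t >= 0.1, 1.0, 0.0)
voltage = retinomorphic_response(light)
```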

The human sense of vision works in much the same way, Labram explains: it is particularly well adapted to detecting moving objects and is comparatively ‘less interested’ in static images. ‘The way the eye processes optical information is incredibly complex, but cells are known to respond more strongly to optical signals that have strong time-dependence than those that are static,’ he says.

‘This is because much of the field of vision is not relevant for mammalian behaviour. For example, mammals will, in general, respond much more strongly to objects travelling towards them than to a static background image.’ 

The sensor stems from the same motivation and is said to be the first such device that works fundamentally like photoreceptors in the eye. The researchers note that it is one part of an ongoing endeavour to make certain types of electronics more human-like.

By using the perovskite semiconductors, the team has developed optical circuitry that prioritises signals from photoreceptors detecting a change in light intensity, similar to the human eye.

Although Labram’s lab can currently test only one sensor at a time, his team has measured a number of devices and created a numerical model to replicate their behaviour. This has enabled them to simulate an array of retinomorphic sensors to predict how a retinomorphic video camera would respond to input stimulus.

A simulation using footage of a baseball practice session demonstrates the expected results. Players in the infield show up as clearly visible, bright moving objects. Relatively static objects — the baseball diamond, the stands, even the outfielders — fade into darkness.
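Applying the same spike-and-decay rule independently to every pixel of a video reproduces this effect in simulation. The sketch below is a hedged illustration rather than the team’s numerical model; the frame rate, decay constant and synthetic moving object are assumptions chosen for demonstration only.

```python
# Hedged sketch of an array of retinomorphic pixels applied to video frames:
# each pixel responds only to frame-to-frame intensity changes, so moving
# objects stay bright while static parts of the scene decay towards zero.
import numpy as np

def retinomorphic_video(frames, fps=30.0, tau=0.1, k=1.0):
    """frames: array of shape (n_frames, height, width), grayscale intensity."""
    decay = np.exp(-1.0 / (fps * tau))                # per-frame decay factor (assumed)
    out = np.zeros_like(frames, dtype=float)
    for i in range(1, frames.shape[0]):
        dI = frames[i] - frames[i - 1]                # per-pixel intensity change
        out[i] = out[i - 1] * decay + k * dI          # same spike-and-decay rule per pixel
    return np.abs(out)                                # response magnitude for display

# Synthetic example: a bright square moving across a static background.
rng = np.random.default_rng(0)
background = rng.uniform(0.3, 0.5, size=(64, 64))
frames = np.stack([background.copy() for _ in range(30)])
for i in range(30):
    frames[i, 20:28, i:i + 8] = 1.0                   # the moving object
response = retinomorphic_video(frames)                # moving square lights up; background fades
```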

‘Conventional sensing technologies, like the chips found in digital cameras and smartphones, are better suited to sequential processing,’ Labram continues. ‘Images are scanned across a 2D array of sensors, pixel-by-pixel, at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, meaning a static image will result in a constant output voltage from the sensor.’
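For contrast, the conventional pixel Labram describes can be modelled as an output that simply tracks the incident intensity, so a static image yields a constant voltage. The gain here is again an illustrative assumption.

```python
# Conventional pixel for comparison: output proportional to intensity,
# so a static scene produces a constant, non-zero voltage.
import numpy as np

def conventional_response(intensity, k=1.0):
    """Output varies directly with the light intensity, sample by sample."""
    return k * np.asarray(intensity, dtype=float)

t = np.arange(0, 0.5, 1e-3)
light = np.where(t >= 0.1, 1.0, 0.0)        # same step of light as before
voltage = conventional_response(light)       # stays high for as long as the light is on
```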

The ability to easily differentiate moving objects from the background in a visual field is valuable in a range of applications. Labram adds, ‘You could summarise [our study] as a single pixel doing something that would currently require a microprocessor.’

The new sensor is said to be suitable for the neuromorphic computers that will power the next generation of artificial intelligence in applications such as self-driving cars, robotics and advanced image recognition. Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain’s parallel networks.

‘People have tried to replicate this in hardware and have been reasonably successful,’ Labram says. ‘However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers.’

So, to reach its full potential, a computer that ‘thinks’ more like a human brain needs an image sensor that ‘sees’ more like a human eye.
