Sony announces the launch of two stacked vision sensors


On September 13, 2021, Sony announced the upcoming launch of two types of stacked event-based vision sensors. Intended for industrial applications, the new devices detect only changes in the subjects they observe and feature the smallest pixels in the industry, measuring just 4.86 μm per side.

Event-based vision sensors asynchronously detect luminance changes for each pixel and transmit only the data that has changed, combined with pixel position (XY coordinates) and time information, ensuring ultra-fast data output with minimal latency.
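
To make that data format concrete, here is a minimal sketch in Python of how a single event might be represented. The field names and layout are illustrative assumptions for this article, not Sony's or Prophesee's actual output format.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One event: which pixel changed, when, and in which direction."""
    x: int          # pixel column (X coordinate)
    y: int          # pixel row (Y coordinate)
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 if luminance increased, -1 if it decreased

# Example: pixel (120, 45) got brighter 2,500 microseconds into the stream.
ev = Event(x=120, y=45, t_us=2500, polarity=+1)
print(ev)
```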

Sony, stacked vision sensors: innovation, technology and method

The two new sensors are based on a stacked structure that uses Sony's proprietary Cu-Cu connections to shrink the pixel size to 4.86 μm, the smallest in the industry. In addition to reducing power consumption and delivering high-speed, low-latency, high-temporal-resolution output, they also offer outstanding resolution for their format. Together, these advantages make it possible to recognize moving subjects in a wide range of environments and contexts.

The milestone is the result of the collaboration between Sony and Prophesee: combining the Japanese company's CMOS image sensor technology with the French partner's proprietary neuromorphic solutions was the key to improving the accuracy and speed of data acquisition, which will have a positive impact on the productivity of industrial machinery.

Given the increasingly complex and diversified needs of industrial production, the use of sensors to extract information from camera images is spreading rapidly, and efficiency has become an essential requirement.

The classic frame-based method transmits images at predefined time intervals set by the frame rate. Sony's new sensors, on the other hand, employ an event-based method, which asynchronously detects changes in pixel luminance and triggers data output complete with pixel position (XY coordinates) and time information. The stacked structure is Sony's patented design: it uses Cu-Cu connections to conduct the charge between the pixel layer and the logic layer, where a signal processing circuit records the luminance variations of the individual pixels. With this configuration, only the pixels that have detected a change in luminance transmit data, allowing the sensor to capture the change with the greatest possible timeliness and temporal resolution and with minimal latency, while limiting energy consumption. Another design advantage is the pixel size, which at 4.86 μm is the smallest in the industry and yields a very compact sensor with excellent resolution.
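
As an illustration of the event-based principle described above, the following sketch compares two synthetic luminance snapshots and emits an event only for pixels whose change exceeds a threshold. The threshold value, the log-contrast rule and the array shapes are arbitrary assumptions for the example; the real sensors perform this comparison in per-pixel analog circuitry, not in software.

```python
import numpy as np

THRESHOLD = 0.15  # assumed contrast threshold, chosen only for this example

def detect_events(prev, curr, t_us, threshold=THRESHOLD):
    """Emit (x, y, t_us, polarity) only for pixels whose luminance changed enough."""
    diff = np.log1p(curr) - np.log1p(prev)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), t_us, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

# Two toy 4x4 "snapshots": only one pixel changes, so only one event is produced.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[2, 1] = 0.9
print(detect_events(prev, curr, t_us=1000))  # -> [(1, 2, 1000, 1)]
```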

Both models can perceive even the slightest variations in vibration and thus flag possible anomalies so that preventive maintenance can be carried out on the equipment, as sketched below. They are equally accurate at detecting the sparks generated by welding and metal cutting, which can help workers understand, for example, when it is time to change tools. Sony's sensors can also improve productivity in applications that are challenging for their frame-based counterparts, and they are well suited to supporting work processes that rely on human experience.
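
As a rough sketch of the preventive-maintenance idea, the event activity observed on a machine part can be compared with a learned baseline; a sustained deviation suggests abnormal vibration. The region of interest, baseline rate and tolerance below are arbitrary illustrative values, not a method documented by Sony or Prophesee.

```python
def vibration_anomaly(event_counts_per_ms, baseline_per_ms=150, tolerance=0.3):
    """Flag 1 ms windows whose event activity deviates from the expected vibration signature.

    event_counts_per_ms: events observed in a monitored region, one count per millisecond.
    """
    flags = []
    for i, count in enumerate(event_counts_per_ms):
        deviation = abs(count - baseline_per_ms) / baseline_per_ms
        flags.append((i, count, deviation > tolerance))
    return flags

# A part that normally produces ~150 events/ms suddenly jumps to 260:
# that window is flagged for a preventive-maintenance check.
print(vibration_anomaly([148, 152, 149, 260, 151]))
```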

The main features

Compact design and high resolution made possible by the smallest pixels

The new sensors adopt Sony's proprietary stacked structure, which uses Cu-Cu connections to conduct the charge between the pixel chip and the logic chip, where a signal processing circuit records the luminance changes of individual pixels. Typically, the pixel unit and the signal processing circuit are placed on the same plane. These products, by contrast, have a unique structure that integrates the smallest pixels in the industry (only 4.86 μm) without sacrificing a sizeable aperture ratio. It is the optimal solution for combining compactness and resolution, ensuring high-precision detection performance.

Extraction of events at high speed and temporal resolution and low latency, with reduced consumption

Sony’s proprietary stacked structure enables an event-based method that asynchronously detects changes in pixel luminance and triggers data output complete with pixel position (XY coordinates) and time information. Unlike frame-based sensors, the new models read only the data that is strictly necessary, saving energy and delivering low-latency, high-temporal-resolution, high-speed output on the order of microseconds.
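
To see why reading only changed pixels saves bandwidth, here is a rough back-of-envelope comparison. All of the numbers (resolution, bit depth, frame rate, event size and event rate) are illustrative assumptions, not Sony specifications.

```python
# Assumed, illustrative parameters -- not Sony specifications.
width, height = 1280, 720          # pixel resolution
bits_per_pixel = 10                # frame-based readout depth
fps = 1000                         # frame rate needed for ~1 ms temporal resolution
bits_per_event = 64                # x, y, timestamp and polarity packed into one word
events_per_second = 2_000_000      # a busy scene, still a small fraction of all pixels

frame_based_bps = width * height * bits_per_pixel * fps
event_based_bps = bits_per_event * events_per_second

print(f"frame-based : {frame_based_bps / 1e9:.1f} Gbit/s")
print(f"event-based : {event_based_bps / 1e9:.3f} Gbit/s")
print(f"ratio       : {frame_based_bps / event_based_bps:.0f}x")
```

Under these assumptions the event-based stream is tens of times lighter than a frame-based stream with comparable temporal resolution, which is where the power and latency savings come from.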

Event filtering function for agile information acquisition

Sony’s sensors are equipped with event filtering functions developed by Prophesee that eliminate unnecessary data, preparing it for various applications. These filters help remove events that are not significant for detection purposes, such as the flickering of LEDs at certain frequencies (anti-flicker filter) or events not attributable to a moving subject (event filter). The filters also make it possible, if necessary, to adjust the volume of data so that it stays below the event rate that downstream systems are able to process (event rate control).
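
The filtering functions described above can be pictured as simple stream operations. The sketch below shows a hypothetical anti-flicker check and an event-rate cap; the frequency band, rate limit and history depth are chosen arbitrarily for illustration, and the actual on-chip filters are hardware blocks with their own parameters.

```python
from collections import deque

FLICKER_MIN_HZ, FLICKER_MAX_HZ = 90, 120   # assumed band to suppress (e.g. LED flicker)
MAX_EVENTS_PER_SEC = 1_000_000             # assumed cap the downstream system can handle

def is_flicker(pixel_history, new_t_us):
    """Very rough periodicity check on one pixel's most recent event time."""
    if not pixel_history:
        return False
    period_us = new_t_us - pixel_history[-1]
    if period_us <= 0:
        return False
    freq_hz = 1_000_000 / period_us
    return FLICKER_MIN_HZ <= freq_hz <= FLICKER_MAX_HZ

def filter_stream(events):
    """events: iterable of (x, y, t_us, polarity). Drops flicker and enforces a rate cap."""
    history = {}                 # (x, y) -> deque of recent timestamps for that pixel
    window = deque()             # timestamps of events passed in the last second
    for x, y, t_us, pol in events:
        hist = history.setdefault((x, y), deque(maxlen=4))
        flicker = is_flicker(hist, t_us)
        hist.append(t_us)
        if flicker:
            continue                               # anti-flicker filter
        while window and t_us - window[0] > 1_000_000:
            window.popleft()
        if len(window) >= MAX_EVENTS_PER_SEC:
            continue                               # event rate control
        window.append(t_us)
        yield (x, y, t_us, pol)
```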

As part of the collaboration with Sony, Prophesee has also made available the Metavision Intelligence Suite, event signal processing software optimized for the sensors' performance. Coupled with Sony's event-based vision sensors, the software makes it possible to develop applications efficiently and find solutions for a wide range of use cases.

Walker Ronnie is a tech writer who keeps you informed on the latest developments in the world of technology. With a keen interest in all things tech-related, Walker shares insights and updates on new gadgets, innovative advancements, and digital trends. Stay connected with Walker to stay ahead in the ever-evolving world of technology.