NTU Singapore develops an ultrafast camera for self-driving vehicles and drones

17 February, 2017


Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed an ultrafast high-contrast camera that could help self-driving cars and drones see better in extreme road conditions and in bad weather.
NTU’s new smart camera can record the slightest movements and objects in real time, giving it an edge over typical optical cameras, which can be blinded by bright light and are unable to make out details in the dark. Developed by Assistant Professor Chen Shoushun from NTU’s School of Electrical and Electronic Engineering, the camera has been named Celex and is now in its final prototype phase.

The camera records changes in light intensity between scenes at nanosecond intervals, much faster than conventional video, and stores the images in a data format that is many times smaller. With a unique built-in circuit, the camera can instantly analyse the captured scenes, highlighting important objects and details.

A typical camera sensor has several million pixels, which are sensor sites that record light information and are used to form the resulting picture. High-speed video cameras that record up to 120 frames, or photos, per second generate gigabytes of video data, which a computer must then process for a self-driving vehicle to “see” and analyse its environment. The more complex the environment, the slower the processing of the video data, leading to lag between “seeing” the environment and the corresponding actions the self-driving vehicle has to take.
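As a rough back-of-the-envelope illustration of those data volumes, the short Python sketch below estimates the raw data rate of a frame-based camera running at 120 frames per second. The resolution and bit depth are assumed values for illustration, not specifications of the NTU sensor or of any particular automotive camera.

    # Rough estimate of raw data produced by a frame-based high-speed camera.
    # The resolution and bit depth below are illustrative assumptions, not
    # specifications of the NTU sensor.

    width, height = 1280, 720       # assumed frame resolution (pixels)
    bits_per_pixel = 8              # assumed monochrome bit depth
    frames_per_second = 120         # high-speed video rate cited in the article

    bytes_per_frame = width * height * bits_per_pixel // 8
    bytes_per_second = bytes_per_frame * frames_per_second

    print(f"Raw data rate: {bytes_per_second / 1e6:.0f} MB/s "
          f"({bytes_per_second * 60 / 1e9:.1f} GB per minute)")

Even at this modest assumed resolution, a minute of video already runs to several gigabytes, which is the volume a conventional pipeline has to push through its processor.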

To enable instant processing of visual data, NTU’s patent-pending camera records changes in light intensity at individual pixels on its sensor, which reduces the data output. This avoids the need to capture the whole scene as a photograph, increasing the camera’s processing speed.
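The per-pixel change detection can be sketched in software in a few lines. The function below is a simplified analogue of what an event-driven sensor does in hardware: it compares two frames and emits an “event” only for pixels whose intensity changed by more than a threshold. The frame-differencing approach and the threshold value are assumptions made for illustration and are not NTU’s implementation.

    import numpy as np

    def intensity_events(prev_frame, curr_frame, threshold=15):
        """Return (row, col, polarity) tuples only for pixels whose intensity
        changed by more than `threshold`, mimicking an event-style readout.
        Static pixels produce no output, which is where the data saving comes from."""
        diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
        rows, cols = np.nonzero(np.abs(diff) > threshold)
        polarity = np.sign(diff[rows, cols])          # +1 brighter, -1 darker
        return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

    # Toy example: two 4x4 frames where only one pixel brightens noticeably.
    prev = np.full((4, 4), 100, dtype=np.uint8)
    curr = prev.copy()
    curr[2, 3] = 160
    print(intensity_events(prev, curr))   # -> [(2, 3, 1)] : one event, not 16 pixel values

In a static scene nothing is reported at all, which is why this style of readout produces far less data than streaming every pixel of every frame.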

The camera sensor also has a built-in processor that analyses the flow of data instantly to differentiate foreground objects from the background, a process known as optical flow computation. This innovation gives self-driving vehicles more time to react to oncoming vehicles or obstacles.
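As a rough software stand-in for that kind of foreground/background separation, the sketch below computes dense optical flow between two consecutive grayscale frames with OpenCV’s Farnebäck method and labels pixels with significant motion as foreground. This is a conventional frame-based approximation for illustration only, not the algorithm running on the Celex sensor; the motion threshold is an assumed value.

    import cv2
    import numpy as np

    def foreground_mask(prev_gray, curr_gray, motion_threshold=1.0):
        """Label pixels whose optical-flow magnitude exceeds `motion_threshold`
        as moving foreground; everything else is treated as static background."""
        # Dense Farnebaeck optical flow; the numeric arguments are the usual
        # pyramid scale, levels, window size, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion in pixels/frame
        return magnitude > motion_threshold        # boolean foreground mask

    # Usage sketch with two synthetic frames: a bright square shifted 3 pixels right.
    prev = np.zeros((120, 160), dtype=np.uint8)
    curr = np.zeros((120, 160), dtype=np.uint8)
    prev[40:80, 40:80] = 255
    curr[40:80, 43:83] = 255
    print("foreground pixels:", int(foreground_mask(prev, curr).sum()))

Doing this kind of computation on the sensor itself, rather than shipping full frames to an external computer, is what saves the reaction time the article describes.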

Research into the sensor technology started in 2009 and has received $500,000 in funding from a Ministry of Education Tier 1 research grant and a Singapore-MIT Alliance for Research and Technology (SMART) Proof-of-Concept grant.

The technology has also been described in two academic journals published by the Institute of Electrical and Electronics Engineers (IEEE), the world’s largest technical professional organisation for the advancement of technology.

With keen interest from the industry, Asst Prof Chen and his researchers have spun off a start-up company named Hillhouse Tech to commercialise the new camera technology. The start-up is incubated by NTUitive, NTU’s innovation and enterprise company.

Asst Prof Chen expects the new camera to be commercially ready by the end of this year, as the team is already in talks with global electronics manufacturers.

