Neural network techniques can dramatically improve low-light imaging for smartphones and smart cars.
Why can you see far better in a darkened room than your smartphone camera can? The camera’s optical sensors actually detect light more efficiently than your eye does, but your brain does much more sophisticated image processing, explains Bo Zhu, co-founder and chief technical officer of BlinkAI Technologies in Boston. Founded in 2018, BlinkAI provides algorithms that dramatically improve image reconstruction, especially under low-light conditions. “We use machine learning to enhance the sensor performance in visually challenging conditions, extending the range of what a camera can see and detect in the real world,” says BlinkAI chief executive officer Selina Liying Shen.
“It turns out that systems with predictive ability, which understand what they're about to see, can rely on much less information and still provide very accurate images and videos,” Zhu says. An MIT Startup Exchange STEX25 company, BlinkAI is talking with potential partners who have applications in smartphones, autonomous vehicles, security and other markets. In many cases, BlinkAI’s software solution may prove much cheaper than alternative options in sensor hardware.
Perceptual learning about light

BlinkAI’s technology is built on Zhu’s postdoctoral work at Harvard’s Martinos Center for Biomedical Imaging, where he applied machine learning to reconstruct medical images from less-than-ideal data produced by devices such as computed tomography (CT) and magnetic resonance imaging (MRI) scanners. The research, published in Nature in 2018, described how a method called AUTOMAP can take “noisy” imaging data from these machines and still produce very clear, high-resolution images that radiologists can accurately read and diagnose from.

The AUTOMAP approach is similar to a human cognitive process called perceptual learning. “Starting at birth, your brain trains itself how to see—how to interpret these raw noisy signals that are coming in from the retina,” says Zhu. “Certain visual features like edges or patterns or textures come up over and over again. Your brain has this amazing predictive ability to very efficiently encode this data, and therefore it doesn't need that much information to create a clear image.”

BlinkAI was founded to commercialize this paradigm more generally in digital imaging. “We put a brain behind the camera sensor so it has more predictive power, improving the performance of the sensor five- to ten-fold,” he says. “Today’s image signal processing algorithms fail in low-light conditions, where the particularities of the image sensor and its very complicated noise profiles become very important, and the traditional algorithms just aren't adaptive and sophisticated enough to handle each sensor in the best way.”

“With AUTOMAP, we train up a neural network to optimally process the raw frames that are coming in from the optical sensors in each individual camera model,” Zhu says. “We’re really able to hone in on the particularities of individual camera models, whose designs and manufacturing processes can be very different. 
We learn the very complicated noise properties that are contaminating the signals that you care about. And we can disentangle the noise from the signal with that understanding, so that our output is much clearer.”

Solving this problem with BlinkAI software will be cheaper than upgrading to more capable sensor hardware. And demand for the approach is growing rapidly as cameras are deployed in ever-expanding numbers in all sorts of devices and vehicles. Their optical sensors are often getting smaller, to fit into more compact cameras and to lower costs. That means the sensors capture even less light, which is all the more of a problem because applications still expect to work with very high-resolution images. “So every pixel has less light intake, and that’s a real problem in dark conditions,” Zhu says.
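The idea Zhu describes, learning a sensor’s noise profile from paired noisy and clean frames and then separating the noise from the signal, can be sketched in miniature. The toy example below is purely illustrative and is not BlinkAI’s actual method: it substitutes a simple least-squares linear operator for the company’s neural networks, and invents a synthetic 1-D signal and noise model solely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 32  # pixels per toy 1-D "frame"
t = np.linspace(0.0, 1.0, n_pix)

def make_frames(n, rng):
    """Synthetic clean frames: clipped sinusoids standing in for the
    recurring edges and textures that make image content predictable."""
    freqs = rng.uniform(0.5, 3.0, n)
    phases = rng.uniform(0.0, 1.0, n)
    return np.clip(np.sin(2 * np.pi * (freqs[:, None] * t + phases[:, None])),
                   0.0, None)

def sensor_noise(x, rng):
    """Toy sensor model: signal-dependent shot noise plus read noise."""
    shot = rng.normal(0.0, 0.15, x.shape) * np.sqrt(np.maximum(x, 0.0))
    read = rng.normal(0.0, 0.05, x.shape)
    return x + shot + read

# Paired training data: clean frames and their noisy sensor readouts.
clean_train = make_frames(5000, rng)
noisy_train = sensor_noise(clean_train, rng)

# "Training": fit a linear denoising operator W by least squares.
# (A stand-in for a neural network trained per camera model on the
# same kind of paired data.)
W, *_ = np.linalg.lstsq(noisy_train, clean_train, rcond=None)

# Held-out frames: the learned operator exploits the recurring signal
# structure to suppress noise it has never seen before.
clean_test = make_frames(500, rng)
noisy_test = sensor_noise(clean_test, rng)
denoised = noisy_test @ W

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"noisy RMSE:    {rmse(noisy_test, clean_test):.4f}")
print(f"denoised RMSE: {rmse(denoised, clean_test):.4f}")
```

Because the clean frames share structure, the fitted operator learns which pixel patterns are plausible signal and which fluctuations are noise, so the denoised error comes out lower than the raw sensor error; a deep network extends the same principle to far richer image statistics.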
Building the markets

BlinkAI, which has raised $2.5 million to date, was named an MIT Startup Exchange STEX25 company this spring. “It can be hard as a startup company to reach out to big corporations,” Shen says. “But the STEX25 program provides a very deep relationship with numerous corporations, and our connections with these corporations are speeding up the overall process of engagement.”

Among potential clients, the startup is talking with a number of smartphone manufacturers. “Right now, cameras are often the primary competitive advantage for smartphones, and our ability to offer much better low-light and video photography is important,” Shen says.

Other opportunities are opening up in automobiles, whose manufacturers are quickly adding cameras to deliver advanced driver-assistance capabilities, and readying designs for fully autonomous vehicles a few years down the road. “Our technologies will enable cars to perform better in low-light settings, as well as visually challenging environments such as rain, snow and fog,” she says. “As more and more of these sensors are being placed on vehicles, there are real safety concerns that begin to pop up,” Zhu says. “The importance of clear images becomes more critical as we begin to have self-navigating vehicles that rely upon good images to make decisions that are often life or death.”
BlinkAI also expects better low-light images to aid navigation in drones and other types of vehicles, and to enhance a wide range of visual security systems. “We’ll find many ways to use machine learning to enhance the sensor performance in visually challenging conditions, and help establish the next generation of intelligent imaging sensors that amplify the critical perception capabilities of future devices and vehicles,” Shen sums up.