10.3.23-Showcase-Osaka-Ubicept

Startup Exchange Video | Duration: 7:01
October 3, 2023

    SEBASTIAN BAUER: Great, thank you very much. So as mentioned, I'm the co-founder and CEO of Ubicept. We are a spinoff from MIT and the University of Wisconsin–Madison, and three of our co-founders have an MIT affiliation.

    I'm sure many of you are working with some kind of camera perception, in all kinds of environments. Think of driver monitoring, pedestrian detection in the dark, which is very challenging, QR code detection, manufacturing inspection, these kinds of applications. All of these use cases suffer from poor camera perception when there is low light, fast motion, or strong scene brightness variations.

    What Ubicept is doing is building computer vision and image processing algorithms on top of a new, emerging class of image sensors that operate in a fundamentally different way than existing sensors. With that, we can make driver monitoring much more reliable, especially in low light and when there is fast motion. Pedestrian detection looks much better when you think of the bounding boxes around the detected pedestrians. And in QR code detection we can do so much better.

    So feel free to stop by our booth later, or check out the videos on our web page, to see these things in motion. This is a type of perception that we think will disrupt the whole imaging and camera perception market in the next couple of years. To give you an idea of existing pain points, and also to quantify things, this is what we call super perception.

    One scenario in which existing cameras have big problems with perception is fast motion, as mentioned before. You want at least 1,000 frames per second in order to reliably detect and track objects with AI. Existing sensors are down here, and you can improve their output with some software, for example for motion deblurring. If you work in camera perception, I'm sure you have heard of event cameras. Their advantage is that they are really fast in what they can image.
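
    To make that frame-rate argument concrete, here is a back-of-the-envelope sketch; the object speed below is an illustrative assumption, not a number from the talk. The per-frame displacement of a moving object shrinks linearly with frame rate, which is what makes detection-to-detection association and tracking feasible at high speed.

```python
# Back-of-the-envelope: how far does an object move between frames?
# The 30,000 px/s speed below is an illustrative assumption.

def per_frame_displacement_px(speed_px_per_s: float, fps: float) -> float:
    """Pixels an object travels between two consecutive frames."""
    return speed_px_per_s / fps

speed = 30_000  # e.g., the edge of a fast-spinning disk crossing the sensor
for fps in (30, 120, 1_000):
    print(f"{fps:>5} fps -> {per_frame_displacement_px(speed, fps):7.1f} px/frame")

# 30 fps -> ~1,000 px between frames, far too large for a tracker to
# associate detections; 1,000 fps -> only ~30 px, which is trackable.
```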

    And we have human perception here for comparison. This whole bar here represents that new sensor technology: SPADs, or Single Photon Avalanche Diodes for short. And this bar on top is the improvement that Ubicept provides on top of that. So you can see that with our type of perception, we are above that threshold here.
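
    For intuition on how these sensors differ from conventional ones: a SPAD pixel registers individual photon arrivals as binary events, and an image is formed by aggregating many short binary frames. The toy simulation below illustrates that general single-photon principle only; it is not Ubicept's actual processing, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SPAD model: in each very short exposure a pixel "clicks" (1) if at
# least one photon arrives, else stays 0. Photon counts are Poisson.
true_flux = np.array([0.01, 0.1, 1.0])  # mean photons per pixel per frame
n_frames = 10_000                       # binary frames to aggregate

photons = rng.poisson(true_flux, size=(n_frames, true_flux.size))
binary_frames = (photons >= 1).astype(np.float64)

# The fraction of frames with a click estimates 1 - exp(-flux); invert it
# to recover the underlying brightness at each pixel.
click_rate = binary_frames.mean(axis=0)
est_flux = -np.log(1.0 - click_rate)

print("true flux:", true_flux)
print("estimated:", np.round(est_flux, 3))
```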

    Similar things hold for low-light perception. We want to be able to see at 10 millilux, that is this line, a hundredth of a lux, or even at 1 millilux. And when you consider event cameras, they are really good at seeing fast motion, but really bad at low-light perception. So there is a trade-off.
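
    For context on those illuminance numbers (approximate textbook reference points, not values from the talk): 10 millilux is roughly quarter-moon illumination, and 1 millilux is close to starlight alone.

```python
# Rough photometric reference points (approximate, order-of-magnitude only).
ILLUMINANCE_LUX = {
    "full daylight": 10_000,
    "office lighting": 500,
    "street lighting": 10,
    "full moon": 0.1,         # ~100 millilux
    "quarter moon": 0.01,     # ~10 millilux
    "starlight only": 0.001,  # ~1 millilux
}
for scene, lux in ILLUMINANCE_LUX.items():
    print(f"{scene:>15}: {lux:g} lux")
```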

    And the third challenging scenario is dynamic range, the ability to resolve super-dark and super-bright regions at the same time. Only this new detector technology, with our processing on top, is always above this line here. So we can operate in low-light, fast-motion, and also high-dynamic-range scenarios.

    This is just to give you an idea of what it looks like. That's the setup we have in our lab: a disk with some toys glued to it that we can make spin. Shown here on the left is the single photon camera with our processing, and next to it an existing low-light camera for comparison.

    When you work in perception, you know low light and fast motion are a problem. This scene is at around 1,000 millilux. The existing camera can somewhat resolve that. But now the disk is spinning extremely fast.

    And this means there is no reliable perception all the time. We can go to even lower light levels. So, sorry, now we have only 100 millilux. OK, that was too fast. One second.

    Exactly, so this is 100 millilux. It's so dark that you don't actually see anything except for that sign here. Here, the existing low-light camera is failing pretty much all the time, and we don't change any camera parameters. And this now is when you have a bright flashlight shining directly into the camera.

    That existing solution is blinded very easily, saturated, whereas in our case we can handle all of these things with the very same camera settings: low light, fast motion, and all kinds of brightness variation, super-bright, super-dark, and everything in between. Here are some more practical use cases, for surveillance applications.

    Pointing this camera outside, detecting a license plate is a big problem, and so is detecting the make and model of a car. Really resolving the shape is extremely important for surveillance applications. This is something where single photon perception does much better.

    Now, how can you work with us? We have our evaluation kit, which consists of a camera from our hardware partner and a processing box or laptop that actually runs our compute. If you know camera perception, these specs are really outstanding, and we can achieve all of them at the same time.

    You can have a high dynamic range camera with 140 dB, but for sure not at 500 frames per second. With us, every frame has this high dynamic range. We're already working with partners here in Japan, in Korea, and in Asia in general. Most of our feedback and engagements come from the automotive and mobility industry. Anytime you have a moving platform in an uncontrollable environment, like a car, drone, truck, or helicopter, that platform is moving, so you have motion.
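
    As a quick sanity check on what a 140 dB figure means, assuming the common imaging convention dB = 20·log10(brightest/darkest); the talk does not state which convention is used:

```python
def dynamic_range_ratio(db: float) -> float:
    """Convert dynamic range in dB to a brightest/darkest intensity ratio,
    assuming the imaging convention dB = 20 * log10(ratio)."""
    return 10 ** (db / 20)

print(f"140 dB -> {dynamic_range_ratio(140):.0e}:1")  # 1e+07:1
# The brightest resolvable region can be ten million times brighter than
# the darkest one in the same frame.
```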

    You want to be able to operate in low light at night, and you also want to be able to handle sunlight reflections, for example. We are definitely looking for more partners for commercial engagements. Naturally, a first step for an engagement is a proof of concept, and then joint development.

    And after that, a large-scale deployment, for pretty much all use cases where camera perception is being used nowadays. So please stop by the booth and see the camera live in action. I'm looking forward to chatting with you. Thank you.
