Akasha Imaging

Startup Exchange Video | Duration: 4:41
October 4, 2021

    KARTIK VENKATARAMAN: Hi, I'm Kartik Venkataraman. I'm CEO of Akasha Imaging. Our mission is to automate the impossible-- in other words, we are enabling automation of almost any kind of application, whether it is assembly automation or navigation in unstructured environments, where these things haven't been possible until now.

    The unique value proposition that Akasha's technology brings comes down to three things. One is accuracy: the accuracy with which we are able to localize the objects being scanned is much higher than what you get from other technologies-- in many cases, almost 10x higher than with existing incumbent technologies.

    Second, it works in all manner of lighting conditions-- in unstructured as well as structured environments, with almost any kind of material. So the potential applications of this technology are much broader than with existing vision systems. And third is the reliability with which it is able to perform these operations. There are other vision systems that can scan these kinds of difficult materials, but it turns out they are not very reliable-- oftentimes the reliability is 20%, or maybe less than 50%. When you are trying to automate something, that reliability needs to be much higher-- in the high 90s-- and we are able to achieve that with this technology. Those three reasons are the key value propositions of this technology across all of the different markets and applications that we're looking at.

    Manufacturing is one obvious place where you can use it to enable robots to interact with the parts they handle-- to segment the parts clearly, so the robot knows how to pick them up. There are also potential use cases in autonomous navigation, or advanced driver assistance systems, where, because of its ability to perform in adverse lighting conditions, it can be used to detect and segment out road hazards-- potholes, or any kind of object on the road.

    A lidar, for example, today-- if it sees an object on the road that's 100 meters away, and the car is traveling at 70 miles per hour, all it knows is that there's an object there. It doesn't know what kind of object it is. And that information is often useful-- in some cases, vital-- for taking any preventive action. With our solution, you can actually identify the object, because the material type affects the nature of polarization, and that is indicative of the texture of the material-- the make of the material.
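    For context, at 70 miles per hour (about 31.3 m/s) an object 100 meters ahead leaves roughly 3.2 seconds to react, so knowing what the object is matters. The material cue described here is the standard physics of polarization imaging: intensities captured through linear polarizers at several angles yield the linear Stokes parameters, from which the degree and angle of linear polarization follow. Below is a minimal sketch of that computation, assuming a four-angle (0°, 45°, 90°, 135°) polarization camera; the function and variable names are illustrative, not Akasha's actual pipeline.

    import numpy as np

    def linear_polarization(i0, i45, i90, i135):
        # Intensity images captured through linear polarizers at
        # 0, 45, 90, and 135 degrees (hypothetical inputs).
        i0, i45, i90, i135 = (np.asarray(im, dtype=float)
                              for im in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
        s1 = i0 - i90                        # horizontal vs. vertical (Stokes S1)
        s2 = i45 - i135                      # diagonal components (Stokes S2)
        # Degree of linear polarization in [0, 1]; guard against division by zero.
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
        # Angle of linear polarization, in radians.
        aolp = 0.5 * np.arctan2(s2, s1)
        return dolp, aolp

    Per-pixel maps like these behave differently for smooth dielectrics (glass, polished plastic), rough surfaces, and metals, which is what makes them usable as features for classifying the material-- and hence the kind of object-- ahead of the vehicle.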

    So all of these different markets have these different applications, all using the same principle. Our initial focus is manufacturing, because we have a lot of customers there that have reached out to us, trying to solve this problem of automating the assembly of components. There, obviously, we have to mature the technology, which means testing it and developing its robustness. We are also looking at partnerships with other technology providers that can augment our vision system and deliver a much-improved experience for navigation systems.

    And finally, of course, we are talking to customers, and we hope to build on those relationships even more, where they are looking to deploy these kinds of navigation sensors in their autonomous mobile robots-- robots that are currently seeing an explosion in use, due to COVID-19 lockdowns, for all kinds of autonomous operations in warehouses, factories, and delivery bots. So even though autonomous driving systems may take some time to materialize in actual commercial vehicles, in environments such as factories and warehouses this can happen much faster, and that's where we are beginning to see a large amount of interest.
