
2020 STEX25 Accelerator Startups Day 1 - Startup Lightning Talks with Q&A, Session 1

Video details
Abhijit Ghosh
VP of Engineering, Akasha Imaging
Andy Wang
Founder & CEO, Prescient Devices
Boaz Efroni Rotman
VP of Marketing & Business Development, Lightelligence
Tom Baran
Cofounder & CEO, Lumii
Interactive transcript
MARCUS DAHLLOF: Good morning, good afternoon, good evening, and welcome to day one of the STEX25 showcase. Today we're going to have eight companies from the realms of robotics, IoT, advanced manufacturing, and AI. I'm very pleased to announce that we have over 750 attendees, a number that has gone up by 200 over the last two days. When I looked over the weekend, there were representatives from 150 multinational corporations and from 36 different countries.
STEX25 is our accelerator program, a 12-month program that features 25 of the most prominent MIT startups. I want to spend a moment detailing the quality of this group of companies. We have eight of them with us today.
Over the last three and a half years, we've had 74 startups be part of STEX25. And when we looked at the composition of their founding teams, we found that 46% have at least one MIT faculty member, 55% at least one MIT PhD, 61% at least one MIT master's or higher, and 31% at least one MIT bachelor's. So the founding teams of these startups are arguably world-class, with enormous technical expertise.
Another interesting aspect of the startups is that 43% have founders from more than one MIT department or school. This is very typical of MIT: the cross-pollination of disciplines to create superior solutions. Together, they've raised over $2.2 billion in capital. We know there are a number of unicorns in this group, and there have been a number of acquisitions.
The agenda for today is as follows. We're going to start with some lightning talks. Each startup is going to present for four minutes. There's going to be about two minutes of Q&A. The first batch is going to include Akasha Imaging, Prescient Devices, Lightelligence, and Lumii. Then the same four startups are going to be part of a panel discussion. The topic for the panel is going to be startup and corporate partnerships and collaborations. And the goal there is to share lessons learned and best practices.
We're going to repeat that same format with the second batch of startups. And those are Vecna Robotics, Everactive, Profit Isle, and Realtime Robotics. We aim to wrap up around 12:35 PM Eastern Time.
In terms of audience interaction, there are a couple of ways for you to engage. Number one is the Q&A feature. This is where you can ask the startups questions related to their presentations, or ask questions of the panel. Make sure you enter those questions as soon as you think of them, and then upvote the good ones. We're going to do our best to get to as many of these questions as we can, starting at the top, with the questions that have the most votes.
We're likely not going to get to all the questions, so there's also the chat feature. This is where you can follow up with additional questions or with questions that went unanswered. We're asking the startups to monitor the chat and answer in the chat, so you can think of it as a little bit of a Twitter feed within Zoom. Finally, there is a request-an-intro poll: you select the startups you want to meet, and your ILP contact will connect you after the event.
And then finally, before we start, I want to spend a minute on what I call the mindset minute. This is a very fast-paced event, and we will not cover anything in depth. Rather, the objective is to be the spark for the beginning of a conversation and the beginning of a partnership. If you have any comments or feedback, or want to engage generally, my email is marcusd@mit.edu.
And so with that, we're going to go to our first presenter. It's Abhijit Ghosh from Akasha Imaging. He's the VP of engineering there.
ABHIJIT GHOSH: Good morning, good afternoon, good evening, wherever you may be. I'm Abhijit Ghosh. As Marcus said, I'm the VP of engineering at Akasha Imaging. We are an imaging and artificial intelligence company targeting manufacturing automation and inspection. Our connection to MIT is through Achuta Kadambi and Ramesh Raskar, who both hail from the MIT Media Lab.
So let's talk about the problem with manufacturing automation. Only about 15% of industrial robots use vision systems today. Why is that number so low? Why are most of these robots using fixtures? It's because manufacturing is filled with optically challenging objects: shiny metals, black rubber, transparent materials such as glass or plastic. Current vision systems struggle with such optically challenging objects.
Also, when lighting conditions change, which happens often on factory floors, these systems often need expensive recalibration and downtime, which brings the line down. Some of these systems also use very expensive hardware, such as lasers and complicated lenses, which increases their cost. All of this leads to long sales cycles, slow adoption, and often frustrated customers.
We are using a new modality of light. We are using the physics of light, multi-view geometry, spectral imaging, and deep learning to create a system that is much more robust. I'll give you an example here. On the left side of the screen, where my cursor is, you see a Styrofoam cup scanned with existing technology. As you can see, the depth is very coarse.
We can refine that to the depth map seen on the right side, which has high spatial resolution as well as high depth resolution. This is done by passively taking a few pictures of the cup and constructing a dense surface normal map of the object being imaged. The technology is robust to different kinds of materials: we can handle shiny metals, black rubber, and transparent objects such as glass or plastic. We are robust to lighting changes, because we are just using the physics of light. And we use off-the-shelf components, so our hardware cost is actually significantly lower than existing solutions.
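Akasha's reconstruction method wasn't detailed in the talk, but to make the idea of recovering surface normals from a few images concrete, here is a minimal sketch of classic Lambertian photometric stereo. It is an analogy only, since it varies lighting rather than viewpoint, whereas Akasha's approach is passive:

```python
import numpy as np

# Classic Lambertian photometric stereo: the same pixel observed under a
# few known lighting directions yields the surface normal by least squares.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])        # three lighting directions (rows)

def estimate_normal(intensities):
    """intensities: the same pixel's brightness under each light in L."""
    g, *_ = np.linalg.lstsq(L, intensities, rcond=None)
    return g / np.linalg.norm(g)       # unit surface normal

# A pixel tilted toward the second light appears brightest under it.
print(estimate_normal(np.array([0.8, 0.9, 0.6])))
```

Repeating this per pixel yields the kind of dense surface normal map described here.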
I'll give you a demonstration of this technology with this robotic bin picking system. Here we are picking an optically challenging object: transparent balls. Where my cursor is is our camera system, looking down at the bin of transparent objects, and you see the robot picking the objects. What I want to draw your attention to is the bottom of the screen, where I show the segmentation results of both the Akasha Imaging system and a conventional imaging system.
I've paused the video here. You can see that the plastic balls are very clearly segmented in the Akasha Imaging solution, while in the conventional imaging solution the segmentation is very bad: it bleeds all over, and the boundaries are not clear. If the boundaries are not clear, you cannot locate the object in 3D space, and therefore you will not be able to pick it up.
Here is another example of our technology in action: a black rubber tire, another optically challenging material, which has a hairline crack in it. If we zoom into the area of the hairline crack, where my cursor is, you cannot see the crack in an RGB image. Next to it is a color-coded surface normal image of the same tire, where the direction in which the surface normal points is encoded as color. In that image you can easily see the crack, because the crack causes a change in the surface normal.
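To illustrate why the crack pops out in this encoding, here is a small sketch, illustrative only since Akasha's pipeline wasn't disclosed, of color-coding a normal map and flagging sharp changes in normal direction:

```python
import numpy as np

def normals_to_rgb(normals):
    """Map unit surface normals (H, W, 3) to RGB: each component in
    [-1, 1] becomes a color channel in [0, 255]."""
    return ((normals + 1.0) * 0.5 * 255).astype(np.uint8)

def normal_discontinuity(normals, max_angle_deg=20):
    """Flag pixels where the normal direction swings sharply between
    neighbors: the signature a hairline crack leaves in the normal map."""
    # Dot product between each pixel's normal and its right/down neighbor.
    dot_x = np.sum(normals[:, :-1] * normals[:, 1:], axis=-1)
    dot_y = np.sum(normals[:-1, :] * normals[1:, :], axis=-1)
    # A small dot product means a large angular change: a candidate defect.
    thresh = np.cos(np.deg2rad(max_angle_deg))
    mask = np.zeros(normals.shape[:2], dtype=bool)
    mask[:, :-1] |= dot_x < thresh
    mask[:-1, :] |= dot_y < thresh
    return mask
```

A flat surface is a uniform color in this encoding, so even a crack that is invisible in RGB shows up as a sharp color edge.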
Often these defects are not even visible to the human eye; manual inspectors use their fingers and touch to identify them. We can identify these defects using vision systems. We are going to be in pilot deployment in early 2021 with our robot-agnostic perception platform, and we are working with a tier one automotive manufacturer right now. We're calling on automotive manufacturers, plastic and rubber manufacturers, and electronics manufacturers for applications such as pick and place, end-of-line inspection, sorting, welding, and machine tending. And that's it.
SPEAKER 1: Thanks, Abhijit. So a couple of questions for you from the audience--
MARCUS DAHLLOF: So the question is, what type of imaging sensor do you use? Is RGB-D required, or could you use, say, a laser scanner or a stereo camera?
ABHIJIT GHOSH: So we do not use laser scanning. We use RGB sensors; the depth is something that we infer from our processing. We use the same sensors that are used in cell phone cameras.
MARCUS DAHLLOF: Got it. Next question-- how long does it take to process an image?
ABHIJIT GHOSH: Typically, we can go as high as 30 frames per second. But for most industrial applications, cycle times are on the order of 200 to 500 milliseconds.
MARCUS DAHLLOF: Could you talk a little bit more about the underlying software technology?
ABHIJIT GHOSH: The underlying software technology is a combination of imaging techniques: the physics of light, deep learning, spectral imaging, and multi-view geometry. They come together in very interesting ways, and how we use these technologies together is basically our competitive differentiation. Unfortunately, I don't think we have enough time to go into the details of the underlying software.
MARCUS DAHLLOF: Right, and I think, actually, at this point, we're going to have to go to the next speaker. There are many great questions here. So I'm going to encourage everyone to try to move that over to the chat discussion. And then we're going to go to our next speaker, Andy Wang, founder and CEO of Prescient Devices.
ANDY WANG: OK, great. My name's Andy Wang. I'm the CEO of Prescient Devices. We actually have quite a number of MIT alums as well as MIT professors involved in the founding team. What we do is provide an IoT development platform for enterprises. Increasingly, enterprises are hiring engineers and data scientists in-house to integrate IoT and AI with core products and core processes. This is the only way to build unique competitive advantages into those products and processes.
But the problem is that, as most of you know, IoT technology infrastructure is quite complex, spanning sensors, embedded systems, security, communication, et cetera, and this slows down adoption in enterprises. So we provide a technology platform that enables enterprise engineers and data scientists to develop IoT applications without worrying about the underlying technology infrastructure.
The way we do that is by virtualizing the entire technology infrastructure behind design automation software. This design automation software is the first of its kind to allow users to implement an entire IoT system in a single design. This means you can directly program your edge devices and your cloud functionality, and you can connect any device to any other device. This not only simplifies the design process, it enables users to develop very powerful applications, which is the end goal. Ultimately, users are able to focus on extracting value from the data, which is where the value lies for enterprises, rather than spending most of their time maintaining the technology infrastructure behind the application.
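For a sense of the plumbing such a platform abstracts away, here is a minimal hand-rolled sketch of an edge device publishing sensor readings toward the cloud over MQTT. The broker address, topic, and sensor function are placeholders, and on a design automation platform this wiring would be drawn rather than coded:

```python
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

BROKER = "broker.example.com"   # placeholder broker address (assumption)
TOPIC = "plant1/line3/temperature"

def read_sensor():
    """Stand-in for a real sensor driver on the edge device."""
    return {"ts": time.time(), "temp_c": 21.7}

# Publish one reading per second; any cloud service subscribed to the
# topic receives each message and can store or analyze it.
for _ in range(10):
    publish.single(TOPIC, json.dumps(read_sensor()), hostname=BROKER)
    time.sleep(1)
```

Multiply this by security, provisioning, embedded builds, and cloud ingestion, and the appeal of keeping the whole system in one design becomes clear.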
One of the major uses of the technology is building digital twin systems. A digital twin is a digital representation of a product or process, and you can use that digital information to monitor, predict, and eventually improve the product or process. This is going to become a core competency for most enterprises going forward, and our platform enables the continuous development and integration of such digital twin systems in-house.
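As a toy illustration of the concept, and not Prescient's product, a digital twin can be as simple as an object that mirrors a device's telemetry and flags departures from expected behavior:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Toy digital twin of a pump: mirrors telemetry and flags drift."""
    expected_flow: float          # nominal flow rate, liters/min
    tolerance: float = 0.1        # allowed fraction of nominal drift
    history: list = field(default_factory=list)

    def update(self, measured_flow: float) -> None:
        # Mirror: record the physical device's latest reading.
        self.history.append(measured_flow)

    def needs_attention(self) -> bool:
        # Monitor: compare the latest reading to the nominal value.
        if not self.history:
            return False
        drift = abs(self.history[-1] - self.expected_flow) / self.expected_flow
        return drift > self.tolerance

twin = PumpTwin(expected_flow=50.0)
for reading in [50.2, 49.8, 43.0]:   # telemetry from the physical pump
    twin.update(reading)
print(twin.needs_attention())        # True: flow drifted more than 10%
```

Real digital twins add physics models and prediction, but the monitor-compare-improve loop is the same.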
So currently, our v1.0 product is released, and we are looking for customers and pilots. We encourage you to try our software and see how much easier and faster it makes it for your team to work with IoT. We also have a lot of very good information on our blog, drawn from the many adoption cases we have seen, so I would encourage you to read that as well. For more information, please contact me at my email, which is down here. Thank you.
SPEAKER 1: We have a question for you. What technology experts should enterprises hire in-house to advance IoT adoption?
ANDY WANG: What we've seen is that a lot of enterprises are hiring data scientists. This is because data, or rather the value behind the data, is the most important asset, so hiring data scientists gives enterprises the biggest value. But the problem is that data scientists, and even IoT engineers, have trouble managing the entire IoT technology infrastructure, which you need in order to get the data. So we manage that part. We enable the IoT engineers and data scientists to work on the valuable and unique parts for the enterprise.
SPEAKER 1: Great. And is your platform solely a software as a service solution? Can you talk a little more to that?
ANDY WANG: That's a great question. We actually have a lot of experience supporting end-to-end solutions. We built a previous startup that deployed over half a million devices, and we built everything from the hardware all the way to the cloud. The way we see it, a lot of enterprises need help getting the sensors, the data acquisition, and a lot of other things set up, and we help them with that.
And of course, over time, this becomes a fairly standard set of systems, right? So the enterprise can focus on the data part while we take care of everything else for them.
SPEAKER 1: Great, thank you.
ANDY WANG: Thank you.
BOAZ EFRONI ROTMAN: So my name is Boaz Efroni. I'm the VP of product management and business development at Lightelligence, where we are reinventing computing for AI using integrated photonics. Our founders are all MIT alumni, including one professor who is still involved with the company.
One of the things that's happening is that the number of AI applications that use machine learning is growing strongly every year. The problem is that they need to run on hardware, what is known as AI inference or AI training hardware. That hardware is usually electronic, and it takes about three to four years for each new generation to get smaller and faster.
The AI market today, in applications and data, is doubling every year. That creates a problem for enterprises: they need to stock up on more and more servers, which means problems with space, system cost, and electricity cost.
Now if you look at what hardware is running AI inference and training today, you see everything from CPUs, for flexibility, through GPUs, FPGAs, and ASICs. We want to change that. We want to leapfrog the limitations of process geometry and clock speed by providing a new photonic ASIC: an integrated photonic engine, basically a photonic chip, that fundamentally goes beyond everything that's out there.
Now what can you do with that? Throughput becomes 20x to 30x what electronics provide in the same area. There's no heat dissipation from the matrix multiplication, which is done in photonics, enabling us to use very sophisticated 3D packaging that is very hard to do with electronics because of heat dissipation. And of course, the speed of light means very, very low latency. These are the three major contributors to the success of our technology.
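The core idea can be sketched in a few lines of simulation. This is a textbook toy model, not Lightelligence's actual design: a Mach-Zehnder interferometer encodes a 2x2 unitary matrix in phase settings, the matrix-vector multiply happens as light propagates, and meshes of such elements compose into larger matrices:

```python
import numpy as np

# 50:50 beam splitter: a fixed, lossless 2x2 unitary on two waveguide modes.
BS = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                  [1j, 1]])

def phase(theta):
    """Phase shifter on the first waveguide: the tunable 'knob'."""
    return np.array([[np.exp(1j * theta), 0],
                     [0, 1]])

def mzi(theta, phi):
    """Mach-Zehnder interferometer: splitter, internal phase, splitter,
    external phase. Meshes of these realize arbitrary matrices."""
    return phase(phi) @ BS @ phase(theta) @ BS

x = np.array([1.0, 0.0])       # input vector encoded in light amplitudes
U = mzi(theta=0.7, phi=1.3)    # matrix encoded in interferometer settings
y = U @ x                      # the multiply happens as light propagates

assert np.allclose(U.conj().T @ U, np.eye(2))  # lossless: U is unitary
print(np.abs(y) ** 2)          # detected output powers sum to 1
```

Because the multiply is performed by propagation rather than clocked logic, it dissipates essentially no heat in the computation itself, which is the property the 3D packaging argument rests on.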
What we want to do for go-to-market is make it easy for you, our customers, to integrate our hardware into your servers. So we're going to build a PCI card, a drop-in replacement with the same form factor as the GPU or FPGA cards you already buy for your systems today. And we'll provide all the software tools to take the output from your training, compile it, quantize it, and run it on our system. So you'll have a huge advantage.
Now let's look at an example of where this problem occurs. If you look at an enterprise, you're limited by latency, which limits your ability to run very sophisticated algorithms. The space requirement is growing dramatically, because you have to add more and more servers to keep up with your growing customer base. And the system cost, of course, keeps growing.
What we will provide is a 20x improvement in latency, 12x less space, and 10x lower system cost: altogether, a much, much better solution in both footprint and money. So what we're looking for, basically, are companies with what is known as on-prem servers, servers on your own premises. We are already talking to the cloud providers, but we're looking for companies with on-prem servers that would like to keep improving.
So let's talk if you have on-prem servers and you'd like to evaluate us and work with us. We're looking for companies in enterprise, data centers, video surveillance and analytics, finance, 5G, high-performance computing, medical, autonomous transport vehicles, industrial inspection, and robotics. My email is here, and you can contact me through MIT. I'll be more than happy to answer questions and follow up with you. Thank you very much.
SPEAKER 1: Thank you, Boaz. We'll do a couple of audience questions. Do you plan an edge computing device?
BOAZ EFRONI ROTMAN: That's a good question. Our solution is very high-throughput, and it has a higher power envelope than what's usually used inside edge devices. So the answer is no. We're looking more at either cloud and enterprise servers or what are known as edge servers. Anywhere a PCI card is used today is the right place for us to go.
SPEAKER 1: Great, and can you talk a little more about why photonics is coming back now, even though it's been around since the '60s?
BOAZ EFRONI ROTMAN: Sure, that's a great question. In the '60s and '70s, electronics and photonics were racing each other to see who would make the better computer. The von Neumann architecture turned out to let electronics shrink to smaller and smaller geometries and squeeze more and more onto a chip, which made it the more viable solution. Machine learning turned things around by making the computation statistical and linear.
Matrix multiplication is the big thing here. Another thing that happened in recent years is that research and testing showed that for inference and analytics, you can go to lower precision: 8-bit integers instead of full-blown floating point. Both of these together made photonics very viable as a possibility for machine learning. And that's where we come in today.
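Here is a minimal sketch of that lower-precision point, illustrative rather than Lightelligence's toolchain: quantizing a layer's weights to 8-bit integers barely changes the result of the matrix multiply that dominates inference:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)   # trained layer weights
x = rng.normal(size=64).astype(np.float32)         # one input activation

# Symmetric int8 quantization: map [-max|W|, +max|W|] onto [-127, 127].
scale = np.abs(W).max() / 127.0
W_int8 = np.round(W / scale).astype(np.int8)

y_fp32 = W @ x                                     # full-precision result
y_int8 = (W_int8.astype(np.int32) @ x) * scale     # dequantized result

rel_err = np.linalg.norm(y_fp32 - y_int8) / np.linalg.norm(y_fp32)
print(f"relative error: {rel_err:.4f}")            # typically well under 1%
```

Analog hardware has limited dynamic range, so showing that inference tolerates 8-bit precision removed one of the main objections to computing with light.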
SPEAKER 1: Great, thank you. Back to you, Marcus.
MARCUS DAHLLOF: OK, so we're going to go to the next speaker. And just again, a quick reminder to the startups-- there are plenty of questions I'm seeing in the chat and plenty of unanswered questions in the Q&A. So let's please go ahead and try to address those. Thank you. Our next speaker is Tom Baran, co-founder and CEO of Lumii.
TOM BARAN: Great, thanks so much, and thank you so much to STEX25 for having us. I think one of the things that's really fantastic about a lot of the startups you'll see in this year's class is the number of folks working on innovating materials for packaging and consumer packaged goods. We're another one of the companies doing that, but we're doing it in a little bit of a different way.
We are a packaging materials innovation company, but we're not selling any materials at all. What we sell is data that you put into a printing press, and that data makes inexpensive materials look like very, very expensive materials. Our vision is really to transform the giant global printing industry. That's the name of the game: what can we do by putting different data, through software that we provide, into printing presses, taking the standard materials you use every day and making them look like a brand-new type of experience on the shelf?
As a lot of people know, the name of the game in package printing is shelf appeal. If you can get a consumer to take a product off the shelf and hold it in their hand, they're something like 80% more likely to actually put it into their basket. And the tried and true method for doing this is to make the package look shiny and really cool: things like metallics, holographic foils like you'd see on toothpaste boxes, and sometimes lenticular effects.
The challenge with all of these is that they're very difficult to integrate. They're less sustainable, because you're making a laminate of dissimilar materials. They're less robust: they can flake off in shipment. And they're expensive, not only in material cost but in the time it can take to slow down the assembly line to integrate them.
So we have a digital enhancement solution that creates this category of effects, and a new category of effects as well, without any new materials or new equipment. It's dramatically less expensive than using standard materials for packaging embellishment; you can save hundreds of thousands of dollars on a run. It's more sustainable and more elegant. And for a variety of reasons that I can get into, it's more secure than existing packaging solutions, and in fact can be a standalone authentication solution for your products.
Here are some quick examples, which you should see on the screen, of effects we can get. These are two labels on beer cans, but of course it'll work on any type of package; these are simply two very broadly used substrates: a pressure-sensitive material on the left, and a shrink sleeve on the right. What you're looking at is a label that started off clear and is printed on both sides with standard black ink. The secret sauce is our algorithms, which work out where that ink has to go.
And to give a sense of why this is a difficult technical problem, it's about 1,000 times harder than preparing a standard printing plate. But of course, once you prepare that plate, you can produce literally miles and miles of this material.
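The geometry behind two-sided printing on a clear substrate can be sketched simply. This toy parallax model, with an assumed thickness and refractive index, is illustrative only and not Lumii's algorithm: because front and back ink layers line up differently at each viewing angle, jointly patterning both layers lets different angles reveal different content, which is what produces depth and motion effects:

```python
import math

SUBSTRATE_T = 0.2   # assumed label thickness in mm (illustrative value)

def back_layer_offset(view_angle_deg, n=1.5):
    """Apparent shift (mm) of back-layer ink relative to the front layer
    at a given viewing angle, for refractive index n. Snell's law bends
    the ray inside the substrate."""
    theta = math.radians(view_angle_deg)
    theta_inside = math.asin(math.sin(theta) / n)
    return SUBSTRATE_T * math.tan(theta_inside)

# Each viewing angle samples a different alignment of front and back ink,
# so the two layers can be jointly patterned so that different angles show
# different images; the hard part is solving for both patterns at print
# resolution, which is where the factor-of-1,000 difficulty comes from.
for angle in (0, 10, 20, 30):
    print(f"{angle:>2} deg -> offset {back_layer_offset(angle):.4f} mm")
```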
OK, just a few other quick examples to go through. The first one or two here are in the security and authentication space. You can see that for authentication products we can produce very deep apparent depth on thin substrates, along with the appearance of different types of materials in the label substrate all at once. The next one is something we did for a local brewery; you can see the motion in those little triangles at the bottom.
What's notable about this is that it was on a run with a job right before it and a job right after it, so we had to make our stuff work in a very, very constrained time slot, with no change to the material at all and zero increase in cost from a materials perspective. And they were able to get these effects on a shrink sleeve.
So we've gotten a lot of interest from a lot of different folks, actually a lot of people in the beer space. On the left is a company we recently completed a run for: we went from initial art exchange to labels on the can within four weeks, which is really fast. We're eager to explore relationships with CPG brands as well as print producers, and to move quickly. So I guess at this point, I'll turn it over for Q&A.
SPEAKER 1: So can you tell us a little bit about how your strategy is changing now that a lot of the shopping is in person or at stores?
TOM BARAN: Yeah, that's a fantastic question. So one of the things that we found, actually, especially in the world of COVID, is that most of our direct customers, who are the print production houses, are tremendously busy. So because there are more people who are buying things in stores, that shelf experience is critical. It's all about building a better billboard for your product on the shelf. So it's only enhanced things.
SPEAKER 1: Great. And you've talked a lot about-- we've seen some cool things about beer cans. But what other market verticals could this be used for?
TOM BARAN: Yeah, that's a fantastic question. One of the reasons we keep showing beer cans is that although it's one particular market vertical, the labeling technology you use on a beer can, for example the shrink sleeve, applies broadly across any packaging vertical you might imagine. We've talked with different CPG companies who have told us that for every brand in their portfolio, there's at least one product, and sometimes a majority of products, that uses shrink sleeves. That's a 360-degree wrap around any kind of product, from a toothpaste tube, to a household cleaner, to shampoo: a pretty broad range.