
10.5.23-Showcase-Tokyo-Themis_AI

Startup Lightning Talk
SAM YOON: Hello, everyone. My name is Sam Yoon, Head of Business Development at Themis AI. I have a question for you all. Put your hand up if you've used Google before. A lot of hands. So we all trust AI for search engines.
Now put your hand up if you've used ChatGPT before. Wow, that's still a lot of hands. So we trust AI for, in my case, a best friend. Put your hand up, lastly, if you would trust an autonomous vehicle right now. Look around. Not a lot of hands.
So there is a problem in how humanity leverages AI for everyone's benefit. Deploying trustworthy AI has fundamental problems. And this is where Themis wants to add value.
Currently, there are no good solutions to reliably control the accuracy of general AI outputs. Fine-tuning large language models is extremely expensive and takes a long time. And real-time detection of errors in AI outputs is very cumbersome for many of the people deploying AI technologies.
We're excited to take over five years of MIT research in the AI uncertainty space and bring it to Japan, which we know is trying to take an active role in shaping how the world will look with AI. You recently hosted the G7 summit here in Japan. And as part of that, the Hiroshima Process was a big headline from the event, where Japan took the lead in bringing together the most powerful countries in the world to set an example of how we can use AI more responsibly.
Here's one use case of how our technology can add real benefit to how AI helps humanity. When I asked that question before about autonomous vehicles, not a lot of you put your hands up. And that is very fair. It's because there are a lot of problems with its current state. And we have a lot of amazing people in this room, and in the start-up booths outside, who are working on that problem.
Using our uncertainty-aware detection algorithm, we see significantly enhanced autonomous vehicle performance. Another example is hallucination in generative models. You may have heard of the very famous open-source image generator called Stable Diffusion. We worked with them to wrap their existing models and identify where their generated images were hallucinating.
So you can understand, once we start using generative AI in areas such as autonomous driving, health care, or military applications, it's very, very important for us to have a good grasp of where it is going wrong. And our technology provides a solution to that.
I want to introduce to you Capsa. In a nutshell, the beauty of Capsa is that it can wrap any existing machine learning model you have, along with any data it serves. With just a few lines of code, accessed through our API, it will make your AI model uncertainty-aware. That saves multiple full-time employees two months of work converting your current models into uncertainty-aware ones. We can do that. We put more than five years of MIT research at your fingertips.
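To make the "wrap any model" idea concrete, here is a minimal sketch of what a model-agnostic uncertainty wrapper can look like. This is not Capsa's actual API; the class name, the ensemble-by-perturbation trick, and the toy model below are all illustrative assumptions standing in for proper techniques like trained ensembles or inference-time dropout.

```python
import random
import statistics

# Hypothetical sketch, NOT Capsa's real API: a wrapper that adds an
# uncertainty estimate to any existing callable model without changing it.

class UncertaintyWrapper:
    """Wraps any callable model; estimates uncertainty with a tiny ensemble."""

    def __init__(self, model, n_members=5, noise=0.1, seed=0):
        self.model = model
        rng = random.Random(seed)
        # Each "ensemble member" perturbs the input slightly -- a stand-in
        # for training independent models or sampling dropout masks.
        self.offsets = [rng.gauss(0, noise) for _ in range(n_members)]

    def __call__(self, x):
        preds = [self.model(x + d) for d in self.offsets]
        mean = statistics.mean(preds)       # the prediction
        std = statistics.stdev(preds)       # spread across members = uncertainty
        return mean, std

# Usage: wrapping an existing model takes one line.
model = lambda x: x * 2.0                   # placeholder for any trained model
wrapped = UncertaintyWrapper(model)
pred, uncertainty = wrapped(3.0)
```

The point of the design is the talk's key claim: the original model is untouched, and the caller gets a second output, an uncertainty score, alongside each prediction.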
Again, currently, when you use these generative AI tools, you can only see the generated output. That's helpful in many cases. It could be helpful for your sons and daughters on a primary school assignment. But when you're trying to use it in industry, that's not enough. That's where we bring additional value: we actually show you where those images could be going wrong.
Prompt engineering is also a key part of the generative AI journey. We can use our detection algorithm to identify the specific permutations of a prompt that will generate the highest-quality outputs you desire. We have also worked with large language models to show that our technology can make them safer.
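The prompt-permutation idea above can be sketched as a simple search: enumerate orderings of prompt fragments, score each with an uncertainty estimator, and keep the one the model is most confident about. The scoring function here is a deliberately toy stand-in (it just prefers prompts that lead with the style keyword), not Themis AI's actual detection algorithm.

```python
from itertools import permutations

def uncertainty_score(prompt: str) -> float:
    # Toy proxy, an assumption for illustration: pretend the model is most
    # confident when the style keyword leads the prompt. A real system would
    # query an uncertainty-aware wrapped model here instead.
    return prompt.index("photorealistic")

def best_prompt(fragments):
    # Enumerate orderings of the fragments and pick the one with the
    # lowest estimated uncertainty (lower = higher expected quality).
    candidates = [" ".join(p) for p in permutations(fragments)]
    return min(candidates, key=uncertainty_score)

choice = best_prompt(["a red car", "photorealistic", "studio lighting"])
```

With the toy scorer, the search settles on an ordering that puts "photorealistic" first; swapping in a real uncertainty estimate turns the same loop into uncertainty-guided prompt selection.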
In a nutshell, Capsa allows you to actively curate your data, enables risk-aware learning, and provides a real-time guard against the hallucination problems that the next generation of AI models will bring to society. We have worked with companies in many different industries. One example is a major robotics company in Boston. I know there are a lot of robotics use cases and companies in the crowd today. We've partnered with them to help make their perception systems safer as well.
Please find us at the booth. We would love to partner with you if you want to deploy your AI in a more risk-aware way and be ready for the regulations coming to this space. Thank you very much.