
2023-Management-Modzy

-
Interactive transcript
KIRSTEN LLOYD: All right, good afternoon. My name is Kirsten Lloyd. I'm one of the co-founders of Modzy, and one of my co-founders, Seth Clark, is an MIT alumnus. So very excited to be here with you today. All right. So great ordering of these presentations. We've already spoken a little bit about the edge today, and obviously, there's been a lot of discussion of artificial intelligence. And we believe at Modzy that the edge is the future of AI. By 2023, we expect that there will be 43 billion connected devices in the world. And the previous presenter mentioned that 75% of enterprise data will be created at the edge by 2025. However, we know that 90% of that data today goes unprocessed, because there is no way to actually get those analytical insights running at the edge.
We believe this problem is worth solving. Some of the challenges in actually getting AI running at the edge stem from deployment complexity, and also from issues with the models themselves today. On the deployment side, it's very difficult to get a model that's trained on a data scientist's laptop actually running in production at scale. Most of these deployments can take up to nine months, and that is just too long to get the insight needed to make a better business decision. Some of this has to do with the hardware, networking, and resources involved in these different deployments, whether you're trying to run that model in the cloud, on premises, or at the edge.
Similarly, the models being trained today-- we've talked a lot about ChatGPT and large language models-- are not optimized to run at the edge on low-power, resource-constrained hardware. And Modzy solves these problems. Modzy is an MLOps platform that helps organizations deploy, connect, run, and monitor machine learning models anywhere: in the cloud, on premises, or at the edge. We provide tools for data scientists to easily containerize and deploy their models into production, then turn those models into API endpoints that can be integrated and called anywhere. So I have a lot of different logos up on this slide, but basically, we work with your organization's existing tech stack to make it easy to make your AI work anywhere, for any kind of use case, across any kind of industry.
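The "models as API endpoints" idea described above can be sketched minimally. This is not Modzy's actual API; the handler, toy scoring function, and port are all hypothetical, standing in for a real containerized model server. Once a model sits behind an HTTP endpoint like this, any application can call it with a JSON request, regardless of where the container runs.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(features):
    # Stand-in for a trained model: a fixed linear scoring function.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON body like {"features": [1.0, 2.0]} and return a prediction.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": model_predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=8080):
    # In practice this server would be the entrypoint of the model's container.
    HTTPServer(("127.0.0.1", port), InferenceHandler).serve_forever()
```

A client would then POST feature values to the endpoint and get a prediction back as JSON, which is what makes the same model callable from cloud, on-premises, or edge applications alike.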
And we've seen a lot of success with this: 15 times faster AI model deployment for data science teams, 20 times faster solution development as IT teams build these solutions, and up to 80% cloud cost savings for organizations trying to manage the costs of running AI in production. And how we do this is, we really believe that if you turn the problem on its head, you get a more efficient solution. If you solve for the edge first, then figure out ways your machine learning or artificial intelligence models can run anywhere at the edge, running them in the cloud or on premises is just one more deployment paradigm.
So we have a number of different customers today, but one that I would like to highlight is in the telecommunications space here in the US. We're working with a large telecommunications provider to underpin their 5G transformation efforts. Think about the smartphones we have today. If you're driving in your car using a GPS app, you need a low-latency response. The model serving your cell phone cannot go back to a cloud data center, get a result, and send it back to you-- you'll miss your turn. So we're helping this telecommunications provider run machine learning models directly at their 5G towers and MEC centers.
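The GPS example above is really a latency-budget argument, which can be sketched with back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions, not measurements: a guidance deadline, a round trip to a distant cloud region versus a nearby 5G/MEC site, and a fixed inference time.

```python
def total_latency_ms(network_rtt_ms, inference_ms):
    """Time from sending sensor data to receiving a prediction back."""
    return network_rtt_ms + inference_ms

DEADLINE_MS = 100   # assumed budget to update turn-by-turn guidance in time
CLOUD_RTT_MS = 120  # assumed round trip to a distant cloud data center
EDGE_RTT_MS = 10    # assumed round trip to a nearby 5G tower / MEC site
INFERENCE_MS = 20   # assumed model inference time (same model either way)

cloud = total_latency_ms(CLOUD_RTT_MS, INFERENCE_MS)  # 140 ms: misses the deadline
edge = total_latency_ms(EDGE_RTT_MS, INFERENCE_MS)    # 30 ms: meets it
```

Under these assumptions, the network round trip alone already exceeds the deadline in the cloud case, so no amount of model optimization helps; moving inference to the edge is what brings the total inside the budget.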
So what we're helping them do is create and offer new machine-learning-enabled business solutions to their customers, whether that's IoT management, fleet tracking, cybersecurity, or more. We're also working with a large industrial vehicle manufacturer here in the US on a lot of different machine learning use cases, some related to supply chain management, others to predictive air quality monitoring on their factory floor: putting a sensor next to a chemical vat so a fan can be turned on when hazardous conditions could threaten workers.
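The chemical-vat example boils down to a sensor-to-actuator decision loop running at the edge. A minimal sketch, with an assumed hazard threshold and a simple rule standing in for whatever predictive model the manufacturer actually uses:

```python
HAZARD_PPM = 50.0  # assumed hazardous-concentration threshold (illustrative)

def fan_should_run(readings_ppm, threshold=HAZARD_PPM):
    """Decide whether to switch on the ventilation fan.

    Runs on the edge device next to the sensor, so the fan can react
    without a round trip to a data center. Any recent reading above
    the threshold triggers ventilation.
    """
    return any(r > threshold for r in readings_ppm)
```

A predictive version would replace the threshold rule with a model that anticipates the concentration trend, but the deployment shape is the same: the decision runs next to the sensor and drives the fan directly.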
And so my partnership ask for you today: we work with organizations that have gotten to the point where they have a number of machine learning models developed. Their data science teams are hard at work training these models, and they're looking to productionize them in a more efficient, scalable, and secure way. We work with the industries mentioned, but as I said, it's a horizontal platform. And we're really hoping to meet some companies here today that are looking to streamline their AI integration efforts. So please come see me at the booth-- I have a demo running. And thank you again for the opportunity today.