
10.12-13.22-DigitalTech-OmniML

Interactive transcript
LUCAS LIEBENWEIN: Hi, everyone. Thank you for the kind introduction. I'm Lucas, and I want to tell you a little bit about our vision at OmniML: how we want to make machine learning available everywhere and across all devices by adapting neural networks and machine learning technologies for the hardware that we see out in the world.
A little bit about ourselves. We were co-founded about a year ago by one of the professors here at MIT. I myself did my PhD in efficient machine learning, also here at MIT. So really, OmniML is the culmination of a decade of research into understanding how machine learning and deep learning work, what the driving factors are for making them more effective, more efficient, and more useful across different applications and hardware devices.
Fast forward a year to now: we are about 20 to 25 people across three offices in the US. It's been a really fun and exciting journey, seeing lots of different exciting problems and use cases and working with interesting customers. So I want to tell you a little bit more about what we do.
So the vision and the pain point that we started from is this: nowadays, if you develop machine learning models, you mostly develop and train them in a cloud environment where the underlying hardware is homogeneous and ubiquitously available.
So you can simply scale up your cloud environment and get more compute resources. And that's actually quite a big contrast to how many people think about deploying machine learning and putting it into production, where you have all different kinds of requirements, different hardware, and different constraints.
And so this gap between training and developing your machine learning model and actually putting it into production creates a long time to market. It makes it really hard to maintain the performance that you see in your development environment. And it also means very high development costs, high R&D costs, and potentially low return on investment, all things that really make it hard for you as a business to venture into AI and make some of these grand visions of AI that we see every day a reality.
So what we do at OmniML: we just released our first product, Omnimizer, which is essentially a platform that helps existing machine learning teams become more productive and makes it much easier and more streamlined for them to deploy machine learning models into production. The way we think about this problem is essentially in two stages right now.
And what we want to do is really close this gap between your chip, your hardware environment, your production, and where you see yourself as a machine learning engineer or scientist working on an actual machine learning application. So the first component in our product vision is our engine.
With our engine, we want to essentially provide a streamlined, completely standardized way for you to interact with different hardware devices directly from your cloud-native environment, where you develop and train your models. That makes it a lot easier to interact with your model, get feedback, and understand what you're working on.
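To make that workflow concrete, here is a minimal sketch of the general idea of hardware-in-the-loop profiling: measuring the same workload against different target devices from within the development environment. The `DeviceRunner` class, its API, and the device names are illustrative assumptions for this sketch, not OmniML's actual interface.

```python
import time

class DeviceRunner:
    """Stand-in for a target device: runs a workload and reports latency."""

    def __init__(self, name, overhead_s=0.0):
        self.name = name
        self.overhead_s = overhead_s  # simulated fixed per-call cost of this device

    def profile(self, workload, n_runs=5):
        # Time several runs and report the average latency in milliseconds.
        start = time.perf_counter()
        for _ in range(n_runs):
            workload()
            time.sleep(self.overhead_s)
        elapsed = time.perf_counter() - start
        return 1000.0 * elapsed / n_runs


def toy_workload():
    # Placeholder for a model's forward pass.
    return sum(i * i for i in range(10_000))


# Compare the same workload across two (simulated) target devices.
for runner in (DeviceRunner("edge-npu"), DeviceRunner("mobile-cpu", overhead_s=0.001)):
    print(f"{runner.name}: {runner.profile(toy_workload):.2f} ms/run")
```

In a real system, `profile` would dispatch the workload to actual hardware over the network rather than timing it locally; the point is that the developer gets per-device latency feedback without leaving the training environment.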
The second part is our Omnimizer core, which is a set of model optimization, adaptation, design, and training tools that integrate into your existing training environment. And that is really where a lot of the research we have been working on over the last decade comes into play: trying to understand what makes a machine learning model tick, how we can adapt the model, and how we can design more effective models.
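As one illustration of this family of model optimization techniques, here is a minimal sketch of magnitude pruning, a standard approach for shrinking a trained model by zeroing out its smallest weights. This is a generic textbook technique shown on a plain Python list, not OmniML's actual method or code.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that a `sparsity`
    fraction of the weights become zero (simple magnitude pruning)."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # The k-th smallest absolute value becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2, -0.03, 0.6]
print(magnitude_prune(w, sparsity=0.5))
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]
```

Sparser weight tensors can translate into fewer operations at inference time on hardware that exploits sparsity, which is one of the levers for getting more performance out of a fixed chip.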
So using Omnimizer, you end up with machine learning applications that make it to production faster and are easier to deploy, while you still maintain full control over your AI IP. And at the end of the day, you get more inference performance from your existing hardware. I talked a little bit about that already, but really what makes Omnimizer different from any other kind of machine learning tool is that it gives you the ability not only to develop and train the model but also to directly test the model in your production environment and get feedback.
Ultimately, that leads to much better models and, in some cases, to applications that previously would not have been possible in the first place. We see a lot of people who either have very stringent constraints in terms of accuracy and performance, or stringent constraints in terms of their product budget: what they can actually do and what kind of chip they can afford to put into their device.
So all of that is not just a vision; we have already worked extensively with different customers across different industries. Here is an example where we worked with a robotics customer on computer vision. We actually achieved up to 8 to 9 times speedup on their existing hardware for their machine learning workloads without any drop in accuracy.
In that particular instance, I'm showing you an example of image segmentation, trying to understand different objects in the environment. Another early design partner and customer of ours is Wyze. They're actually a producer of smart home equipment, among them smart home cameras for monitoring.
So previously, they would run all of their machine learning workload in the cloud. After using Omnimizer, they were actually able to bring all of their machine learning workload onto the camera itself, for increased robustness, better security, and overall cost savings, while still running things like person detection and any other kind of event detection you would want for monitoring your house.
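The bandwidth and cost argument behind moving inference onto the camera can be sketched in a few lines: instead of streaming every frame to the cloud, the device runs detection locally and uploads only the frames that contain an event. The function names and the dictionary-based "frames" are illustrative stand-ins, not Wyze's or OmniML's actual pipeline.

```python
def detect_person(frame):
    # Stand-in for an on-device detector; here a frame is "positive"
    # simply when its pre-assigned label says so.
    return frame.get("label") == "person"

def process_stream(frames):
    # Run detection locally and return only the IDs of frames worth
    # uploading, instead of shipping the whole stream to the cloud.
    return [f["id"] for f in frames if detect_person(f)]

frames = [
    {"id": 0, "label": "empty"},
    {"id": 1, "label": "person"},
    {"id": 2, "label": "empty"},
    {"id": 3, "label": "person"},
]
print(process_stream(frames))  # → [1, 3]
```

Only 2 of the 4 frames leave the device in this toy example; on a real camera the same filtering pattern is what turns a constant video upload into occasional event notifications.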
All right. And finally, I want to quickly mention that Qualcomm is also an important design partner. If you think about it from the perspective of different hardware vendors right now, we have an environment that is largely dominated by NVIDIA, and that makes it really hard for other vendors to get into this space and offer machine learning chips.
And so we actually also help both hardware vendors and customers who are interested in diversifying their hardware platforms, using different hardware while still running the same standardized kind of machine learning application they're interested in. All right. If you want to learn more, please come by and stop at our booth.
We are interested in expanding our customer base and in hearing about you and what kind of machine learning applications you have. So thank you very much.
[APPLAUSE]
SPEAKER: Thank you, Lucas.