Humatics

Startup Exchange Video | Duration: 18:43
July 16, 2018

    DAVID MINDELL: My name is David Mindell. I'm the co-founder and CEO of Humatics, and I'm a professor at MIT, dual-appointed in aerospace engineering and in the history of science and technology.

    Our new company is called Humatics. We founded it in 2015. It's the first four letters of human and the last four letters of robotics. And our mission is to revolutionize the way that we locate, navigate, and collaborate between people and machines.

    Humatics technology is based on work I did in the 1990s using sonars undersea to do very high-precision navigation of robots in the very deep ocean. And for years, I wanted to translate that technology into the terrestrial world, into manufacturing and other above-the-surface applications. The RF technologies were moving very quickly for cell phones and whatnot, but they really weren't ready for the kind of thing we're doing. Only in the last five years or so, partly driven by automotive radars and falling component costs, has it become possible.

    When I started the company, I really thought we were riding the Moore's law for radar, which was going to bring radar down in cost and make small, short-range radar something we could really experiment with and bring to bear on the world of robotics and autonomous systems. Relatively quickly, thanks to my CTO, Greg Charvat, we're now driving the Moore's law for radar. We will soon make the very smallest fully integrated radars, doing all kinds of amazing things on chips.

    And our system is really not a radar in the traditional sense. It doesn't do blobology on reflected echoes; it's a transponder-to-transponder kind of system. But it uses a lot of the same core technologies. Boston is a great place to start a company like this, because we draw on the expertise at Lincoln Labs and Raytheon, and there's a lot of great microwave engineering expertise in the area.

    And as a company, what we're really doing is bringing together this fairly exotic, difficult, and until now expensive world of radio frequency and radar engineering with the world of robotics and navigation-- all the stuff that's a little more familiar to people and that's exploding out of MIT and elsewhere. And there's really no other company bringing those two strands together in this way.

    [MUSIC PLAYING]

    DAVID MINDELL: If you think about it, GPS is a good example. GPS is kind of a miracle technology, I think. It covers the whole globe and gives you XYZ positioning anywhere on the face of the Earth.

    But the uncertainty circle, that big, blue circle you see on your phone when you pull out a map, can be 3, 6, 9 meters. And that's good enough to get you down the highway or to get an airplane to a new city. But almost all human work, and almost all robotic work, happens within a much smaller circle than 9 meters-- your average room, your average factory cell. Plus, GPS doesn't work indoors, really.

    And so what we're doing is bringing that same idea of navigation, but at the millimeter scale, indoors or outdoors, and totally independent of lighting conditions. But again, GPS is a good example because it was originally designed for military positioning and various weapons applications, and then ends up opening this huge, explosive world of novel applications that nobody really thought of when it was first created.

    We think of our micro-location system as a similar kind of platform technology where we're addressing some use cases now, very real problems in industry that we can solve. But there's also a whole world in our future of lots of different applications, some of which we've thought of and patented. Others, the users are going to think of and the market is going to think of.

    I don't think our system will ever be used solely in isolation. All robotic systems today use mixes of sensors, each with its strengths and weaknesses. But right now, there's no solution for millimeter-scale, precise positioning in the harsh environment of a manufacturing world, where there are strange lighting conditions and lots of different types of stuff around. Many of our customers have tried lots of different alternatives, and they've come to us because we have a solution to that problem.

    The Humatics microlocation system can pinpoint a transponder-- or actually, many transponders-- in space at the millimeter scale, often to one millimeter or better. We don't do any blobology; that is, we don't look at a big dot cloud of echoes and try to apply all kinds of algorithms to get 80% certainty about what we're seeing. We see something, and we pinpoint it. We know exactly what that thing is-- it has a unique digital tag-- and we know exactly where it is.

    And so it eliminates the blobology, which is difficult to bring up to a very high standard of reliability and certainty, and gives you very precise positioning-- which requires that you know very precisely the position of what. If I tell you my hand is here to a millimeter, it's fair to ask what I really mean by my hand. What we do is say: there's a particular point on your hand, and we know exactly where that point is.
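
    A minimal sketch of the idea, assuming nothing about Humatics' actual hardware or API: with a handful of fixed anchors at surveyed positions, range measurements to a uniquely tagged transponder can be turned into a position by least-squares multilateration. The anchor layout, tag name, and Gauss-Newton solver below are all invented for illustration.

```python
import numpy as np

# Hypothetical illustration: pinpointing a uniquely tagged transponder by
# multilateration from fixed anchors at surveyed positions. All names and
# numbers are invented for this sketch, not Humatics' actual system.

ANCHORS = np.array([            # assumed anchor positions, metres
    [0.0, 0.0, 0.5],            # anchors must not be coplanar, or the
    [8.0, 0.0, 2.5],            # height component becomes ambiguous
    [8.0, 6.0, 0.5],
    [0.0, 6.0, 2.5],
])

def locate(tag_id: str, ranges: np.ndarray, iters: int = 20) -> np.ndarray:
    """Estimate a transponder's XYZ from anchor-to-tag ranges.

    Every measurement already carries the tag's unique ID, so there is
    no echo-classification ("blobology") step: identity is known.
    """
    x = ANCHORS.mean(axis=0)              # initial guess: anchor centroid
    for _ in range(iters):                # Gauss-Newton refinement
        diffs = x - ANCHORS
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - ranges        # predicted minus measured range
        J = diffs / dists[:, None]        # Jacobian of range w.r.t. position
        dx, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        x -= dx
    return x

# Usage: simulated noise-free ranges to the hypothetical tag "wrist-042".
true = np.array([3.0, 2.0, 1.2])
ranges = np.linalg.norm(ANCHORS - true, axis=1)
print("wrist-042 at", locate("wrist-042", ranges))  # recovers ~[3.0, 2.0, 1.2]
```

    With real measurements, accuracy depends on range noise and anchor geometry; the point of the sketch is that identity comes from the tag, not from classifying echoes.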

    Our system basically navigates the relationships between people, places, and things-- sometimes I think of that as people, robots, and machines in an industrial setting. So there might be a small wristband on a person; we're building the transponders into the safety equipment people already wear. You might put one on a robot, or on a workpiece or a mobile cart. And we engineer very high-precision relationships between these things.

    And that is enormously valuable because it improves the flexibility of all different kinds of production systems. It makes them more efficient to change and less prone to downtime. The whole Industry 4.0 push is really about reducing the fixed-capital, 10-year footprint of robotics and other systems-- making them more flexible, more collaborative, less walled off from human work. Fundamentally, Humatics is focused on bringing robots and other autonomous systems to work in human environments.
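
    To make "engineering relationships" concrete, here is a small hedged sketch: given the latest fixes for a worker's wristband tag and a mobile robot's tag, derive a speed cap for the robot from their separation. The tag names and zone thresholds are invented placeholders, not real product behavior.

```python
import numpy as np

# Hypothetical sketch: a safety relationship derived from microlocation
# fixes. Tag IDs, coordinates, and thresholds are invented for illustration.

fixes = {                        # latest XYZ fixes in metres, keyed by tag ID
    "wristband-07": np.array([2.10, 3.95, 1.10]),
    "robot-03":     np.array([2.60, 4.20, 0.40]),
    "cart-12":      np.array([7.80, 1.00, 0.45]),
}

SLOW_ZONE_M = 1.5                # assumed: robot slows inside this radius
STOP_ZONE_M = 0.5                # assumed: robot stops inside this radius

def robot_speed_cap(robot: str, person: str) -> float:
    """Return a speed cap (m/s) based on robot-person separation."""
    d = np.linalg.norm(fixes[robot] - fixes[person])
    if d < STOP_ZONE_M:
        return 0.0               # person too close: stop
    if d < SLOW_ZONE_M:
        return 0.25              # creep speed while a person is nearby
    return 1.5                   # nominal speed

print(robot_speed_cap("robot-03", "wristband-07"))  # ~0.9 m apart -> 0.25
```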

    [MUSIC PLAYING]

    DAVID MINDELL: We're feeling a tremendous amount of pull from the manufacturing industries for Humatics solutions. Companies are under very high pressure to improve productivity and increase automation. At the same time, they don't want to lay off their workers; they want to retrain their workers and keep people involved.

    And I think the robotics industry is transitioning from a world where innovation was led by the individual robots themselves-- there are companies all around here that make wonderful robots-- to one where the action is in the applications, the specific use cases, and the environments that allow those robots to work within human settings. Go to a modern factory today and the robots are still basically clockwork mechanisms. They do very repetitive tasks very precisely and very repeatably, but with a great deal of capital investment and not in a very flexible way.

    I sometimes describe Humatics as a robotics company that's never going to build a robot. In fact, we just announced four new senior hires, all from area robotics companies, where they've seen that the robotic platforms themselves-- the physical, mechanical hardware-- are largely becoming commoditized. We can buy hundreds of wonderful, really exciting, interesting robots. We hope to work with all those companies, because we're building the navigational envelopes that allow those robots to work with precision, safety, and collaboration in these very human environments.

    Our system provides remarkably precise tracks of all different kinds of motion and spatial positioning. But the XYZ only gets you part of the way there. What's really exciting is looking at the motion paths of people, machines, or parts moving through the factory-- there's tremendously rich information in those motion paths.

    Our Spatial Intelligence Platform can then gather those paths and bring what happens day to day into a larger database. All the tools of machine learning and analytics can then be brought to bear: how is this thing moving today? How does that compare with how it moved last week?

    Or how are you doing with your work compared to your own best performance? How can you give workers feedback that enables their own continuous improvement, whether in a pick-and-pack warehouse setting or really anything where there's human motion?

    You can extend that into training. How do you identify who the best movers are, in almost any application, and then help other people learn on their own and rise to whatever level they choose? Thinking about all that data coming into a larger platform involves visualization, analytics, and machine learning. There's really a whole world opening up there.
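
    As a sketch of the path analytics described here-- with invented metrics and synthetic data, not the actual platform-- one could reduce each tracked XYZ path to summary statistics and compare a run against a personal best:

```python
import numpy as np

# Hypothetical sketch of motion-path analytics: compare today's track
# against a reference run. Metrics and data are invented placeholders.

def path_metrics(track: np.ndarray, dt: float) -> dict:
    """Summary statistics for one XYZ track sampled every dt seconds."""
    steps = np.diff(track, axis=0)            # per-sample displacement
    dists = np.linalg.norm(steps, axis=1)
    return {
        "path_length_m": float(dists.sum()),
        "mean_speed_mps": float(dists.mean() / dt),
        "duration_s": (len(track) - 1) * dt,
    }

def compare(today: np.ndarray, best: np.ndarray, dt: float = 0.1) -> dict:
    """Per-metric ratio of today's run to the personal-best run."""
    a, b = path_metrics(today, dt), path_metrics(best, dt)
    return {k: a[k] / b[k] for k in a}

# Usage with synthetic straight-line tracks: today's path is 10% longer.
t = np.linspace(0.0, 1.0, 50)
best = np.stack([t * 5.0, np.zeros_like(t), np.ones_like(t)], axis=1)
today = np.stack([t * 5.5, np.zeros_like(t), np.ones_like(t)], axis=1)
print(compare(today, best))                   # path_length ratio ~1.1
```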

    [MUSIC PLAYING]

    DAVID MINDELL: Our company actually has very close connections to Detroit. Our lead investor, Fontinalis Partners, is from Detroit and is chaired by Bill Ford of the Ford Motor Company. And obviously, there's a tremendous amount of excitement about driverless cars, even though no one's quite sure how that story is going to play out yet. Whatever happens with driverless cars, they will need to live in new kinds of relationships with people and the city. Our technology enables them to have relative position information with far higher reliability and accuracy than is possible right now with traditional blobology-based sensor systems.

    All of these advanced robotic systems are fusions of many types of sensors, and I don't think our system will ever be the only sensor on a car. But rather than saying, oh, I see a blob there, let's run it through an algorithm and try to figure out what it is, it enables an automobile, or a person, or a piece of infrastructure to say, oh, I see your point there.

    I think about it sometimes like the flashing LED on a bike, but with very precise information about exactly where it is, how fast it's going, and what it is. Bicycles are very hard to recognize with cameras: you have daylight, nighttime, snow, rain, sleet, covered-up sensors, all of that. Our system is immune to all those weather conditions.
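
    The "flashing LED, but with data" idea can be sketched as a self-describing report a tagged road user broadcasts, in place of a camera blob that must be classified. The field names here are invented for illustration, not any real protocol:

```python
from dataclasses import dataclass

# Hypothetical sketch: a tagged bicycle announces itself to a vehicle,
# so no blob classification is needed. Field names are invented.

@dataclass
class TransponderReport:
    tag_id: str                  # unique digital identity ("what it is")
    kind: str                    # declared class, e.g. "bicycle"
    position_m: tuple            # relative XYZ, millimetre-scale
    speed_mps: float             # how fast it is going

report = TransponderReport("bike-5512", "bicycle", (12.402, -1.118, 0.0), 4.6)
if report.kind == "bicycle":
    print(f"Yielding to {report.tag_id} at {report.position_m}")
```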

    One of the things Humatics is accomplishing as a company is bringing in lessons learned from robotics and automation in extreme environments-- the deep ocean, outer space, aviation, some military environments. These are all places where people have been using robots and autonomy, in many cases for 40 or 50 years. A lot of lessons have been learned-- I've learned some of them, and many others have learned others-- about how these things work, the best way to get them deployed in the field, what their reliability is, and how you build trust.

    And the two founding scientific advisors of Humatics are two of my mentors from those realms. One is Robert Ballard, who found the Titanic and with whom I worked at Woods Hole for many years. The other is Dave Scott, an MIT graduate who worked on the Apollo guidance computer for his graduate thesis and then actually flew it, landing on the moon as commander of Apollo 15. They're both big thinkers about how navigation and exploration can contribute to the world we live in today.

    We've hired people from the underwater navigation world-- colleagues and others of my mentors are involved in the company. In some ways, the factory of today, from a navigational standpoint, looks a lot like the deep ocean: there's no GPS, there's a lot of noise, you have a lot of uncertainty in the environment, and you have a small number of fixed beacons you can work from. We're literally using algorithms and techniques from these other realms.
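
    One such shared technique, sketched in one dimension with invented noise values: dead reckoning between fixes, corrected whenever a fixed beacon is heard, much as undersea vehicles do with acoustic beacons. Real systems work in 3-D with full covariance; this is only the shape of the idea.

```python
# Hypothetical 1-D sketch of beacon-corrected dead reckoning, a technique
# shared with undersea navigation. All noise values are invented.

def navigate(velocities, beacon_fixes, dt=1.0, q=0.05, r=0.02):
    """Fuse dead-reckoned velocity with sparse beacon position fixes.

    velocities   : measured speed at each step (m/s)
    beacon_fixes : dict mapping step index -> beacon position fix (m)
    q, r         : assumed process / measurement noise variances
    """
    x, p = 0.0, 1.0                   # position estimate and its variance
    track = []
    for k, v in enumerate(velocities):
        x += v * dt                   # predict: dead reckoning
        p += q                        # uncertainty grows between fixes
        if k in beacon_fixes:         # update: a fixed beacon ranged us
            gain = p / (p + r)
            x += gain * (beacon_fixes[k] - x)
            p *= 1.0 - gain           # uncertainty collapses at the fix
        track.append(x)
    return track

# Usage: a drifting speed log, corrected by beacon fixes at steps 5 and 10.
vel = [1.02, 0.97, 1.05, 1.01, 0.96, 1.04, 0.99, 1.03, 0.98, 1.00, 1.02]
print(navigate(vel, beacon_fixes={5: 6.0, 10: 11.0}))
```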

    My 2015 book, Our Robots, Ourselves, really addressed this question of what we have learned. And one of the lessons is that for autonomous systems to be useful and have an impact on the human world-- and of course the economic world is the human world-- they need, almost by definition, to situate themselves within human environments. The idea of a robot being off by itself in the desert doesn't hold up-- even when we put them on Mars, they're still very tightly tied to human environments, human controls, and the human assumptions that go into their design. And the more we understand and see those human inputs, the more effective we can be.

    The story is full of robots that were designed to be fully autonomous. The Predator, the famous drone used in Iraq and Afghanistan, was designed to be fully autonomous, with no operator input at all. When it actually got deployed into the field, it had a big impact on the world, but it ended up taking 150 to 175 people to operate it. There's nothing unmanned about it; it's a remotely manned system.

    And many other systems bring new levels of decision making. It's not that there's nothing exciting going on in the world of AI and autonomy. But when we actually yoke those systems to providing human value in various settings, we need to think about how they're embedded in these human contexts. You see that story play out over and over again. You can do it after the fact, when it tends to be expensive and very cumbersome-- or, as I'm arguing, we can build those relationships into the core autonomy of these systems as we deploy them into the field.

    And I think those ideas are getting out there. Some driverless-car companies will say, our car is purely autonomous, it operates with no input from anybody-- which isn't really true anyway. There's tons of input in a driverless car: thousands of assumptions that the programmers put in about what constitutes a pedestrian, how fast things are moving, and so on.

    But I think the more sophisticated auto companies are now saying, the car is your collaborator. The car is your friend. The car learns from your driving habits. The car works with the environment around you. The car draws on what's happening in the cloud.

    We don't have all the answers to how those technologies are going to work, but I think that's a more compelling story in a lot of ways. It doesn't mean the automation doesn't still help people in all kinds of ways and open up new vistas for how we use mobility, but it's a different way to think about it. If you look at aviation, that's how autopilots, autoland, all kinds of systems are used.

    [MUSIC PLAYING]

    DAVID MINDELL: People may be familiar with motion capture technology, which is used in Hollywood and in high-end physiology laboratories-- very cool technology for creating all kinds of neat animations and motion studies. But it basically requires a dedicated studio. Even the researchers who use it have to cover up the windows, cover up their watches, in many cases cover up their hair, and put on a suit with all these dots on it.

    One way to think about what Humatics is doing, and what the Spatial Intelligence Platform will enable, is something similar that is low cost and works in all lighting conditions, indoors or outdoors. We've talked to a lot of folks who use those motion capture systems now, and they're dying to get their hands on our stuff, because it can really open new vistas for how these things are used-- not just in the research laboratory, but out in the world for real applications.

    Our stuff is low cost. Sometimes I call it the democratization of motion capture. What happens when it's part of a video game? When it's in a clinical setting, in medicine? When it's in a lot of other places where those systems just can't work today?

    We're still at a point where autonomous systems are really not capable of living within human environments. The technology is not there yet. The laws aren't there yet. The regulations aren't there yet. The trust isn't there yet. But as those things evolve, there's going to be an ever greater need for tighter coordination, tighter collaboration. And if you don't know where stuff is, those things are all a whole lot harder and a whole lot less reliable. When you start to know exquisitely where things are around you, you can do an awful lot in that world.
