RightHand Robotics

-
Interactive transcript
LAEL ODHNER: My name is Lael Odhner, and I'm the co-founder and CTO of RightHand Robotics. We are a small but growing robotics company in Somerville, Massachusetts working on the problem of robotic grasping and manipulation.
I came to MIT back in 1999, and I stayed as long as I could. I loved it. I did a bachelor's, a master's, and a doctorate in robotics and mechanical engineering. And during that time, I developed a fascination with human hands.
My own work was somewhat peripheral to the problem but largely related to the problem of how you can produce muscles dense enough to articulate something like the human hand. So your hand has a lot of bones in it and a lot of independent degrees of freedom, I think maybe 32 or 33, something like that.
But when I graduated, I decided I wanted to really move more in the direction of attempting to replicate the human ability to manipulate. So I spent five years down at Yale working as soft money faculty on a DARPA program, and our job in that program was to try to make a low cost, highly dexterous robot hand for disposal of dangerous objects-- bomb squad type scenarios. And we were placed on relatively tight cost constraints.
So that focused the mind marvelously. We put together these hands that had been sort of designed from the ground up around a small but very important set of basic tasks as far as picking things up between the fingers or picking things up in the whole hand. And we were able to just ace the tests that DARPA had set up for evaluating this kind of technology.
But we realized pretty quickly, as we had finished that project, that the real applications were not really military in nature. E-commerce was growing and still is growing at double digit rates. So we put together a pitch at the MIT 100K Competition. I think it was in 2012. And we made it to the semifinals in products and services, which is pretty respectable. I think that's one of the bigger divisions.
But in the process, I think the most important thing about it was that we got the attention of a number of large retailers and large robotics companies, including Kiva systems. The CEO, Mick Mountz, called us up afterwards and offered us a lot of really great advice.
And that pretty much convinced us, just seeing the amount of energy it generated just to throw the idea out there, that we really needed to refocus specifically on this and start a company dedicated to trying to grasp and manipulate everyday objects in the context of e-commerce order fulfillment.
-
[MUSIC PLAYING]
LAEL ODHNER: The problem of picking is really, at its heart, a geography problem. You don't see a lot of e-commerce fulfillment centers or grocery distribution centers located in major cities. And that's because the modern fulfillment center is, say, a million square feet. They're very large buildings, and the land cost of putting them in a city would be prohibitive. In fact, in order to save money on transport costs, they're often out along interstate highways, or at the intersection of interstate highways, in rural areas.
And the problem this poses is that there is no labor available out there. They tend to be located in small towns. And after a few warehouses go up, pretty much everybody who wants a job working in a fulfillment center already has a job working in a fulfillment center. So it becomes very difficult, come, say, the holiday rush, for them to staff up and find additional people who want to work shipping people's orders.
This, combined with growth in the industry, is putting tremendous pressure on wages. Every year, you see news that Amazon or some other large retailer is upping their wages across the board. I think last year, Amazon declared that $15 was their minimum wage. And nobody's doing this out of the goodness of their heart; they're doing it because supply and demand is really, really a difficult problem for them.
So robotic picking hopefully offers a solution that will make everybody happy. I have never heard a single retailer say that they want to reduce the number of employees in their fulfillment centers. That's never their problem.
Right now, and I think for the foreseeable future, e-commerce looks like it's going to grow. They will always need more people. And part of their problem is trying to concentrate those people in jobs that are more fulfilling, or that, frankly, people will want to stay in for multiple years.
So when we look to automate a process within someone's fulfillment center, we're primarily looking at jobs that are on the bottom of the list, not things where people are doing a great deal of, say, quality control. If you can pick something up and scan it in front of a scanner, so can a robot. And it will actually be a lot more ergonomic in time, I think, for the robots to do it, because once you automate more, you can actually streamline the process.
There are something like 11 touches of a product by a person in the course of fulfilling an order. And one of the reasons why we have to have so many touches is that there's potential for human error, because humans have to perform every one of these steps. So the people are acting in a QA role as well as doing the work itself. The more of this that is streamlined into automated processes-- if you pick something up once, you verify its identity, and then it's automated all the way through-- that could potentially have huge efficiency implications.
-
LAEL ODHNER: Our system is a, well, we call it a grasping appliance. Basically, we designed a machine vision system that images a container full of arriving goods. We integrate in a robot arm. And we make our own highly customized end effectors. These end effectors really encompass a great deal of our technology. They use both suction and fingers in order to grasp and manipulate an object. And then there's the machine intelligence surrounding where, when you look at an object, you think about trying to put a finger in order to pick it up or manipulate it.
That's a major research challenge that we think has been tackled in our case largely through integration of software and hardware. In other words, because we understand intimately how the hardware will behave when it comes in contact with an object, we can design a motion planning routine that's going to cause the whole system to work successfully end to end.
Well, when a camera photographs an object, when you're trying to pick something up just from a first impression, you don't see the whole object. You might not even see one side of an object completely, especially if they're all stacked or jumbled inside a container. And for us, the interesting problem then is to try to understand what you're looking at. So when someone hands you a container, you take an image. And you need to figure out, for example, segmentation. How do I figure out which of the things I'm looking at are distinct items to be picked?
Now that can be relatively simple, say, in the case of cans of soup. With cans of soup, you'll see nice, circular tops when you look in on them in a box. Or you could have something like cardboard boxes that are stacked so closely that they appear to be one continuous surface. And the tricky part is getting a computer to handle a presentation like that, where I have a bunch of very similar objects packed together, so I don't know where one object ends and the next begins. Or where I have a bunch of amorphous objects, say, clothing-- if I have clothing stacked in something, it may be wrinkled. How do you tell a wrinkle from the edge between two separate pieces of clothing?
This is the kind of machine vision problem that we are working on in order to handle objects without models. I'm pretty sure that in the future, people will expect a robot to be able to pick anything, say, lighter than 5 pounds regardless of what the package looks like or how it's structured. That's the level of versatility that people are going to expect, so that's what we're shooting for.
[MUSIC PLAYING]
-
[MUSIC PLAYING]
LAEL ODHNER: We are looking for customers with the largest volume of retail goods we can find. For example, just today, a Japanese company called Paltac-- they're a very large pharmaceutical distributor-- just announced that our robots are going to be used to ship their products from a new distribution center that's currently being built.
They have extremely exacting standards for how the packages are to look. There can be absolutely no cosmetic damage. Everything has to be handled very gently. And we currently have robots out there right now picking and fulfilling real orders with them in the run-up to the major deployment.
We've seen a lot of traction in the US from large retailers, and also from third party logistics companies. In other words, these are companies to whom fulfillment is contracted if you're a small apparel or cosmetics company, or something like that-- or sometimes even a large one. Sometimes you outsource your fulfillment logistics.
And those companies used to operate on a strictly cost plus model. In other words, they were somewhat insensitive to the price of labor because they just passed it on in the form of their contracts. But with labor being as in demand as it is, they've actually become a very forward-looking segment of the market.
So third party logistics is a business where we have seen an increase in adoption on things like put wall sortation, where you're picking up items and putting them on shelves. Often, there's existing technology. People are familiar with systems that will electronically sort the items so you scan it, and it tells you where it needs to go.
So it's not a huge jump for people to go from using that kind of electronic assistance to fully automating the process. And we've seen a lot of excitement there.
The future of retail and supply chain logistics is going to revolve very strongly around how good people are at efficiently and effectively maximizing their labor pool. So a lot of people are guarding their R&D efforts very closely.
What we've seen, though, is that although they're reluctant to talk about it, they are very eager to invest in it. And they actually do a great job. Companies you might think of as fairly slow movers are actually very aggressive at deploying new technology into their facilities.
-
[MUSIC PLAYING]
LAEL ODHNER: Well, the decision to come back to Boston, and in particular to be near MIT, was a no-brainer. My co-founders and I have lived in this area for a while. They were at Harvard; I was an MIT student and graduate.
And when we went to go look for new employees, MIT is, of course, the first place we went.
So we tapped into our friends' networks. We have a lot of friends who are veterans of other MIT-related startups. Our first employee was also an early employee at Akamai. Our other early employees were at MetaCarta and Cygnus, companies that were in the area and that had a lot of MIT people in their networks. And that has actually been a tremendous benefit, especially in the early stage.
Partly because when nobody knows you, when you're just starting out and you just have a great idea, you rely very strongly on your friendships and on the credibility that having a good team provides. And MIT has really done a lot of that for us.
Well, just this past winter, we announced the closing of our B round. We raised $23 million from Menlo Ventures and from Google Ventures. And that, for us, was a major milestone. We now have the resources to really scale this technology up.
So we have just announced RightPick2, the second major version of our picking appliance or integrated picking system. And everything about our system now is new. Everything about our system is mass manufacturable. So we are able to scale up from dozens of systems to hundreds or thousands of systems. Everything can be ordered and assembled, and that, for us, is a major feat.
One of the things we've realized about the robotics industry in general is that it's one thing to do something once; it's impressive to do it 10 times. But really to do this at scale, we are going to be talking about fleets of hundreds or thousands of robots in order to really make an impact. So we are focusing. We're taking our investment and we're focusing it on turning this from a very interesting prototype into something that is going out everywhere by the thousand.
-
LAEL ODHNER: Hi, my name's Lael Odhner, and I'm the co-founder and CTO of RightHand Robotics. RightHand Robotics makes a product called RightPick, which is a fully integrated appliance for piece-picking operations-- in other words, the grasping and manipulation of individual consumer products for retail order fulfillment.
Our product is specifically designed to alleviate one of the biggest challenges that retailers face today, which is the non-availability of labor. It has become increasingly difficult to hire people to perform basic jobs inside a warehouse. And as e-commerce grows at double digit percentages, in some parts of the world, it's getting very hard to find the labor to support that growth on the back end.
Our product is designed to work in a number of common workflows where we've seen people have a lot of staffing problems. So, for example, we're working in put wall sortation, in sorter induction, in auto-bagger induction, and in ASRS pick tending. So general goods-to-person workflows can be turned into goods-to-robot workflows with our product. And this makes our system a key to reducing the overall reliance on manual labor in retail.