
10.5.23-Showcase-Tokyo-Manus Robotics

Startup Lightning Talk
FAYE WU: Hi. Good afternoon, everyone. I'm Faye. I'm the cofounder and CTO of Manus Robotics, as already mentioned. I studied mechanical engineering at MIT. My research was focused on the design and control of wearable robots.
In fact-- I don't know if anyone noticed-- the video that was playing over here at the very beginning, talking about the MIT mission, spirit, people, ecosystem, I was actually in it for about 2 seconds with my giant red robotic fingers. I was actually very surprised to see myself up there. But very proud to be part of the MIT image.
So Manus Robotics was actually cofounded by my thesis advisor, Professor Harry Asada, and his PhD students, including me, to develop wearable technologies that can greatly enhance human functionality and productivity. So today, I'm going to tell you a little bit about our latest project, a wearable sensor, not a robot this time, that can enable intuitive and effective gesture-based human-machine interaction. This project is currently supported by the National Science Foundation.
So according to a study from 2022, we spend on average over seven hours a day interacting with various devices around us. That's thousands, if not hundreds of thousands, of keystrokes, button presses, screen taps, waves at the camera, whatever it is that we do. We don't really notice all the work involved in controlling and interacting with these devices, because we're so used to it, until something goes wrong.
I'm sure many of us here have had some experience yelling at our Alexa or Siri for misinterpreting our commands or providing unwanted responses when we didn't even talk to it. And it can be extremely frustrating when a device we rely on cannot behave predictably because it has trouble understanding the user's intent.
And in extreme cases, when using a device is more trouble than it's worth, these frustrations can even lead to device abandonment, one example being assistive devices for people with disabilities. Many of these devices use electromyography, or EMG, detecting weak electrical signals coming from the muscles, to control their actions.
And because EMG is inherently noisy, easily corrupted, and, in general, takes a long time to learn to use well, it's not very surprising that almost 45% of these devices end up in cupboards collecting dust rather than really helping the people in need improve their quality of life.
The latest developments in human-machine interfaces have focused on voice- and vision-based solutions, primarily because of the rise of AI in recent years. But many practical problems remain to be solved. In particular, the privacy issues commonly associated with these solutions can be a huge hurdle to their general acceptance and market adoption. After all, nobody really loves to be monitored by someone 24/7, with the possibility of that company selling their private data to who knows where for even more exploitation.
So with our experience designing wearable robots and sensors at MIT, we developed Hemyo, a low-cost and very compact wristband sensor that detects hand gesture commands and helps make device interaction more accessible, more intuitive, and safer. With our simple API, the sensor can be integrated very easily into many different applications, from assistive devices to industrial IoT, smart home automation, and even AR/VR.
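To make that concrete, here is a minimal sketch of what a gesture-callback integration could look like. The talk doesn't show the actual Hemyo API, so every name below (GestureBand, on_gesture, dispatch) is an invented stand-in, not the real interface:

```python
"""Hypothetical sketch of a gesture-callback integration, assuming a wristband
client that streams classified gesture codes. All names are invented for
illustration; the real Hemyo API is not shown in the talk."""

from typing import Callable, Dict


class GestureBand:
    """Minimal stand-in for a wristband client that maps gesture codes to callbacks."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], None]] = {}

    def on_gesture(self, name: str):
        """Register a callback for a named gesture."""
        def register(fn: Callable[[], None]):
            self._handlers[name] = fn
            return fn
        return register

    def dispatch(self, gesture_code: str) -> None:
        """Called whenever the on-band classifier emits a gesture code."""
        handler = self._handlers.get(gesture_code)
        if handler:
            handler()


band = GestureBand()

@band.on_gesture("pinch")
def open_gripper() -> None:
    print("gripper: open")   # e.g. drive an assistive gripper

band.dispatch("pinch")       # simulate one classified gesture -> "gripper: open"
```

The callback pattern keeps application code declarative: each gesture maps to one handler, and the band client owns the streaming and classification loop.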
This innovative technology uses modified near-infrared spectroscopy to optically measure the local blood flow changes caused by muscle contractions in different layers of tissue around the arm. After we collect a set of blood flow patterns for a gesture during calibration, we use generative networks to create more patterns like them, curated and personalized to the specific user, and use all of this data to train our algorithm, which converts the blood flow patterns into unique gesture codes that are unaffected by natural variations in gesture performance.
And that's why we're able to achieve over 98% gesture detection accuracy with just a 30-second calibration process. So far, we can distinguish among up to 15 gestures.
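Schematically, the calibrate, augment, and train flow described above might look like the following. The talk says Hemyo uses generative networks for the augmentation step; the Gaussian sampler and nearest-centroid classifier here are deliberately simplified stand-ins, and the channel counts and sample sizes are assumptions:

```python
"""Schematic sketch of the calibrate -> augment -> train flow. The Gaussian
sampler and nearest-centroid classifier are simplified stand-ins for the
generative networks and gesture-code algorithm described in the talk."""

import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 8           # assumed number of optical channels on the band
REAL_PER_GESTURE = 10    # few real samples from the ~30 s calibration
SYNTH_PER_GESTURE = 200  # synthetic samples generated per gesture

def calibration_samples(n_gestures: int) -> dict:
    """Stand-in for the 30-second calibration recording (synthetic data)."""
    return {g: rng.normal(loc=g, scale=0.3, size=(REAL_PER_GESTURE, N_CHANNELS))
            for g in range(n_gestures)}

def augment(real: np.ndarray) -> np.ndarray:
    """Generate user-personalized synthetic patterns around the real ones.
    (A Gaussian fit stands in for the generative network mentioned in the talk.)"""
    mu, sigma = real.mean(axis=0), real.std(axis=0) + 1e-6
    return rng.normal(mu, sigma, size=(SYNTH_PER_GESTURE, N_CHANNELS))

def train_centroids(data: dict) -> np.ndarray:
    """Train a trivial classifier: one centroid ('gesture code') per gesture."""
    return np.stack([np.vstack([real, augment(real)]).mean(axis=0)
                     for real in data.values()])

def classify(centroids: np.ndarray, pattern: np.ndarray) -> int:
    """Map a new blood-flow pattern to the nearest gesture code."""
    return int(np.argmin(np.linalg.norm(centroids - pattern, axis=1)))

data = calibration_samples(n_gestures=15)   # up to 15 gestures, per the talk
centroids = train_centroids(data)
print(classify(centroids, data[3][0]))      # typically prints 3 on this synthetic data
```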
A unique advantage of this sensor is that it can be actively controlled: by tuning the light intensity, the pulse pattern, and even the receiver sensitivity, we can detect blood flow in deeper tissue layers, and even through clothes, and still get a signal of higher quality and granularity than EMG, without the extensive signal processing and user training that EMG requires.
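As an illustration of that active control, the tunable front end could be modeled as a small configuration object. The talk only names intensity, pulse pattern, and receiver sensitivity as the tunable knobs; the parameter names, units, and values below are hypothetical:

```python
"""Illustrative model of the actively controlled optical front end. Parameter
names, units, and values are assumptions; the talk only says intensity, pulse
pattern, and receiver sensitivity are tunable."""

from dataclasses import dataclass

@dataclass
class OpticalFrontEnd:
    led_intensity: float      # normalized emitter power, 0.0-1.0 (assumed range)
    pulse_pattern_hz: float   # modulation frequency of the emitted NIR light
    receiver_gain: float      # photodetector sensitivity multiplier

def tune_for_depth(through_clothing: bool) -> OpticalFrontEnd:
    """Trade emitter power and gain for penetration depth (hypothetical values)."""
    if through_clothing:
        # more light and higher gain to reach deeper tissue through fabric
        return OpticalFrontEnd(led_intensity=0.9, pulse_pattern_hz=50.0,
                               receiver_gain=4.0)
    return OpticalFrontEnd(led_intensity=0.4, pulse_pattern_hz=100.0,
                           receiver_gain=1.5)

print(tune_for_depth(through_clothing=True))
```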
On top of that, our algorithm can be trained 600 times faster and requires 1,000 times less memory than conventional machine learning solutions in this area, so we're able to run everything very efficiently on a small edge computing device. And best of all, there is no privacy concern when it comes to using Hemyo on a daily basis.
So we have run a few tests with impaired stroke survivors and spinal cord injury patients from eight different facilities in the greater Boston area. All participants have responded favorably to using a Hemyo-enabled wearable gripper to help them perform simple activities of daily living.
And with these very exciting results, we have started to work with top exoskeleton and prosthetics manufacturers from Europe and the US to develop a version of Hemyo that can be very easily integrated into their products. And at the same time, we're engaged in conversations with companies in the AR/VR and consumer electronics areas to explore other business opportunities for this sensor.
So as for our activities in Japan, we're currently evaluating sensors from two Japanese companies, including KM, who have representatives in the audience today, with the goal of working together with them to codevelop an upgraded version of Hemyo hardware that has their technology inside. And we're interested in engaging in more collaboration opportunities with other Japanese companies, especially in the sensor and hardware manufacturing sectors.
And of course, we're looking for more partners and customers who are looking to upgrade and adopt new HMI for their products or processes. So if you're interested and curious to know more about us, please come talk to me at our booth. Thank you.