6.2023-London-Mobilus

Startup Exchange Video | Duration: 7:57
June 20, 2023

    JORDAN MCRAE: Hi. My name is Jordan McRae. I'm the CEO and founder of Mobilus Labs. I was class of 2005, MIT AeroAstro; in terms of our MIT link, my background is in space and ocean robotics, which led me down the path to Mobilus Labs. I'm joined by an amazing CTO who worked on the Amazon Alexa voice platform as the solution architect for Europe, and by the rest of the amazing team behind us, with the hardware and software capability to build intelligent systems for extreme environments.

    And that is what we do: we build a voice communications platform to solve what I call the last-meter problem of digital transformation. We work a lot with large organizations, particularly in the energy and manufacturing sectors. They've invested billions of dollars in all of the great technology we've been talking about today, AI, cloud-based systems, all in the effort of improving, primarily, productivity and resource management, and in some instances safety, communication, and collaboration.

    They all run into a pretty significant bottleneck when it comes to the human in the loop. All of that investment has gone towards improving something, taking some action, or doing some kind of analysis. But when action needs to be taken, it quite often falls on a human to do it, and quite often a human in the field or in a non-ideal environment. And so what you get is all of that investment going into your pocket, when what you really want is for it to go to your brain.

    And so that is what we refer to as the last-meter problem. And it can be addressed fairly effectively through communication, through voice, to that human being. We know that effective communication can have huge impacts: a 20% improvement in safety in many of the environments our customers work in, a 20% reduction in downtime, which is a huge aspect of their cost structure and P&L, and a 10% reduction in cost.

    So our solution is a combined hardware-software platform. The helmet I'm wearing is not just because I'm a safety nut: this is our first hardware product. It's actually the second generation of a two-way bone conduction device, which I'll explain in a bit more detail, and I'm very happy to give demos after the event.

    We build a software platform that sits behind that, leveraging what is an improved, superior voice communication signal. And we have some other hardware in the works at the moment that creates a hybrid network. In fact, the whole approach is to rethink where voice comes from. If voice is going to be a UI, we believe heavily in the idea of conversational workflows being a key part of the next user experience in this next industrial revolution, alongside all of the other technologies that are arriving.

    And we want something that is hands-free, ear-free, resilient to various environments, reliable, and frictionless. So, mobiWAN, the bone conduction headset: I'll give you a bit of my side profile so you can see it. For those of you not familiar with bone conduction, you'll see these small black modules sitting behind my ear, resting on my mastoid bone.

    And I can receive audio communications, someone's voice, through small micro-vibrations on the surface of my head that transmit through my skull to my inner ear, the cochlea. And that's how we hear. Typically, you use your outer ear and your middle ear to transfer air vibration into the cochlea. We bypass that whole process and go directly to the cochlea through vibration.

    For first-time users (and you can fact-check me on this by coming and doing a demo), it's almost like having a voice inside your head, because you don't have the physical sensation of having something in or over your ear. You just hear the person's voice inside your head. Now, where we differentiate ourselves with respect to bone conduction is that we can do this in both directions.

    So right now, you hear me because I transmit vibrations into the air and you pick them up through your ears. But I'm also transmitting vibration back through my jawbone, through my skull, to the cochlea. This same device can pick up the vibrations I'm transmitting through my body, so we can use it as a microphone. It's used both as a microphone and a speaker.

    And the key value proposition is that I can be in a very noisy environment right now, put earbuds in or ear defenders on, and still hear you perfectly well through this device. Likewise, because I'm isolating my voice through bone vibration, as opposed to trying to capture my voice in that noisy environment and then remove the noise, you get a much cleaner signal, regardless of how much environmental noise is around me.
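    [As a rough illustration of why isolating the voice at the source matters, here is a back-of-envelope SNR sketch. The 100 dB ambient figure is from the talk; the voice level and the airborne-noise rejection figure are assumptions for illustration only, not Mobilus measurements.]

    ```python
    # Back-of-envelope signal-to-noise arithmetic with illustrative numbers.

    def snr_db(signal_db_spl: float, noise_db_spl: float) -> float:
        """SNR in dB is the difference between signal and noise levels."""
        return signal_db_spl - noise_db_spl

    ambient = 100.0   # refinery-style noise floor, dB SPL (figure from the talk)
    voice = 75.0      # assumed raised-voice level at the microphone, dB SPL

    # Air microphone: captures voice and ambient noise together.
    air_snr = snr_db(voice, ambient)               # negative: voice is buried

    # Bone-conduction pickup: senses skull vibration rather than airborne
    # sound, so airborne noise is strongly attenuated before the sensor.
    rejection = 40.0  # assumed airborne-noise rejection, dB
    bone_snr = snr_db(voice, ambient - rejection)  # positive: voice dominates

    print(f"air mic SNR: {air_snr:+.0f} dB, bone conduction SNR: {bone_snr:+.0f} dB")
    ```

    [Under these assumed numbers, the air microphone sits 25 dB below the noise floor while the bone-conduction pickup sits 15 dB above it, which is the gap the audio samples below demonstrate.]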

    And so I'll give you a sample of this. This is some work we've done with Chevron, one of our early-adopter customers. You'll hear my voice on this audio sample. The first sample is a standard microphone in an environment of about 100 dB, and then you'll hear the mobiWAN in that same environment; they were recorded at exactly the same time.

    Don't worry about the words being said; these are something called Harvard sentences, so naturally they don't make any sense. The whole idea is to test the full range of human speech.

    JORDAN (ON RECORDING): The birch canoe slid on the smooth planks. It's easy to tell the depth of a well. These days, a chicken leg is a rare dish.

    JORDAN MCRAE: Again, that's 100 dB in the ISOMAX facility at the El Segundo refinery in California. This is exactly the same speech through the mobiWAN.

    JORDAN (ON RECORDING): The birch canoe slid on the smooth planks. It's easy to tell the depth of a well. These days, a chicken leg is a rare dish. Glue the sheet to the dark blue background.

    JORDAN MCRAE: So this is the second generation of our product. We had some really great progress with our first generation. We landed a nice contract with Trimble and Microsoft, where we're the exclusive audio solution for the HoloLens 2 in industrial environments, for exactly these reasons. We launched at the end of 2021, and we were, I think, a Time Magazine Best Invention, alongside the COVID vaccines.

    And since then (my mom really loved that, by the way), we've had a nice spike of activity, again within this industrial sector: oil and gas, chemical manufacturing, anywhere there are dangerous, noisy environments where teams are trying to collaborate effectively and safely. And I'll skip this.

    But basically, the future for us is about this conversational workflow interface. It doesn't matter to us whether the endpoint is a human or a voice-enabled intelligence; the idea is pretty simple, actually: garbage in, garbage out. Your algorithms can be as complicated as you want, but if the signal is really dirty, they're going to struggle to give you good performance on any of the transcription, translation, or anything else you do afterwards.

    So to get really good performance out of these digital workflow applications with respect to voice, you need a really good signal and really good infrastructure behind it. I'll leave it at that. I would love to give you a demo in the salon, and we'd love to connect with people who have useful frontline-worker or connected-worker applications for voice. Thank you.
