Akselos

Startup Exchange Video | Duration: 23:36
September 26, 2016
Video Clips

    DAVID KNEZEVIC: I'm David Knezevic, and I'm the CTO of Akselos.

    Akselos is a simulation technology company that's a spin-off from MIT. Our product is, at its core, the result of over 10 years of research in the mechanical engineering department at MIT, together with many other universities around the world. And what we provide is accelerated simulation software, and also simulation software that scales up to much, much larger infrastructure than has been possible with conventional approaches.

    So the basic situation is that simulation is a major part of engineering workflows across many, many different industries, because engineers need to simulate the systems that they're designing in order to ensure that they're going to behave correctly under all operating conditions, and often, under very extreme operating conditions in many industries. We deal with oil and gas and wind energy, with offshore structures, for example, where these structures are deployed for decades at a time, and they're subjected to extremely challenging situations over that period. They need to ensure that these structures are robust and reliable over a period of decades.

    So the traditional way engineers have ensured this is using computational tools, among other things of course, but one key aspect of engineering workflows is computational modeling to simulate these structures under all the different scenarios that are relevant. But a major problem that occurs in practice is that the simulation technology that's traditionally been used, which is called finite element analysis, or FEA as it's widely referred to, doesn't scale very easily to large infrastructure. It basically hits a computational wall, because the computational cost grows very, very quickly. And even with rapid advances in computational power over the previous decades, it's still a major limit on the size and complexity of the modeling that can be done in a practical way.

    So at Akselos, we saw this as an opportunity. Myself and my colleagues at MIT-- I was a postdoc at the time-- were working in Professor Tony Patera's research group on ways to accelerate this type of simulation and scale it up to the largest infrastructure, you know, entire oil tankers, entire offshore wind farms and things like this, where trying to apply conventional FEA really just didn't work. So with our technology, we can create what we call Akselos digital twins of these very complex pieces of infrastructure, model them in full detail, and with the speed that engineers require for iterative design and all these other workflows that are very important in their day-to-day practice.

    Akselos's unique value proposition, our unique advantage, is accelerated simulations, and we call this technology RBFEA. So instead of conventional FEA, we have RBFEA, which stands for reduced basis FEA. And reduced basis methods were the core R&D done at MIT that I was referring to earlier.

    So the speed up we provide compared to FEA is fully due to the RB, the reduced basis. It's completely an algorithmic concept, a way to accelerate FEA. A way you can think about it is that conventional FEA is very generic: it models systems in a very generic way, and it basically doesn't take into account the specificities of the particular system and the particular scenario that it's under.

    So for example, the way FEA works is it basically uses elements. FEA stands for finite element analysis. So the elements that it uses are completely generic. They're cubes, and tetrahedra, and triangles. They have no physics built into them.

    In the RB concept, we build on top of conventional FEA. It's still FEA under the hood, but with a layer on top that takes into account the actual physical situation, the physical modeling, the physics of the system that's actually being modeled in order to build components, where each component actually represents the physical system that's part of the structure that you're modeling. And therefore, you embed all this extra data into these components, and you have to store extra data, but as a result, you obtain, typically, 1,000 times or more speed up for large infrastructure. So it is purely an algorithmic acceleration.
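
    The general reduced basis idea described above can be sketched in a few lines of linear algebra. This is a hedged, toy illustration of the concept-- a one-dimensional model problem, not Akselos's actual solver or its component formulation: solve the expensive full model for a few training parameters offline, then, for any new parameter, solve a tiny projected system online.

```python
import numpy as np

# Toy 1D "full-order" FEA system: K(mu) u = f with a parameterized
# stiffness. Illustrative only -- real models have millions of DOFs.
n = 2000

def assemble_K(mu):
    # Tridiagonal stiffness matrix (1D Laplacian) scaled by a stiffness mu.
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return mu * K

f = np.ones(n)

# Offline phase: solve the expensive full model at a few training
# parameters, and orthonormalize the snapshots into a basis V (n x 3).
snapshots = np.column_stack(
    [np.linalg.solve(assemble_K(mu), f) for mu in (0.5, 1.0, 2.0)]
)
V, _ = np.linalg.qr(snapshots)

# Online phase: for a new parameter, project onto the basis and solve a
# 3 x 3 system instead of the n x n one -- this is where the speedup lives.
mu_new = 1.3
K = assemble_K(mu_new)
u_rb = V @ np.linalg.solve(V.T @ K @ V, V.T @ f)

# Compare against the full-order reference solve.
u_full = np.linalg.solve(K, f)
rel_err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
print(f"RB relative error: {rel_err:.2e}")
```

    The online solve is independent of the full model size, which is the source of the large speedups quoted above; in this toy problem the solutions for all parameters happen to lie in the snapshot space, so the reduced solution is essentially exact.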

    We also provide a cloud based infrastructure and leverage HPC cloud computing, parallel computing to the fullest extent, because we want to make sure our platform is the most powerful for large scale infrastructure modeling. But the speed up is purely-- the speed up we quote is purely an algorithmic speed up, because obviously, you can apply the same HPC techniques to conventional FEA. So we don't see that as a unique advantage. The unique advantage is the RBFEA that we provide.


    DAVID KNEZEVIC: So the question is when does Akselos's simulation software provide the most value. What we see is that the software is extremely valuable both in the upfront design phase of a project and in the post-deployment, or existing infrastructure, phase of an asset's lifetime. The value proposition is very different, but equally compelling, in both cases. In the upfront design phase, you can save a huge amount. This is actually a major focus in the oil and gas industry at the moment: CapEx reduction. So what the industry is pushing for is lean design.

    There's a strong opinion in the industry that a lot of things are extremely overdesigned, because you have excessive safety factors, because you don't really understand what's happening or what might happen to a structure, because you've got overly conservative, simplistic models of what's happening. If you actually model things accurately, you can understand the risks much more precisely and reduce these safety factors-- while still being, obviously, extremely safe, and ensuring that in any scenario that occurs you're well within any sort of guidelines that are provided by the industry.

    But if you reduce the overdesign by 10%, that's a huge cost saving, both in the upfront construction, and also in the lifetime of the asset as you do maintenance. If it's a leaner design, the maintenance is cheaper over the entire lifetime. So this CapEx reduction upfront leads to OpEx reduction over the lifetime of the asset, just in a very natural way. So the ability to simulate these large systems in detail really naturally leads to major cost savings in that way. So that's in the upfront phase.

    Also, in the deployment phase-- obviously, I've been talking about this concept of an Akselos digital twin linked to sensor data. And that very naturally leads to major cost savings. If you can detect and react to any issues in an asset in a predictive and precise way, you can reduce downtime. You can reduce physical maintenance when people have to go out to these assets and do physical maintenance, which is extremely expensive, both because you have to shut down the production, but also any kind of physical intervention is extremely costly.

    So if you've got a system in place that enables greater insight based on constant sensor monitoring and constant digital twin updates, leading to a very precise understanding of the implications of the current state of your asset over its 20, 30, 40-year lifetime, you can save huge amounts of money over that period of time. And that's exactly where we're seeing major interest from large industry players, who have lots of infrastructure, see these costs all the time, and are very interested in upgrading their technology to enable a much more effective maintenance and operation regime based on these technologies.

    In terms of modifications to a major piece of infrastructure-- that happens all the time, and our software is perfectly suited to it. One of the advantages of our software compared to conventional approaches is that it's component based. So it's very modular. If there's a change in a part of a model, you can just remove a bunch of components and replace them with different ones that take into account that change.

    And again, parameters are also a very relevant concept in that situation. So you can modify models by just changing parameters and resolving very, very quickly. So it's extremely well-suited to reconfiguration of physical assets. When you change your physical asset or when you want to figure out what kind of change would be a good idea, it makes sense to do it to your digital twin first, understand the impacts on reliability, and so on-- throughput, safety, these kinds of things.

    And then obviously, you can model it all out on the digital twin first, and then make the change to the physical asset. Or you can do the reverse: any changes that occur to the physical asset can easily be incorporated into the computational model.

    I mean, that's not unique to what we provide. But our software facilitates it: because it's component-based, modular, and parametrized, and because the solves can be done extremely quickly, it all comes together to mean that it's extremely well-suited to reconfiguration, resolving, and reanalysis of these critical assets.

    And I guess I should emphasize the types of assets we're talking about: we're working with power systems companies, so gas turbines, vibrations of gas turbines; wind turbines; offshore structures; oil and gas structures; ships, so marine; mining infrastructure-- a very wide range. We're looking at aerospace and automotive as well. So it's a very, very wide range of application areas, where critical engineering decisions have to be made, and you need the best analysis tools to understand exactly what's happening in the design.


    DAVID KNEZEVIC: In terms of detection, how does the Akselos platform help detect failure modes? Well, I can give you one of the examples we're working on currently with a partner in Denmark, which is to instrument offshore assets. For example, we've made a wind turbine model with them. So they had data for a wind turbine. We created a digital twin of a wind turbine.

    And the idea is you put accelerometer sensors on the asset. And you basically have signatures that you're looking for. So if a certain signal occurs, you know that it matches certain behavior that you're worried about. And then you can obviously run algorithms and update your model such that it matches the sensor data. And then basically, you're able to identify any issues with the actual asset in that way.

    So basically, the idea is to calibrate the model based on the data you have. And once you've calibrated it, that's what enables you to diagnose what's really going on. So as I mentioned, accelerometers are one key tool. You put your accelerometers on a structure, and the structure's vibrating. And then based on the particular vibration, the particular environmental load that you've detected at that moment, you're able to calibrate your digital twin.

    And then you set all the parameters such that you're now saying, OK, this is the current state of the asset. Then you run 1,000 different simulations for 1,000 different scenarios, which is very quick because of our fast technology. And it's really the 1,000 extra simulations that you run that allow you to diagnose what's wrong. You've got a digital twin that is calibrated. Then you can use that for diagnostic purposes.

    So the sensors are really for the calibration. And then once it's calibrated, you've got a model that you can then use to understand what would happen now if a storm occurred? What would happen now if the wind doubled? What would happen now if I doubled the throughput in this asset? That's really what you're able to do is you've got the calibrated model, you can then use that for these risk assessment purposes.

    And that's really a unique capability that we provide, because no other software can handle this large-scale, fast modeling of these critical assets, and this calibration is computationally intensive. Calibration is essentially an optimization problem. So the way you calibrate something is you take measurements, and then you've got your computational model of the system. And you want to match your computational version of those measurements to the physical ones.

    And that requires iteration. You have to modify your model many, many times, maybe 1,000 times, until you get a good match with the measurements. And if each one of those modifications and re-analyses takes hours or days, then it's completely impossible to do this calibration. So what we provide are these fast parameterized models, which can be modified easily and solved easily, which means this calibration phase is extremely quick.
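
    As a rough sketch of that calibration loop (an assumed toy setup, not Akselos's software): treat the model parameters as unknowns and run a least-squares optimization so that the model's predicted sensor readings match the measured ones. The `model_response` surrogate, its stiffness and damping parameters, and the synthetic "measurements" here are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical fast surrogate model: predicted vibration amplitude of a
# single-mode structure at given excitation frequencies, as a function of
# a stiffness k and a damping c. (Assumed for illustration.)
def model_response(params, freqs):
    k, c = params
    return 1.0 / np.sqrt((k - freqs**2) ** 2 + (c * freqs) ** 2)

freqs = np.linspace(0.5, 3.0, 40)

# Stand-in for accelerometer measurements: responses of the "physical"
# asset, whose true state is of course unknown in practice.
true_params = np.array([4.0, 0.3])
measured = model_response(true_params, freqs)

# Calibration = least-squares fit of model predictions to measurements.
# Every residual evaluation is one model solve, so a fast model is what
# makes the optimizer's many iterations affordable.
result = least_squares(
    lambda p: model_response(p, freqs) - measured,
    x0=[3.0, 0.5],  # initial guess for (stiffness, damping)
)
print("calibrated (k, c):", result.x)
```

    Once the fit converges, the calibrated parameters define the current state of the digital twin, which can then be reused for the "what if" scenario runs described above.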

    So that's why we see our technology as a really complementary fit with the rise of the internet of things, the industrial internet of things, and the sensor technologies that are really predominant today.

    So the question is, what is meshing? It's really a core part of finite element analysis.

    So I referred earlier to the way finite element analysis is based on building blocks that are just geometric and don't have any physical representation of what's really happening. Meshing is the process of creating those geometries out of those building blocks. So we use meshing the same way every FEA code does, but then we build an RB layer on top, which provides acceleration.

    So I guess one thing I do want to emphasize is that the reduced basis technology we provide is very much complementary with conventional FEA. We see conventional FEA as an important piece in the value chain. We provide cloud-based conventional FEA in a fully scalable, fully parallel way. We also provide the unique RBFEA solver. So we provide this interplay between the conventional approach and the unique Akselos approach, which we think is a very nice way to provide an end-to-end analysis tool.


    DAVID KNEZEVIC: So our opinion, our point of view is that the computational intensity of FEA has led to all sorts of limitations in how it's used. OK? So if you're trying to use FEA earlier in the design process, it's a real pain, because it's-- if you're trying to do detailed simulations of a large system, it's very, very slow, OK?

    So the way people traditionally get around this with FEA is they either do detailed analysis of small parts of an overall system, or they do very, very coarse analysis of the overall system. But there is no ability to combine those two together and provide detailed analysis of the whole system. So that's what we enable. We enable fast detailed analysis of large systems, because we have RBFEA.

    And the other key point is our models are parameterized, OK, so they're built up of components, and each component has parameters in it. So what that means is you can click on a component in the Akselos graphical user interface and change a number like the length, or the density, or the stiffness, or the Poisson ratio, or the curvature of a component. You can change its geometric or its material properties. And what this means is it's very easy to update an Akselos model of a system, and that's extremely valuable when it comes to iterative design. You have a basic idea of what your system should look like, but you want to simulate it hundreds or thousands of times to home in on a better design.

    And this should really be done early in the design process, but with FEA, it's basically computationally intractable. Because if you modify a model, you then have to go through the whole FEA solve all over again, which is 1,000 times or more slower than our solve. So the approach we provide is to set up a component-based model and then go through iterative design optimization once the model is set up, and that can really help to home in on good designs much more efficiently than with conventional FEA. So basically, it boils down to workflow constraints with the conventional approach.
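
    The iterative-design workflow described above can be illustrated with a deliberately simple stand-in for the fast solve (a textbook cantilever formula, not RBFEA): sweep a design parameter over thousands of candidate configurations, evaluate each one quickly, and keep the leanest design that still meets a safety limit. All the numbers here are assumed for illustration.

```python
import numpy as np

# Toy stand-in for a fast parameterized solve: tip deflection of a steel
# cantilever plate, delta = P L^3 / (3 E I), with I = w t^3 / 12.
E = 200e9   # Young's modulus of steel, Pa
L = 2.0     # length, m
w = 0.1     # width, m
P = 1e3     # tip load, N

def tip_deflection(t):
    I = w * t**3 / 12.0
    return P * L**3 / (3.0 * E * I)

# Iterative design: evaluate thousands of candidate thicknesses and keep
# the thinnest (lightest) one that satisfies a 5 mm deflection limit.
max_deflection = 0.005
thicknesses = np.linspace(0.01, 0.10, 2000)
feasible = thicknesses[tip_deflection(thicknesses) <= max_deflection]
best_t = feasible.min()
print(f"leanest feasible thickness: {best_t * 1000:.1f} mm")
```

    With a conventional full-order solve taking hours per candidate, a sweep like this is impractical; with a solve that is orders of magnitude faster, it becomes a routine design loop.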

    And then in terms of whether this speed up we provide can also benefit the post-deployment phase: absolutely. That's actually where we're currently focused. We see using our software in the upfront design phase as extremely interesting and beneficial, and we've worked with customers to use it in that fashion, but we're also very focused on the post-deployment phase, where we provide, or a company creates, its own Akselos digital twin of a physical asset that has been deployed.

    And the key thing about an Akselos digital twin, again, is that assets are deployed for decades, and over the lifetime of the asset, it will change. It will get corrosion, it will get damage. If you're talking about offshore structures, sometimes a ship will bump into one of the pylons and it will get bent. You want to be able to easily update your model so that it incorporates all of those changes that are happening over the lifetime of the asset. And any time there's a change, you rerun 1,000 simulations on all the different scenarios you're worried about-- you know, a storm, big waves, all these kinds of different things. You simulate all these different scenarios, and then you understand if the current state of the asset is safe, or if you need remedial action, maintenance, and these kinds of things.

    And going further with that-- this is what we're very focused on at the moment, and we have a major European R&D grant for it-- we're incorporating these digital twins, the Akselos parameterized RBFEA digital twins, with sensor data. The idea, of course, is that many critical assets already have sensors on them, accelerometers, strain gauges, these kinds of things, and they're internet connected. So Industry 4.0, the internet of things-- these concepts play very nicely into what we're focusing on, because there are many sensors out there, and they're internet connected.

    This data can be fed into our cloud-based servers that host our digital twins, and the digital twins can be automatically calibrated based on that sensor data. So the idea is to have a real-time updating digital twin that takes into account any significant changes in your physical asset, and then you can run analysis on the updated, calibrated digital twin and, obviously, understand any risks and have very good intelligence about what you should do about them.


    DAVID KNEZEVIC: The current status of the company is that we have a team of 17 people. So, many great people with experience in research and high-end numerical methods, from MIT and other universities, as I've mentioned. We have an office in Boston, an office in Lausanne, Switzerland, and an office in Ho Chi Minh City, Vietnam.

    And the reason for the split, originally, was because the three co-founders were myself, Phuong Huynh, and Thomas Leurent. Thomas is Swiss and he moved to Switzerland and set up an office there. And Phuong Huynh is Vietnamese and he set up an office there. And each office has a different role. So the office in Vietnam is what we call the production team. And they create these digital twins that we're talking about. And they're really like an internal engineering team that uses our software to deliver projects to clients. The headquarters is actually based in Switzerland, so Thomas is there, and that's the business center. And then, we do a lot of development in Boston.

    So, yeah, we have an established team and we believe very strongly in the technology. By this point, there's been a significant amount of R&D and product development to get where we are today. And we're very excited to be part of the STEX program.

    We see the STEX program as incredibly valuable-- well, first of all, we're honored to be part of it. We see it as a great honor that the STEX team would see us as a high-end technology that they want to promote to their member companies. That's a real honor. And we at Akselos see it as a great way to continue our outreach to companies that we believe we can help.

    So we see companies struggling with infrastructure maintenance, which causes environmental issues, hazards, risks, all sorts of things that we believe our technology can help with. And in order to help in the most effective way, we want to connect with the most innovative companies and the companies that face the biggest challenges.

    And STEX-- and the ILP, in general-- has links with many companies of those types: the innovative companies that want to participate and partner with MIT. And so we feel very lucky to be part of this program, so we can connect with these companies, bring value to them, and continue to build the Akselos product to ensure it can benefit as wide a group as possible.

    We joined STEX when it originally began. We, again, saw that as a wonderful program where we could continue to link with the ILP and connect with ILP members in a very systematic way. And then, more recently, we were thrilled to be told that we were going to be part of the inaugural STEX25 group.

    So our understanding of STEX25 is simply that it's a smaller group of companies that the ILP has identified as particularly relevant to industry, with a bit more resources put behind connections to major companies, because these are seen as strategically relevant startups that really could benefit the ILP member community. So we see it as extra resources behind the STEX program, but for a more targeted group of companies. And we're delighted to be part of that targeted group.
