Interpretable AI

-
JACK DUNN: I'm Jack Dunn, co-founder of Interpretable AI.
DAISY ZHUO: I'm Daisy Zhuo. I'm also a co-founder at Interpretable AI. At Interpretable AI, we build AI solutions that are fully explainable but also achieve state-of-the-art performance comparable to deep learning systems.
JACK DUNN: At our company, we believe that in order for AI to unlock its full potential, it needs to be fully explainable and understood by the stakeholders. For instance, if a doctor is using an AI tool to generate recommendations for surgeries, they need to be able to understand why these surgeries are being recommended; they can't just blindly follow the AI.
DAISY ZHUO: Another example would be at a retail company: when the AI tells you what the assortment planning should be, the stakeholder really needs to understand why it's making that recommendation and what features of the stores lead to that decision before they can fully trust it and put it into full deployment.
JACK DUNN: The interpretability of our models allows the stakeholder to transparently inspect the predictions that are being made and check whether they align with their intuition. They can suggest new variables, other factors to consider to improve the performance of the model, and really work together with the machine to get the best outcome possible.
DAISY ZHUO: So in the case of the doctors, when a doctor can look into the model and see certain variables that align with their medical expertise, they will really believe that the predicted probability or predicted intervention would actually be useful for the patient, for both safety and efficacy.
-
DAISY ZHUO: Our technology at Interpretable AI is based on years of research at MIT. Our core product, optimal decision trees, is able to produce a simple decision path that humans can follow, one that mimics the human decision-making process while maintaining a level of performance similar to deep learning systems.
JACK DUNN: So traditionally, practitioners have had to choose between models that are interpretable and models that have good performance. The interpretable models that have existed out in the world for the last 30 years don't reach the same level of performance as deep learning or boosting. Optimal decision trees harness the power of modern optimization and years of research at MIT to bridge that gap, simultaneously delivering full interpretability and the state-of-the-art performance that we've come to know from deep learning and boosting.
DAISY ZHUO: So going to the health care model for surgical risk prediction: on the one hand, you have these very simple logistic regression models that do not capture all the nuance and non-linearity between the variables, and their predictive power is generally poor. On the other hand, you have these deep learning systems, and they give you a pretty good prediction, but there's no way to really look into why they're making that prediction. You don't really know what the important variables are or how the variables interact with each other.
Doctors cannot really inspect whether it aligns with their medical training or not. So we take the benefits of both: the simple approach that people can look into, and the high performance of the deep learning system. We generate a simple tree where the doctor can follow the first split, on a variable that aligns with their training, and continue on until it comes to a prediction, so they know exactly what the logic is behind why the model made that prediction.
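[Note: To make this concrete, here is a minimal sketch of what such a readable tree looks like. It uses scikit-learn's greedy CART as a stand-in, not Interpretable AI's proprietary Optimal Trees, and a public dataset chosen purely for convenience.]

```python
# Illustrative sketch only: a shallow, auditable decision tree.
# scikit-learn's CART stands in for the proprietary Optimal Trees product.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow tree: every prediction is reached by answering a few questions.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=30, random_state=0)
tree.fit(X, y)

# Print every split so a domain expert can audit the full logic.
print(export_text(tree, feature_names=list(X.columns)))
```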
JACK DUNN: The thing that sets Interpretable AI apart from other approaches to interpretability in artificial intelligence is that we build models that are interpretable from the beginning and really push the performance to the limits. Other approaches take existing neural nets and try to make them interpretable after the fact. For that reason, they never really achieve true interpretability; they're just trying to explain what their model is doing. Whereas with our models, we can explain, from the get-go, exactly how they work.
DAISY ZHUO: So for instance, there are people who try to explain a risk scoring system in banking: they train a deep neural network, and it predicts that a person has an 80% chance of default. At that point, they try to add some interpretability using these post hoc analysis approaches based on the result, but the explainability is still at a very local level. It applies only to people who are already in that group: given their demographics and their past history, they try to see the impact of certain variables within that very narrow group of people. Whereas under our optimal decision trees approach, you can see globally why that person falls in that group, and what decisions were made to segment people the way our model did.
So our interpretability, as Jack was saying, is really built from the ground up. It's global and can be explained to everyone in the population.
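[Note: Continuing the illustrative scikit-learn sketch above (again a stand-in, not the company's product), the "global" property can be seen by assigning every person to a leaf of the same tree; `tree` and `X` are from the earlier sketch.]

```python
# Every person falls into exactly one leaf (segment) of the same global tree,
# so the segmentation itself, not just one prediction, can be inspected.
import pandas as pd

leaf_ids = tree.apply(X)                      # leaf (segment) id for every person
print(pd.Series(leaf_ids).value_counts())     # population size of each segment
```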
JACK DUNN: So we met in the Operations Research program at MIT when we were doing our PhDs, and we saw that there's a wealth of research coming out of the group there in particular, but also the wider research community, that's just not making it to the business world. There are so many different techniques and methods and ideas that have real business value, but they just get published in academia and that's the end of it. So we decided that we really wanted to take some of these ideas that can change the way the business world operates and bring them out into the world to create that value.
DAISY ZHUO: Good implementation of a good idea is actually very difficult and is not really incentivized in academia. That's why, a lot of the time, a great idea just stops at the paper publication. For us, since we really believe in the business impact of making these algorithms perform at the level they should, it takes a lot more engineering effort and a lot more design for that to work. It's something that we're good at and that we really strive to do.
So take our Interpretable AI software modules, for example: they're the two products that came out of our respective doctoral thesis research. We put a lot of effort into making them scalable, making them fast, and making them work with real data, given all of the nuance and complexity in real data sets. That's why, even just during the research and consulting engagements of our PhD projects, these software modules have seen a lot of impact. So we believe that taking the same approach with some of the other interpretable methods coming out of research can deliver just as much impact.
-
JACK DUNN: So unlike a traditional PhD program, our PhDs involved a lot of time consulting and working with real businesses on real problems. We weren't just locked away in the office working on our theory. We were actually out there, driving our research based on what delivers value to companies. And that's something that's really unique to MIT: we're driven largely by making sure that the work we do can actually create value out there in the world.
DAISY ZHUO: So for example, the software modules that we've developed were motivated by a lot of the consulting engagements we were working on at the time. There were the health care applications and the insurance work, where we really saw the need from businesses for interpretability and for scalability.
And that's why we designed our methods to fit their needs and the broader needs of the market. On the other hand, having these developed methods and going back to the business problems to apply them gives us more ideas on how to further improve them. So MIT, and this whole culture of working together with companies on real problems, really helped us speed up the methodological development that makes an impact.
We launched in June 2018, and since then we've already had a number of successes. One example is that we've been working with a large insurance company, an insurance data company, that had a lot of challenges in managing their data and in generating an interpretable understanding of people's risk.
We were able to take on that problem and help them from beginning to end: from the data pipelining, the data cleaning, and the missing data imputation, all the way to not only generating understandable and accurate risk probabilities, but also optimizing which insurance providers their people should have in all of these high-stakes questions.
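[Note: A hedged sketch of the kind of end-to-end pipeline described, from imputation to risk probabilities. The file name, column names, and model choice are illustrative assumptions, not the client's actual setup; numeric features are assumed throughout.]

```python
# Illustrative end-to-end sketch: impute missing data, then predict risk.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("policyholders.csv")     # hypothetical input file
X = df.drop(columns=["high_risk"])        # hypothetical numeric features
y = df["high_risk"]                       # hypothetical label column

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),           # fill missing values
    ("model", DecisionTreeClassifier(max_depth=4, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipeline.fit(X_train, y_train)

risk = pipeline.predict_proba(X_test)[:, 1]   # understandable risk probabilities
```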
JACK DUNN: Another example is that we've recently secured a partnership that will allow us to bring our technology to a number of large retailers in the US and globally. Our technology will allow them to really get into the data and understand what drives sales at their stores, how they should prepare when new products are about to be released, how they should stock them, and where they should ship them in their warehousing. It really allows them to harness the power of AI without wondering whether they can trust what it's doing. They can see, because of the interpretability, exactly what's going on.
DAISY ZHUO: So another example would be a cancer mortality predictor that we developed together with oncologists at Dana-Farber. We were able to predict very accurately what the mortality rate is for a cancer patient coming into the system. And not only is this just as accurate as some of the other predictive models out there; the doctors actually sat down with us the whole time to go through every single variable, all of the values, and all of the logic behind making that prediction, and they completely got on board and agreed that it matched their intuition. That's when we started launching this product. So in the next year or so, we hope to have this product used across a number of large cancer hospitals in the US.
Yeah. And not only does this help the doctor validate their intuition; I think this is even more important for the patient, so they understand why they're seeing the numbers in their counseling session with the doctor. When the patient can understand why certain predictions are made, not only will the patient have a better sense of the outlook, but more importantly, the patient will know what the options are and what the outcomes are for each.
So for example, if the patient has a higher chance of mortality under a certain treatment, maybe it is not the best idea to go on this expensive treatment that will lower their quality of life and increase the burden on their family. Perhaps the other option of going on a less aggressive treatment, or even going to a terminal care facility, would be a better option, considering all of the potential outcomes.
-
JACK DUNN: We're currently working on something that we think has a lot of future potential. We have a collaboration with the surgery department at Massachusetts General Hospital. What we've done is develop a risk calculator for their surgery patients, in particular emergency surgery. So when somebody comes into the ER, the surgeon needs to quickly make a snap call on what type of surgery this patient needs, if any, whether they should proceed, and what the risks are. And because this person is in a bad state, they need to make this decision fast.
So what we've done is develop a tool, a calculator that, with a series of four to ten questions, can accurately predict that patient's risks from the surgery and help the doctor understand whether they should proceed with the surgery or not.
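[Note: A minimal sketch of why a tree-based calculator needs only a handful of questions: it asks just for the variables on the patient's own path from root to leaf. This walks a fitted scikit-learn tree and is not the MGH calculator itself.]

```python
# Illustrative sketch: ask only the questions along one patient's decision path.
from sklearn.tree import DecisionTreeClassifier

def ask_along_path(tree: DecisionTreeClassifier, feature_names):
    t = tree.tree_
    node = 0
    while t.children_left[node] != -1:            # -1 marks a leaf node
        name = feature_names[t.feature[node]]
        threshold = t.threshold[node]
        answer = float(input(f"{name}? "))        # one question per split
        if answer <= threshold:
            node = t.children_left[node]
        else:
            node = t.children_right[node]
    counts = t.value[node][0]                     # class totals at the leaf
    return counts / counts.sum()                  # predicted risk probabilities
```

A tree of depth four to ten means the surgeon answers at most four to ten questions before reaching a prediction.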
It's currently being used daily by the surgery department at Mass General Hospital. And we're hoping that we can roll it out more widely across many hospitals in the US and further afield, to really drive the value and make sure that we can address this tough life-or-death problem.
DAISY ZHUO: We build a wide range of AI solutions across a variety of industries to solve their hardest problems. In the past, we have built products that specifically address outcome prediction in health care, assortment optimization in retail, malware detection in cybersecurity, and a number of other cases.
Because our technology covers such a broad range of problems, we can really tackle any business problem and take it from the data, whether it's nice and clean or messy, all the way to good predictions and high-value prescriptions. So we believe that we can really quickly expand and adapt our product portfolio to fit the markets that need it most.
JACK DUNN: So we believe our interpretable AI algorithms are something unique in the market, delivering a combination of interpretability and performance that nobody else offers. Also, having developed the algorithms from the ground up ourselves, we intimately understand their details. And that lets us iterate faster and make tweaks. When we see them not performing to the best of their abilities, we can go back to the data, do the right transformations, and really, in a fast and efficient manner, deliver a solution that maximizes the business value.
DAISY ZHUO: The fact that we have developed these algorithms ourselves, are intimately familiar with them, and are able to tweak them to reach their best performance means that we can really be the best solution for a specific industry and its hardest problems.
Yeah. So in the example of the cancer mortality predictor, when we looked at the tree, we saw that weight is a very important variable that shows up multiple times, but the tree was not quite at the level of performance we wanted to see. So we sat down with the doctors together, and we figured out a way to smartly engineer new variables that captured the actual temporal trend in weight.
We generated new variables, like the change in weight or the momentum in the change in weight, and inserted them back into the model. And quickly, because we understood the data better and could retrain the model with these more important variables, we reached a level of performance way better than anything that already existed.
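[Note: A hedged sketch of the weight-trend feature engineering described. The file name and column names (patient_id, date, weight_kg) are illustrative assumptions.]

```python
# Illustrative sketch: derive temporal weight-trend features per patient.
import pandas as pd

df = pd.read_csv("weights.csv", parse_dates=["date"])   # hypothetical file
df = df.sort_values(["patient_id", "date"])

# Change in weight since the previous visit.
df["weight_change"] = df.groupby("patient_id")["weight_kg"].diff()

# Momentum: the change in the change in weight.
df["weight_momentum"] = df.groupby("patient_id")["weight_change"].diff()
```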
JACK DUNN: So one of the best industry fits for our technology is in areas like banking and insurance, where there are high regulatory requirements. Traditionally, AI technology hasn't been used in these industries because there's a very high bar: you have to be able to explain and justify to the regulators that what you're doing follows the rules and that you understand what's going on.
For instance, if you apply for a loan from a bank and the bank denies you, they need to give you a credible and intuitive explanation for why you've been denied. It's not enough for the bank to just say that their black box AI method told them they can't give you a loan. For that reason, our interpretable methods can still be used by the bank, because we can output the series of variables and decisions that led to the loan being denied, while still giving the bank the ability to harness the increased predictive power of AI in general.
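[Note: A minimal sketch of how a tree's decision path can be rendered as the plain-language reasons an applicant or regulator expects. The fitted scikit-learn `tree`, the applicant's feature vector `x`, and the feature names are assumed from context; the example output strings are hypothetical.]

```python
# Illustrative sketch: report the tests along one applicant's decision path.
def denial_reasons(tree, x, feature_names):
    # x: a NumPy array with one applicant's feature values
    node_ids = tree.decision_path(x.reshape(1, -1)).indices  # nodes visited
    t, reasons = tree.tree_, []
    for node in node_ids:
        if t.children_left[node] == -1:           # leaf: no test to report
            continue
        name = feature_names[t.feature[node]]
        threshold = t.threshold[node]
        op = "<=" if x[t.feature[node]] <= threshold else ">"
        reasons.append(f"{name} {op} {threshold:.2f}")
    return reasons   # e.g. ["income <= 40000.00", "debt_ratio > 0.35"]
```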
DAISY ZHUO: As these industries start to see the advantage of our unique approach, which on the one hand gives banks better risk estimates for each person, and on the other makes things fairer and easier to understand from the consumer side, that will allow the industry to engage more customers and grow.
So we're hoping that, in the long run, having these collaborations and partnerships with banks and other companies in these industries will help the industry grow as a whole.
-
JACK DUNN: At Interpretable AI, we turn data into trusted action. We all know that the future of business is in artificial intelligence. But in order for AI to reach its full potential, we need to have human and machine working together. For that to happen successfully, the human really needs to understand and validate and trust what the AI is producing.
Interpretable AI is the result of years of AI research at MIT, and we have interpretable methods that deliver solutions you can understand, and that have the same level of performance as methods like deep learning. We have a proven track record of delivering business value in a variety of industries, such as banking, insurance, cybersecurity, retail, and health care. We're excited to tackle your toughest problems and give you a solution that you can understand.