Breaking down bias in AI applications

STEX25 Startup:
April 4, 2022 - June 30, 2024

 

By: Eric Bender

Applications built on machine learning algorithms are entrenching themselves everywhere, but they often bring major weaknesses that aren't discovered until after they are deployed. Some of the most obvious examples of faulty or unfair AI applications crop up in the news, such as when autonomous cars strike pedestrians. Other damage is inflicted quietly, for example by financial applications that discriminate against people of color. 

Often what drives these flaws is built-in bias, due to a lack of breadth and balance in the datasets used to train the AI software, says Elaheh Ahmadi, co-founder and chief executive officer of Themis AI, a STEX25 startup. 

Her company offers a tool to help companies automatically design machine learning models that weed out this bias. "We have a technology that can be applied to many different scenarios and still give you the results that you want," Ahmadi says. Moreover, unlike most competing software, the Themis AI algorithm not only identifies bias but finds ways to remove it. 

Giving AI a fair shot 

Themis AI was born in the lab of Daniela Rus, MIT professor of electrical engineering and computer science, where Alexander Amini, Themis co-founder and chief science officer, and his collaborators came up with an elegant mathematical solution for removing bias from AI datasets. 

Their first application was in autonomous vehicle navigation. One huge challenge in this field is the relatively small number of data samples for cars in less common situations such as turning sharply or traveling in fog. The MIT model performed unusually well in automatically detecting and handling these difficult scenarios—without requiring additional training or datasets. 

The de-biasing algorithm also did extremely well in facial detection, at a time when AI models from Facebook and other global players were found to work quite poorly on faces of people of color. The researchers uncovered many potential biases in the open-source datasets being used to train facial detection applications, including a lack of samples of Black faces, people wearing glasses and people seen at different angles. Again, the MIT algorithm could not only automatically detect these biases but make sure that the model could work around them. In 2019, Amini and colleagues showed that the algorithm decreased "categorical bias" for features such as skin color compared to state-of-the-art facial detection models. 

Ahmadi, Amini and co-workers also applied the anti-bias technology to clinical drug trials. Analyzing trials conducted between 2004 and 2020, they doubled the accuracy of predicting success for these extremely expensive studies. "That was the first time it clicked for me that we were solving a very difficult mathematical problem, and being able to solve it well has an impact in saving people's lives," she says. 

Bringing down the bar for bias 

When a machine learning model is trained to look at specific features in a set of data, it typically ends up learning much more from heavily sampled scenarios (like driving down a highway on a sunny day) and much less from less common scenarios (like taking an extremely sharp turn). 
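To make that imbalance concrete, here is a minimal sketch with invented scenario tags and counts (real datasets rarely come with such labels, which is part of the problem discussed next). It only shows how little weight rare cases carry in the average loss a model is trained to minimize.

```python
from collections import Counter

# Hypothetical driving dataset with one coarse scenario tag per sample; the
# tags and counts are invented to show how skewed real-world collections get.
scenario_tags = (
    ["highway_sunny"] * 9_000
    + ["city_rain"] * 800
    + ["sharp_turn"] * 150
    + ["fog"] * 50
)

counts = Counter(scenario_tags)
total = sum(counts.values())
for scenario, n in counts.most_common():
    print(f"{scenario:>14}: {n:6d} samples ({n / total:6.1%} of training data)")

# With uniform sampling, "fog" appears in roughly 1 of every 200 training
# examples, so errors on it barely move the average loss the model minimizes.
```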

Fixing this problem begins with identifying which features in the dataset the model cares about, and which ones it finds difficult. This is no small feat, Ahmadi says. 

Sometimes this problem can be solved by manually labeling the data. But even if a company has the resources to do so, that's an extremely labor-intensive process and not a favorite pastime for data scientists. "Also, humans may be blind to some of the features that the model could care about," she points out. 

Themis AI software automatically figures out what features the AI model cares about. Next it identifies the scenarios that are difficult for the model and makes sure that the features for these scenarios are learned properly so the model can handle them acceptably.
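Themis AI has not published its production method, but the 2019 academic work mentioned earlier gives a flavor of the idea: learn a compact representation of the data, estimate how common each region of that representation is, and show the model rare regions more often during training. The sketch below is illustrative only, not the company's product; it assumes the `latents` embeddings come from some already-trained encoder and uses a simple per-dimension histogram as the density estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder embeddings standing in for the output of an already-trained
# encoder (10,000 samples, 8 latent dimensions). Purely illustrative.
latents = rng.normal(size=(10_000, 8))

def rarity_weights(z: np.ndarray, bins: int = 10) -> np.ndarray:
    """Histogram each latent dimension, multiply the per-dimension densities
    for each sample, and weight samples inversely to that estimated density."""
    n, d = z.shape
    density = np.ones(n)
    for j in range(d):
        hist, edges = np.histogram(z[:, j], bins=bins, density=True)
        idx = np.digitize(z[:, j], edges[1:-1])  # bin index in 0..bins-1
        density *= hist[idx] + 1e-12
    weights = 1.0 / density
    return weights / weights.sum()

weights = rarity_weights(latents)

# During training, draw batches with these probabilities instead of uniformly,
# so under-represented regions of the latent space are seen more often.
batch_idx = rng.choice(len(latents), size=256, replace=True, p=weights)
```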


For instance, financial models might analyze groups by race, zip code or income level. "You want to make sure that your model is performing uniformly well around all of those groups, rather than having a huge imbalance in performance," says Ahmadi. If the model succeeds for 90% of one group but only 20% of another, it needs to be fixed. 
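A minimal sketch of that kind of per-group audit, using made-up groups, labels and predictions and a hypothetical 30-point gap threshold (not a regulatory standard):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up audit data: group membership, true labels, and predictions from a
# hypothetical model that is deliberately much worse for one group.
groups = rng.choice(["group_a", "group_b", "group_c"], size=2_000)
y_true = rng.integers(0, 2, size=2_000)
error_rate = np.where(groups == "group_c", 0.45, 0.08)
y_pred = np.where(rng.random(2_000) < error_rate, 1 - y_true, y_true)

# Accuracy computed separately for every group, then the worst-case gap.
per_group = {
    g: float((y_pred[groups == g] == y_true[groups == g]).mean())
    for g in np.unique(groups)
}
gap = max(per_group.values()) - min(per_group.values())

for g, acc in sorted(per_group.items()):
    print(f"{g}: {acc:.1%} accuracy")
print(f"worst-case gap: {gap:.1%}"
      + ("  <- investigate before deployment" if gap > 0.30 else ""))
```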

Companies are justifiably concerned about meeting regulations designed to assure fairness in machine learning applications, rules that are already relatively well established in Europe. But regulatory conflicts, legal risks and bad publicity aren't the only potential downsides, she emphasizes. 

Take the example of the historical data that companies employ to determine who should get a loan and what the applicants should be charged. All too often, AI programs amplify the racial biases that were endemic in the past, with a very negative social impact, Ahmadi points out. But companies also lose substantial numbers of clients who actually are good candidates for loans. 


Crucially, the Themis AI algorithm doesn't care what tasks are covered or what datasets are employed. "We can uncover biases in facial detection, loan predictions and predictions of disease from chest X-rays," Ahmadi says. 

From solution to startup 

Impressed by the power of the anti-biasing algorithm, in August 2021 Rus and her graduate students made a pitch to The Engine, MIT's tough-tech venture organization. The Engine gave them funding for a year of technology development. Their startup, named for the Greek goddess of justice, launched the following month. The co-founders now are raising their next round of funding, expanding their engineering team and adding employees with expertise in AI regulation and in software markets. 

By year-end, Themis AI plans to launch a de-biasing platform product that will lead companies creating machine learning models through three steps. The first step is to identify the vulnerable areas within the model's data and the potential risks. Second, the software will leverage that knowledge while training the model. Third, the application will certify that the model is appropriately robust and fair. (Importantly for customers, bias can be assessed by established open guidelines.) 

Themis AI is collaborating with several design partners in labeling, financial, human resources and medical applications. "They're letting us know what they care about, what they want to see, how they want to interact with the product, how it should be built and how it should be integrated into their pipeline," Ahmadi says. The company also wants to collaborate with machine learning operations companies, which provide many tools to help in building or deploying applications but often struggle with bias problems that are discovered only after the applications are deployed. 

As widespread as machine learning software has become, Ahmadi notes, the software still acts like a black box. "Trying to figure out how the black box works and trying to minimize all of the risks that come with it is a hard problem," she says. "I don't think that's something that every company should do on their own." 

"In order to build an AI model that is robust and fair and doesn't put your company or human life at risk, you don't need to hire a lot of data scientists, " she adds. "You don't need to hire a lot of risk managers. You just need, with the resources that you have, to build technology that doesn't have any of these issues. And that's what we're bringing."