Your algorithm is a racist, sexist lawsuit waiting to happen (and how to fix it)

Matt Keyes
7 min read · Mar 25, 2021

Recent news has not been kind to the algorithmic industrial complex. Some of our finest names have suffered embarrassment and regulatory action. Here is one headline no one wants to see.

UnitedHealth’s Optum faces both headlines like this and regulatory action for an algorithm that provided better care to white people than black people, potentially affecting lives.

Late in October 2019, a study in Science flagged the fact that UnitedHealth’s platform Optum was allocating less care to black patients than to white patients who were less sick. This was not a great day for the PR department. An unintentionally racist algorithm had potentially affected the health outcomes of thousands of already disadvantaged patients. Within two days the New York Department of Financial Services took regulatory action.

Two weeks later, David Heinemeier Hansson published a colourful series of tweets about how Goldman Sachs and the Apple Card gave him 20 times the credit limit of his wife, even though they share the same financials and her credit score was higher than his.

A number of other people came forward with similar experiences, including, in a wonderful stroke of bad luck for Apple’s PR department, the co-founder of Apple himself, Steve Wozniak.

That weekend the story of Apple and Goldman was the most-read article on Bloomberg, and the New York Department of Financial Services initiated an investigation.

Total time elapsed between tweet and regulatory action: 48 hours.

Amazon ran into a similarly sticky situation when a hiring algorithm invented new ways to be sexist. Algorithms are not sexist themselves; they just reflect our own biases back to us. If you train an algorithm on 10 years of data on hiring and promoting, it will accurately reflect the fact that being male makes you more likely to be promoted. The data science team expected this and made the algorithm blind to the gender of the applicant. The algorithm then found new ways to discriminate against women, based on everything from the language they used in their resumes to the clubs they participated in (like Women’s Lacrosse). They didn’t solve for sexism; they just buried the problem deeper and accidentally legitimised it with data science.
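To make the proxy effect concrete, here is a minimal sketch on invented, synthetic data (not Amazon’s system; the feature names and numbers are made up). The gender column is removed before training, yet the model reproduces the biased hiring pattern through an innocuous-looking proxy feature:

```python
# Minimal synthetic example of proxy leakage: the gender column is dropped,
# but a correlated feature lets the model reproduce the historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)                     # 1 = female, 0 = male (synthetic)
# Hypothetical proxy: membership of a women's club correlates strongly with gender.
womens_club = (rng.random(n) < np.where(gender == 1, 0.6, 0.02)).astype(int)
skill = rng.normal(0, 1, n)

# Historical hiring decisions were biased against women, independent of skill.
hired = (skill + 1.0 * (gender == 0) + rng.normal(0, 0.5, n)) > 0.8

# Train with gender removed: only skill and the innocuous-looking proxy remain.
X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)
preds = model.predict(X)

print("Predicted hire rate, men:  ", preds[gender == 0].mean())
print("Predicted hire rate, women:", preds[gender == 1].mean())
print("Coefficient on womens_club:", model.coef_[0][1])   # negative: the proxy does the damage
```

The model never sees gender, but the weight it puts on the proxy feature does the discriminating that the removed column no longer can.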

What about me?

The point here is not that Goldman Sachs, Apple, and Amazon are bad at data science or unusually biased; quite the opposite. They are the best of the best. Amazon leads its peers in gender diversity and does some of the best data science in the world. There is a whole world of shoddy algorithms from less skilled and less well-intentioned practitioners of AI causing even worse damage, as detailed in books like Weapons of Math Destruction.

The point is that if even the best-intentioned, best-funded, and most-skilled teams are struggling with this and failing publicly, the rest of us are in real trouble.

Goldman, Apple, and Amazon have $950 billion in cash equivalents between them. What’s your budget?

Why do bad things happen to good organisations?

The scary thing is not that faceless organisations are racist or sexist. The much scarier thing is that even organisations which are not racist or sexist cannot prevent their algorithms from disadvantaging women and minorities. Lawsuits and regulatory action are a just result.

Amazon’s case is a classic example not of an algorithm being inaccurately sexist, but of an algorithm accurately reflecting back to us the biases inherent in the data it was trained on.

“Amazon’s recruiting engine went to great lengths to identify and weed out women. A women’s college in the education section of a resume was an automatic demerit. By contrast, the presence of typically male vocabulary, such as “executed,” was a point in favor. These are just two examples of how computers can sift through data to find proxies for the qualities that they want to seek or avoid. What seems like offhand, irrelevant information correlates to things like gender, race, and class.” — Cathy O’Neil

In UnitedHealth’s case, although the result was racist, the algorithm was not perpetuating human bias per se. Because the model was designed as much to reduce costs as to provide better care, it ranked the severity of an illness by how much that illness was costing. Because black patients consume less costly health care, the algorithm ranked them as less affected by their illnesses and allocated them less care, while allocating more care to white patients who were less sick.

The algorithm was successful in that it solved the task at hand: improve care while reducing cost. The unintended consequence was that it allocated less care to black patients because they were the most efficient consumers of health care. Ironically, in an attempt to make health care more cost-effective, it punished the most cost-effective consumers of health care.
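Here is a toy illustration of that failure mode, with invented numbers rather than anything from the actual Optum model. Two patients carry the same burden of illness, but the one who historically generates half the spending gets half the “need” score when cost is the proxy target:

```python
# Toy numbers, not the real model: two equally sick patients, one of whom
# historically generates lower health-care spending.
patients = [
    {"name": "patient_a", "chronic_conditions": 4, "annual_cost": 12_000},
    {"name": "patient_b", "chronic_conditions": 4, "annual_cost": 6_000},
]

for p in patients:
    p["cost_based_need"] = p["annual_cost"] / 12_000          # what the model optimised
    p["illness_based_need"] = p["chronic_conditions"] / 4     # what we actually meant
    print(p["name"], "cost-based:", p["cost_based_need"],
          "illness-based:", p["illness_based_need"])
# Equally sick, but the cheaper patient is scored as half as needy.
```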

We do not know what went wrong with Goldman and Apple, but one can imagine benign explanations. David Hansson and Steve Wozniak are not only male, they also happen to be famous. If the algorithm looked at non-financial data, there are a thousand clues that Steve and David are celebrities who might have both extraordinary spending habits and an ability to repay their debts beyond what their credit scores indicate. Accordingly, it might logically give celebrities higher credit limits than non-celebrities. In that case the algorithm didn’t discriminate because they are men and their wives are women, but because they are famous and their wives are not. Gender would have played no part.

Were the model to make this distinction between two women of equal financial means, we might applaud it as savvy business. Unfortunately, applause has not been Goldman’s experience so far.

Ways NOT to fix this

Amazon took a common path. In order to ensure their model wasn’t sexist, they removed features that indicate gender, such as sex and name (“features” are the data inputs a machine learning model uses to make predictions). This doesn’t solve the problem; it just makes it harder to diagnose and fix. In fact, if you don’t think your algorithm is sexist (it probably is), this makes the problem worse by lending a veneer of objectivity to biased decisions.
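The more useful move is the opposite one: keep the protected attribute out of the model’s inputs if you must, but hold onto it so you can audit the model’s outputs. A minimal sketch of such an audit (the function names are my own, not from any particular library):

```python
# Minimal audit sketch: the model never sees the protected attribute,
# but we keep it to one side so we can measure disparity in the decisions.
import numpy as np

def selection_rates(decisions, groups):
    """Share of positive decisions per group."""
    return {str(g): float(decisions[groups == g].mean()) for g in np.unique(groups)}

def disparity_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = list(selection_rates(decisions, groups).values())
    return min(rates) / max(rates)

# Toy decisions from a "gender-blind" model, with genders held aside for the audit.
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
genders = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])
print(selection_rates(decisions, genders))   # {'f': 0.2, 'm': 0.8}
print(disparity_ratio(decisions, genders))   # 0.25, nowhere near parity
```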

“Racist/Sexist AI isn’t a risk, it is a guarantee if used by inexperienced teams. AI will naturally learn our bias and amplify it.” — Ben Taylor, Chief AI Officer at Zeff.ai

This is not just an ethics problem. Female candidates who would likely have been more profitable employees were not hired or promoted, and Amazon lost value that may have gone to a competitor. Compensating for bias against women and minorities is not just good for the world, it’s good for business.

Unfortunately and unsurprisingly, the most common response to this ethical and business problem is to intentionally ignore it altogether!

Most companies are “trying to pretend that such problems don’t exist, even as they double and triple down on recruiting, firing, or other human-resources algorithms, and even as they sell or deploy credit, insurance and advertising algorithms. I know this because I run a company that audits algorithms, and I have encountered this exact issue multiple times.

“Here’s what happens. An analytics person, usually quite senior, asks if I can help audit a company’s algorithm for things like sexism or other kinds of bias that would be illegal in its regulated field. This leads to a great phone call and promises of more and better phone calls. On the second call, they bring on their corporate counsel, who asks me some version of the following question: What if you find a problem with our algorithm that we cannot fix? And what if we someday get sued for that problem and in discovery they figure out that we already knew about it? I never get to the third call. In short, the companies want plausible deniability.” — Cathy O’Neil

Why are companies ignoring this problem? Because they are so scared they can’t fix their algorithm that they’d rather not even know about its biases! All this so that they will be found less liable in the lawsuits that now seem destined to happen. At least Amazon tried to fix it and voluntarily blew the whistle on itself when it failed.

Finally, the good news

The good news is that just as these problems are growing in prominence, new tools and methods to solve them have been developed. Companies like Faculty can solve this in three ways.

  1. Remove bias
  2. Make every decision explainable
  3. Ensure algorithms can be fixed without impacting performance

Rather than trying to remove bias by making a model blind to race and gender (essentially hiding it), adversarial techniques, the same idea that powers GANs, can be used to confront and reduce the bias directly. However, in the tight economy of data science there are always trade-offs: reducing the bias in a model will often have an impact on performance.
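One concrete flavour of this is adversarial debiasing: a predictor learns the task while an adversary tries to recover the protected attribute from the predictor’s score, and the predictor is penalised whenever the adversary succeeds. Here is a minimal PyTorch sketch on synthetic data, an illustration of the general idea rather than anyone’s production implementation:

```python
# Minimal adversarial-debiasing sketch on synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4_000

a = torch.randint(0, 2, (n,)).float()          # protected attribute (synthetic)
proxy = a + 0.3 * torch.randn(n)               # feature that leaks it
skill = torch.randn(n)
X = torch.stack([proxy, skill], dim=1)
y = ((skill - 0.8 * a + 0.3 * torch.randn(n)) > 0).float()   # historically biased labels

predictor = nn.Linear(2, 1)                    # scores candidates
adversary = nn.Linear(1, 1)                    # tries to recover "a" from the score
opt_pred = torch.optim.Adam(predictor.parameters(), lr=0.05)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=0.05)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                      # fairness weight

for step in range(300):
    # 1) Adversary: learn to predict the protected attribute from the score.
    opt_adv.zero_grad()
    score = predictor(X).detach()
    adv_loss = bce(adversary(score).squeeze(1), a)
    adv_loss.backward()
    opt_adv.step()

    # 2) Predictor: learn the task while making the adversary fail.
    opt_pred.zero_grad()
    score = predictor(X)
    task_loss = bce(score.squeeze(1), y)
    fool_loss = bce(adversary(score).squeeze(1), a)
    (task_loss - lam * fool_loss).backward()
    opt_pred.step()

# Compare positive-decision rates across groups after training.
with torch.no_grad():
    decisions = (torch.sigmoid(predictor(X)).squeeze(1) > 0.5).float()
print("decision rate, a=0:", decisions[a == 0].mean().item())
print("decision rate, a=1:", decisions[a == 1].mean().item())
```

The weight lam is the dial: turn it up and the adversary is fooled more thoroughly, usually at some cost to task accuracy.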

Pick your poison. Your model can be less biased, but that often means it will be less performant. Your model can be more performant, but that often means it will be less explainable, more of a black box.

From a technical perspective this can be fixed. Faculty.ai developed methodologies to correct for bias that remain fair and explainable, and then developed ways to see the trade-offs you are making between fairness and performance so you can maximise for both: fairness at minimal cost to performance, the best of both worlds.
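You can make that trade-off visible by sweeping a fairness weight and recording accuracy alongside a disparity measure at each setting. Here is a simple stand-in for the idea, a logistic regression with a demographic-parity penalty on synthetic data (not Faculty’s methodology, just a sketch of how the curve gets traced):

```python
# Sweep a fairness weight and record accuracy alongside a selection-rate ratio.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
a = rng.integers(0, 2, n)                         # protected attribute (kept for the penalty)
proxy = a + 0.3 * rng.normal(size=n)              # feature that leaks it
skill = rng.normal(size=n)
X = np.column_stack([proxy, skill, np.ones(n)])   # intercept included
y = ((skill - 0.8 * a + 0.3 * rng.normal(size=n)) > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(lam, steps=2000, lr=0.3):
    """Logistic regression with a demographic-parity penalty of weight lam."""
    w = np.zeros(3)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                              # task gradient
        gap = p[a == 1].mean() - p[a == 0].mean()             # mean-score gap between groups
        dgap = np.where(a == 1, 1 / (a == 1).sum(), -1 / (a == 0).sum())
        grad += lam * 2 * gap * (X.T @ (dgap * p * (1 - p)))  # penalty gradient
        w -= lr * grad
    decisions = (sigmoid(X @ w) > 0.5).astype(float)
    accuracy = (decisions == y).mean()
    rates = [decisions[a == g].mean() for g in (0, 1)]
    return accuracy, min(rates) / max(rates)

for lam in [0, 1, 5, 20]:
    acc, ratio = train(lam)
    print(f"lam={lam:>2}  accuracy={acc:.3f}  selection-rate ratio={ratio:.3f}")
```

Whoever owns the model can then pick the smallest fairness weight that clears their fairness threshold, instead of arguing about the trade-off in the abstract.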

The good news is that this is not just good ethics, it’s good business. By correcting for bias in hiring algorithms you’re not just reducing the amount of sexism in the world, you’re getting better candidates your competitors are not. By correcting for bias in lending algorithms you are not just reducing the amount of racism in the world, you are getting better customers your competitors are not. Less structural racism & sexism, more revenue & profits, the best of both worlds.

Further Resources

Weapons of Math Destruction by Cathy O’Neil

Cathy O’Neil at Bloomberg and on her blog mathbabe

Rachel Thomas, Director of the USF Center for Applied Data Ethics

Faculty Blog
