
Over the past couple of years, we’ve become much more aware of the ways in which bias can negatively influence AI algorithms, gender bias among them. Despite companies’ best efforts to prevent bias in their models, however, bias is always present in the datasets those models learn from.
Just this week, Apple and Goldman Sachs came under fire for the way in which the algorithm responsible for deciding customers’ credit lines seemed to illegally discriminate against women.
The first to sound the alarm on Twitter was prominent software developer David Heinemeier Hansson. On November 8th, Hansson tweeted that his wife, despite having a better credit score, was offered a lower credit line than he was. Apple co-founder Steve Wozniak shared a similar experience: despite similar credit histories and shared bank accounts, Wozniak received a credit line 10x larger than his wife’s.
In response to mounting criticism, Goldman Sachs stated, “In all cases, we have not and will not make decisions based on factors like gender.” It is, in fact, extremely unlikely that gender is an explicit input to Goldman Sachs’ algorithm. However, as we have seen, machine learning algorithms excel at picking up on latent variables: it is entirely possible that, through one included variable or a combination of them, the model can effectively infer gender, as the sketch below illustrates.
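To see how that can happen, here is a minimal sketch using synthetic data and hypothetical features (occupation category, part-time history, and spending mix are invented for illustration; nothing here reflects the actual Apple Card model). A “gender-blind” probe model trained only on these correlated features recovers gender far better than chance, which is exactly the proxy effect at issue.

```python
# Minimal sketch (synthetic data, hypothetical features): even when gender is
# excluded as an input, correlated features can let a model recover it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)  # 0 = male, 1 = female (never fed to the model)

# Hypothetical features that happen to correlate with gender in this toy data.
occupation = rng.normal(loc=gender * 1.0, scale=1.0, size=n)
part_time = rng.normal(loc=gender * 0.8, scale=1.0, size=n)
spend_mix = rng.normal(loc=gender * 0.6, scale=1.0, size=n)
X = np.column_stack([occupation, part_time, spend_mix])

X_tr, X_te, g_tr, g_te = train_test_split(X, gender, random_state=0)

# A "gender-blind" model trained to predict gender from the other features
# does much better than chance -- the proxy signal is already in the data.
probe = LogisticRegression().fit(X_tr, g_tr)
auc = roc_auc_score(g_te, probe.predict_proba(X_te)[:, 1])
print(f"AUC for inferring gender from 'neutral' features: {auc:.2f}")
```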

Or perhaps historical bias is influencing the algorithm’s decisions. As recently as the 1970s, women in the US could be denied credit cards unless a man cosigned, which likely limits the historical data available. Additionally, women’s wages were frequently discounted by up to 50% before 1974, affecting their credit limits. As a result, it’s possible that employment data could negatively influence an applicant’s credit line if they work in an industry traditionally dominated by women, such as nursing or early education.
Historical bias is a fundamental issue and persists even with perfect sampling and feature selection. Professor Rachel Thomas, on how to mitigate issues caused by historical biases, shared that we must “work closely with domain experts and with people impacted by the software.” If Goldman Sachs’ algorithm is influenced by historical bias and unfairly issues women lower credit lines, then intervention is necessary to remedy the system; a simple outcome audit, sketched below, is one place such a review can start. That kind of intervention, backed by clear regulation, is one way to oversee AI software and protect civil rights. It is up to us to raise concerns about AI ethics and defend those rights.
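As a concrete illustration of what such an audit might look like, here is a minimal sketch over a tiny synthetic table (the column names and numbers are invented for illustration, not drawn from any real lender): it compares the offered limit per point of credit score across groups, the kind of gap Hansson and Wozniak described.

```python
# Minimal audit sketch (synthetic data, hypothetical column names): compare a
# model's credit-limit decisions across groups even though gender was never an input.
import pandas as pd

# Hypothetical audit frame: model outputs joined with demographics collected
# separately for testing purposes.
audit = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "credit_score": [780, 760, 710, 705, 690, 695, 820, 815],
    "offered_limit": [8000, 15000, 4000, 9000, 3500, 8000, 12000, 25000],
})

# Group-level disparity: average offered limit per point of credit score.
audit["limit_per_score_point"] = audit["offered_limit"] / audit["credit_score"]
print(audit.groupby("gender")["limit_per_score_point"].mean())
# A large, persistent gap for comparable applicants is a signal that the
# pipeline needs review with domain experts and affected users.
```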
Bias in AI algorithms is a popular topic within the data science community. Join us in Boston for ODSC East 2020 from April 13–17 and hear from industry experts on ways to prevent, mitigate, and solve problems relating to bias.
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.