We could think of "biased" algorithms as complex, rogue entities that have developed a prejudice, and treat them as though they were evil.
But the reality is more mundane: they’re just lazy.
If you give an algorithm a goal like "predict who will repay this loan", it will look for the fastest, most efficient path to that answer. It doesn't care why the answer is right; it just wants to score points.
Unfortunately, the "fastest" path might focus on personal characteristics rather than behaviours.
Giving it access to more data can simply create more shortcuts, especially when that data looks relevant without actually driving the outcome.
Imagine you are trying to predict someone's health. You could do the hard work of analysing their blood work, exercise habits, and genetic history.
Or, you could just look at their shoe size.
Statistically, people with smaller feet (mostly children) have different health profiles to people with larger feet (mostly adults). Shoe size correlates with health, and that correlation may even be accurate enough for a low-stakes guess.
But it’s a terrible way to practice medicine.
Our credit and insurance models do this all the time. They find a "shoe size" variable and latch onto it because it correlates with risk. They’re not trying to discriminate. They’re just taking shortcuts. It’s sometimes easier to find a proxy for risk than it is to find the actual driver of risk.
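To make the shortcut concrete, here is a minimal, fully synthetic sketch in Python. Every feature name and number is invented for illustration (a hypothetical `missed_payments` behaviour and a `proxy` that merely correlates with it), not drawn from any real credit model. The point it demonstrates: a proxy with no causal role still earns the model above-chance accuracy, which is exactly why a lazy optimiser latches onto it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 10_000

# The actual driver of repayment in this toy world: missed payments to date.
missed_payments = rng.poisson(1.5, n)

# A proxy that merely correlates with that behaviour (the "shoe size" variable).
proxy = missed_payments + rng.normal(0.0, 2.0, n)

# Repayment is generated from behaviour only; the proxy plays no causal role.
p_repay = 1.0 / (1.0 + np.exp(missed_payments - 2))
repaid = (rng.random(n) < p_repay).astype(int)

# Give the model nothing but the proxy. It still "scores points":
# the AUC comes out comfortably above the 0.5 of random guessing.
shortcut_model = LogisticRegression().fit(proxy.reshape(-1, 1), repaid)
shortcut_scores = shortcut_model.predict_proba(proxy.reshape(-1, 1))[:, 1]
print(f"AUC using only the proxy: {roc_auc_score(repaid, shortcut_scores):.2f}")
```

The shortcut "works" in the sense that it beats guessing, and that is all the training objective asks for.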
So, how should we frame all of this?
Perhaps we stop treating "fairness" purely as the right thing to do (even though it is).
Instead, we could treat it as a competence issue.
If a model is rejecting customers based on a "lazy" correlation rather than actual behaviour, it's making poor decisions.
A well-designed model that minimises bias produces more accurate risk scores.
Improving fairness leads to better models overall.
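As a rough illustration of that claim, the sketch below reuses the same invented toy setup and compares two one-feature models on held-out applicants: one trained on the lazy proxy, one on the actual behaviour. The behaviour-based model discriminates risk better, which is the competence argument in miniature. All names and numbers remain hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000

# Same invented toy world: behaviour drives repayment, the proxy only correlates with it.
missed_payments = rng.poisson(1.5, n)
proxy = missed_payments + rng.normal(0.0, 2.0, n)
repaid = (rng.random(n) < 1.0 / (1.0 + np.exp(missed_payments - 2))).astype(int)

def held_out_auc(feature: np.ndarray) -> float:
    """Fit a one-feature logistic regression and score it on unseen applicants."""
    X = feature.reshape(-1, 1).astype(float)
    X_train, X_test, y_train, y_test = train_test_split(
        X, repaid, test_size=0.3, random_state=1
    )
    model = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# The lazy proxy does work, but the actual driver of risk works better.
print(f"Held-out AUC, proxy only:     {held_out_auc(proxy):.2f}")
print(f"Held-out AUC, behaviour only: {held_out_auc(missed_payments):.2f}")
```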
Disclaimer: The information in this article is not legal advice and may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, and may not apply to other contexts or other types of organisations.