Fairness in algorithmic systems is a multi-faceted and evolving topic.
As technology advances and algorithms play a greater role in decision-making, the need for fair treatment becomes increasingly important.
However, there is currently no definitive guidance or universally accepted framework for achieving algorithmic fairness. Researchers, ethicists, and industry professionals are actively exploring this, grappling with the challenges of balancing technical efficacy and business realities with ethical and moral considerations.
This article aims to shed light on the distinctions between equal and equitable treatment in algorithmic systems, while acknowledging that our understanding of fairness is still developing and subject to ongoing debate.
In a previous article, we explored ten key aspects to consider when scoping an algorithm integrity audit.
One aspect was fairness, described in part as follows: "...The design ensures equitable treatment..."
This raises an important question: shouldn't we aim for equal, rather than equitable, treatment?
First, let's distinguish algorithmic fairness from broader ethical considerations.
When we're conducting an algorithm integrity audit, we examine multiple aspects, including algorithmic fairness and ethics. These are distinct but related concepts:
Algorithmic fairness looks at how the algorithm processes inputs and produces outputs without bias. The ethical evaluation considers whether the algorithm's purpose and overall impact align with moral and societal values. While interconnected, we often evaluate these aspects separately to ensure thorough analysis.
A fair algorithm should not discriminate or produce biased results against any specific group based on protected attributes. But it will still consider relevant circumstances.
A range of relevant circumstances feeds into the design of the algorithm. These are what the algorithm uses to make an eligibility decision or produce a calculated result.
We’ll use the term “features” – commonly used to describe input variables.
You may know, or hear, about "feature engineering" – this is simply the process of selecting, transforming, and creating relevant features for a machine learning model. It is a crucial step in developing effective algorithms.
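To make this concrete, here is a minimal sketch in Python (pandas). The column names (income, loan_amount, existing_debt) and the transformations are hypothetical, chosen only to illustrate the three activities – not a recommended feature set.

```python
import numpy as np
import pandas as pd

def engineer_features(applications: pd.DataFrame) -> pd.DataFrame:
    """Illustrative feature engineering: select, transform, and create features."""
    features = pd.DataFrame(index=applications.index)

    # Selecting: keep inputs judged relevant to the decision.
    features["income"] = applications["income"]
    features["loan_amount"] = applications["loan_amount"]

    # Transforming: reshape a raw value into a more useful form.
    features["log_income"] = np.log1p(applications["income"])

    # Creating: derive a new feature from existing ones.
    features["debt_to_income"] = (
        applications["existing_debt"] / applications["income"].replace(0, np.nan)
    )

    return features
```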
A relevant circumstance is represented as one or more features within the algorithm.
Examples of features typically used include:
Now, without these, there would be no need for the algorithm – everyone just gets the same result. We need these differentiators for the algorithm to exist in the first place.
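As a toy illustration (with made-up features and thresholds), compare a rule that uses differentiating features with one that does not:

```python
def eligible(income: float, debt_to_income: float) -> bool:
    # Considers relevant circumstances (hypothetical thresholds).
    return income >= 30_000 and debt_to_income <= 0.4

def eligible_without_features(_applicant) -> bool:
    # No differentiating inputs: every applicant gets the same answer,
    # so there is nothing for an algorithm to decide.
    return True
```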
However, some of these may be challenging. Consider these examples:
Some background before we detail the issues with these examples.
It's important to note that protected attributes, and proxies for protected attributes, may be included as "relevant circumstances", depending on the context. For instance, in insurance, age is sometimes considered a relevant factor for risk assessment, but its use must be carefully justified and documented.
There needs to be a clear, justifiable reason, with safeguards in place to prevent unfair bias.
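One way to make that justification tangible is to record it alongside the feature itself. The structure and field names below are assumptions for illustration, not a prescribed governance format.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureJustification:
    feature: str
    is_protected_attribute: bool
    justification: str                               # the clear, documented reason for use
    safeguards: list = field(default_factory=list)   # controls in place to prevent unfair bias
    approved_by: str = ""
    review_due: str = ""                             # revisit the justification periodically

# Hypothetical example record for age used in insurance risk assessment.
age_for_risk_rating = FeatureJustification(
    feature="age",
    is_protected_attribute=True,
    justification="Actuarially supported risk factor for this insurance product",
    safeguards=["regular bias testing", "documented pricing bounds", "human review of edge cases"],
    approved_by="pricing and ethics committee",
    review_due="annual",
)
```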
There is authoritative guidance for insurance companies in Australia from the Australian Human Rights Commission, in partnership with the Actuaries Institute (formerly the Institute of Actuaries of Australia). [Footnote 1]
The ethical implications of using "relevant circumstances" would be evaluated separately as part of a broader ethical framework.
Let's revisit the examples from earlier:
So, should they be used at all?
There are arguments for and against, but that's part of the ethical debate.
Once that debate is settled, the decision flows through to the algorithm's design.
Given this, if all inputs are treated identically, the algorithm becomes ineffective. Applying the same rules to everyone, regardless of their circumstances, would yield identical results, which defeats the purpose of having an algorithm at all.
Equitable treatment involves considering relevant circumstances, but, importantly, without factoring in protected attributes.
Unless a specific exemption applies, the algorithm should avoid using protected attributes directly, as well as proxies that effectively stand in for them.
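A minimal sketch of how this rule might be enforced in code, assuming an illustrative list of protected attributes and a simple exemption mechanism:

```python
PROTECTED_ATTRIBUTES = {"age", "sex", "race", "religion", "disability"}

def screen_features(candidate_features, exemptions=()):
    """Reject protected attributes unless a documented exemption covers them."""
    disallowed = (set(candidate_features) & PROTECTED_ATTRIBUTES) - set(exemptions)
    if disallowed:
        raise ValueError(
            f"Protected attributes used without a documented exemption: {sorted(disallowed)}"
        )
    # Note: this only catches direct use; proxies need separate analysis.
    return set(candidate_features)

# Example: age passes only because an exemption has been documented.
# screen_features({"income", "age", "loan_amount"}, exemptions={"age"})
```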
In practice, implementing fair algorithms requires several activities.
For instance, a bank implementing a loan approval system might exclude protected attributes (and obvious proxies) from its feature set, test approval outcomes across customer groups, and document the justification for each feature it uses.
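For the outcome-testing part, one common (though not definitive) check is to compare approval rates across groups defined by a protected attribute that is held out for testing only. A sketch, assuming a pandas DataFrame of decisions with an "approved" column and a group column:

```python
import pandas as pd

def selection_rate_ratio(decisions: pd.DataFrame, group_col: str, outcome_col: str = "approved") -> float:
    """Ratio of the lowest to the highest approval rate across groups (1.0 = equal rates)."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical usage: a ratio well below 1.0 (the 0.8 "four-fifths" rule of
# thumb is often cited) would prompt investigation, not an automatic verdict.
# ratio = selection_rate_ratio(loan_decisions, group_col="age_band")
```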
We spoke earlier about feature engineering. The goal of feature engineering is to improve the model's performance by providing it with the most informative and relevant inputs.
However, in the context of algorithmic fairness, feature engineering must be done carefully to avoid introducing or amplifying biases.
So, the algorithm developers, supported by the broader (diverse) team, need to consider whether each feature is genuinely relevant, and whether any feature (or combination of features) acts as a proxy for a protected attribute.
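One simple, illustrative check developers can run during feature engineering is to look for candidate features that correlate strongly with a protected attribute held out for testing. High correlation doesn't prove a feature is a proxy, but it flags it for closer review. The threshold and the assumption of numeric encodings are choices made for this sketch only.

```python
import pandas as pd

def flag_potential_proxies(features: pd.DataFrame, protected: pd.Series, threshold: float = 0.5) -> list:
    """Return candidate features whose correlation with a protected attribute exceeds the threshold."""
    flagged = []
    for column in features.columns:
        corr = features[column].corr(protected)  # assumes numeric encodings
        if pd.notna(corr) and abs(corr) >= threshold:
            flagged.append(column)
    return flagged
```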
To do this effectively, algorithm developers need to be trained to identify bias.
While some may argue that terminology (e.g., equal vs equitable) is insignificant, it is essential to recognise that the underlying concepts matter.
Regardless of the terms used, we need to do both: (1) consider relevant circumstances, and (2) avoid protected attributes. There's no algorithm without the former; there's no fairness without the latter.
While we separate fairness and ethics for clarity, in practice, they're often intertwined. Ethical considerations inform what we deem 'fair', and fairness implementations have ethical implications.
For example, deciding whether to use credit scores for insurance is both an ethical and a fairness issue. Separating them helps with analysis, but we also consider how they interact in practice.
Algorithmic fairness is an ongoing process, not a one-time achievement.
There is no perfect answer. We face tensions and trade-offs, which means that our decisions will vary. If anyone claims they have absolute answers, approach that claim with significant skepticism.
As our understanding of fairness evolves, we must continually refine our approaches.
Our goal is to make sure – to the best of our ability – that our algorithms operate fairly and ethically, and that we continuously strive to make them better.
Footnote(s)
1. Australian Human Rights Commission (2022). Guidance Resource: Artificial intelligence and discrimination in insurance pricing and underwriting.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations. Algorithmic fairness frameworks are not without challenges. They may not capture all forms of bias, especially those deeply embedded in historical data or societal structures. Fairness metrics can sometimes conflict, making it impossible to satisfy all fairness criteria simultaneously.