
Equal vs. Equitable: Algorithmic Fairness

Fairness in algorithmic systems is a multi-faceted and still-developing topic.

As technology advances and algorithms play a greater role in decision-making, the need for fair treatment is increasing in importance.

However, there is currently no definitive guidance or universally accepted framework for achieving algorithmic fairness. Researchers, ethicists, and industry professionals are actively exploring this, grappling with the challenges of balancing technical efficacy and business realities with ethical and moral considerations.

This article aims to shed light on the distinctions between equal and equitable treatment in algorithmic systems, while acknowledging that our understanding of fairness is still developing and subject to ongoing debate.

Listen to the audio (human) version of this article - Episode 5 of Algorithm Integrity Matters

Equal vs Equitable

In a previous article, we explored ten key aspects to consider when scoping an algorithm integrity audit.

One aspect was fairness, with this in the description: "...The design ensures equitable treatment..."

This raises an important question. Shouldn't we aim for equal, rather than equitable treatment?

First, let's distinguish between fairness and algorithmic fairness.

Fairness vs Algorithmic Fairness

When we're conducting an algorithm integrity audit, we examine multiple aspects, including algorithmic fairness and ethics. These are distinct but related concepts:

  • Algorithmic fairness focuses on whether the algorithm itself is designed to ensure equitable treatment within its defined parameters.
  • Ethics, on the other hand, considers the broader implications of the algorithm's use, including the morality of the task it's assisting with.

While interconnected, we often evaluate these aspects separately to ensure thorough analysis. Algorithmic fairness looks at how the algorithm processes inputs and produces outputs without bias. The ethical evaluation considers whether the algorithm's purpose and overall impact align with moral and societal values.

A fair algorithm should not discriminate or produce biased results against any specific group based on protected attributes. But it will still consider relevant circumstances.

Relevant Circumstances and Protected Attributes

A range of relevant circumstances feeds into the design of the algorithm. These are what the algorithm uses to make an eligibility decision or produce a calculated result.

We’ll use the term “features” – commonly used to describe input variables.

You may know of, or hear about, “feature engineering” – this is simply the process of selecting, transforming, and creating relevant features for a machine learning model. It is a crucial step in developing effective algorithms.

A relevant circumstance is represented as one or more features within the algorithm.

Examples of features typically used include:

  • Insurance: Number of claim-free years, number of other policies held
  • Lending: Income, employment status
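
To make the idea concrete, here is a minimal, illustrative sketch of feature engineering in Python (pandas). The input columns and derived features (e.g., `last_claim_year`, `claim_free_years`) are hypothetical examples for this article, not a prescribed design.

```python
import pandas as pd

# Hypothetical policyholder records - column names are illustrative only.
policies = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "last_claim_year": [2018, None, 2022],  # None = no claims on record
    "policies_held": [1, 3, 2],
})

current_year = 2024

# Feature engineering: derive model inputs from the raw records.
features = pd.DataFrame({
    "policy_id": policies["policy_id"],
    # Years since the last claim; policyholders with no recorded claims
    # get the full history length (capped at 10 years for illustration).
    "claim_free_years": (current_year - policies["last_claim_year"])
                        .fillna(10)
                        .clip(upper=10),
    # Whether the customer holds more than one policy with the insurer.
    "multi_policy": (policies["policies_held"] > 1).astype(int),
})

print(features)
```

The point is simply that model inputs are usually derived from raw records, and each derivation is a design choice that can later be scrutinised.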

Now, without these, there would be no need for the algorithm – everyone just gets the same result. We need these differentiators for the algorithm to exist in the first place.

However, some of these may be challenging. Consider these examples:

  • Premiums: Age, vehicle type
  • Claims: Age, postcode
  • Lending: Income, credit score, and employment status
  • AML: Individual names
  • CTF: Source and destination of funds, links to high-risk jurisdictions
  • FATCA: U.S. indicia (U.S. place of birth, U.S. address or phone number).

Before we detail the issues with these examples, some background.

It's important to note that protected attributes, and proxies for protected attributes, may be included as "relevant circumstances", depending on the context. For instance, in insurance, age is sometimes considered a relevant factor for risk assessment; but using it must be carefully justified and documented.

There needs to be a clear, justifiable reason, with safeguards in place to prevent unfair bias.

There is authoritative guidance for insurance companies in Australia from the Australian Human Rights Commission, in partnership with the Actuaries Institute (the Institute of Actuaries of Australia) [Footnote 1].

The ethical implications of using "relevant circumstances" would be evaluated separately as part of a broader ethical framework.

Let's revisit the examples from earlier:

  • Premiums: Age is a protected attribute. Vehicle type can be a proxy (e.g., for disability status).
  • Claims: Age is a protected attribute. Postcode could be a proxy (e.g., for race).
  • Lending: Credit scores are sometimes considered proxies for race, gender, age and disability status.
  • AML: Names can act as a proxy for race, and skew algorithm outputs.
  • CTF: Source/destination of funds and links to particular jurisdictions can act as proxies for nationality or ethnicity.
  • FATCA: U.S. indicia are relevant for compliance, but nationality could be a proxy for race.

So, should they be used at all?

There are arguments for and against, but that’s part of the ethical debate.

Once that is settled, the decision flows to the algorithmic design.

Back to Equal vs. Equitable

Given this, if everyone is treated identically, the algorithm becomes ineffective. Producing the same result for everyone, regardless of their circumstances, defeats the purpose of having an algorithm at all.

Equitable treatment involves considering relevant circumstances, but, importantly, without factoring in protected attributes.

Unless a specific exemption applies, the algorithm should avoid:

  • Using protected attributes (such as race) as direct inputs
  • Using proxies (like postcodes/zip codes) for protected attributes
  • Perpetuating historical biases present in past data.
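
As one illustration of the first two points, a development team might maintain an explicit exclusion list and screen candidate features against it before any model is trained. The sketch below is a minimal example using assumed attribute and proxy names; a real safeguard would also need proxy detection and bias testing, not just a name check.

```python
# Illustrative guard: reject features that are protected attributes or
# known proxies before they reach the model. Names are hypothetical and
# context-dependent.
PROTECTED_ATTRIBUTES = {"race", "gender", "age", "disability_status"}
KNOWN_PROXIES = {"postcode", "first_name", "surname"}

def screen_features(candidate_features, approved_exemptions=()):
    """Return features cleared for modelling; raise on disallowed ones."""
    disallowed = (PROTECTED_ATTRIBUTES | KNOWN_PROXIES) - set(approved_exemptions)
    flagged = [f for f in candidate_features if f in disallowed]
    if flagged:
        raise ValueError(f"Features require ethical review or removal: {flagged}")
    return list(candidate_features)

# Example: 'age' passes only because a documented exemption is supplied
# (e.g., actuarially justified, documented, and ethically cleared).
cleared = screen_features(
    ["claim_free_years", "employment_status", "age"],
    approved_exemptions=["age"],
)
print(cleared)
```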

Real-World Implementation

In practice, implementing fair algorithms requires several activities.

For instance, a bank implementing a loan approval system might:

  • compile a diverse team, including ethicists, data scientists, domain experts and others
  • evaluate ethical considerations in selecting eligibility criteria and scoring features
  • deploy a bias-aware (trained) team to develop the algorithm
  • make sure the algorithm does not include features without ethical clearance
  • implement 'explainable' techniques to understand how decisions are made
  • conduct an impact assessment
  • regularly review the system's decisions for bias (a sketch of one such check follows this list)
  • provide human oversight for borderline cases
  • commission regular independent reviews.
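
As a sketch of the "regularly review" step, the snippet below computes a simple disparate impact ratio (lowest group approval rate divided by the highest) over hypothetical logged decisions. The data, group labels, and the 0.8 "four-fifths" threshold are assumptions for illustration; a real review would use several fairness metrics and proper statistical testing.

```python
import pandas as pd

# Hypothetical log of decisions, with a protected attribute recorded
# for monitoring purposes only (not used as a model input).
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 0, 0, 1, 0, 1],
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
})

approval_rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group approval rate vs highest.
di_ratio = approval_rates.min() / approval_rates.max()
print(approval_rates)
print(f"Disparate impact ratio: {di_ratio:.2f}")

# The 0.8 threshold ("four-fifths rule") is a common rule of thumb,
# not a legal or universal standard.
if di_ratio < 0.8:
    print("Potential adverse impact - flag for human review.")
```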

Bias awareness

We spoke earlier about feature engineering. The goal of feature engineering is to improve the model's performance by providing it with the most informative and relevant inputs.

However, in the context of algorithmic fairness, feature engineering must be done carefully to avoid introducing or amplifying biases.

So, the algorithm developers, supported by the broader (diverse) team, need to consider:

  • Relevance: Does the feature genuinely contribute to the result?
  • Fairness: Could the feature lead to unfair discrimination against protected groups?
  • Proxy variables: Might seemingly neutral features act as proxies for protected attributes? (a rough check is sketched after this list)
  • Historical biases: Do the features reflect historical inequalities that shouldn't be perpetuated?
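
For the proxy-variable question, one rough check is to measure how strongly each candidate feature is associated with a protected attribute that is held out for testing purposes only. The sketch below uses a simple correlation on hypothetical data; in practice, more robust association measures, thresholds, and domain judgment are needed.

```python
import pandas as pd

# Hypothetical assessment data. The protected attribute appears here only
# to test candidate features for proxy behaviour, not as a model input.
data = pd.DataFrame({
    "postcode_risk_band": [1, 1, 2, 3, 3, 3, 2, 1],
    "claim_free_years":   [5, 2, 7, 1, 0, 3, 6, 4],
    "protected_group":    [0, 0, 0, 1, 1, 1, 1, 0],
})

candidate_features = ["postcode_risk_band", "claim_free_years"]

# Rough proxy check: correlation between each feature and the protected
# attribute. High absolute correlation warrants closer investigation.
for feature in candidate_features:
    corr = data[feature].corr(data["protected_group"])
    flag = "investigate as possible proxy" if abs(corr) > 0.5 else "low association"
    print(f"{feature}: correlation {corr:+.2f} -> {flag}")
```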

To do this effectively, algorithm developers need to be trained to identify bias.

Concepts matter

While some may argue that terminology (e.g., equal vs equitable) is insignificant, it is essential to recognize that the underlying concepts matter.

Regardless of the terms used, we need to acknowledge that we need to both: (1) consider relevant circumstances, and (2) avoid protected attributes. There’s no algorithm without the former; there’s no fairness without the latter.

While we separate fairness and ethics for clarity, in practice, they're often intertwined. Ethical considerations inform what we deem 'fair', and fairness implementations have ethical implications.

For example, deciding whether to use credit scores for insurance is both an ethical and a fairness issue. Separating them helps with analysis, but we also consider how they interact in practice.

Continuous improvement

Algorithmic fairness is an ongoing process, not a one-time achievement.

There is no perfect answer. We face tensions and trade-offs, which means that our decisions will vary. If anyone claims they have absolute answers, approach that claim with significant skepticism.

As our understanding of fairness evolves, we must continually refine our approaches.

Our goal is to make sure - to the best of our ability - that our algorithms operate fairly and ethically, and that we continuously strive to make them better.

Footnote(s)


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organizations. Algorithmic fairness frameworks are not without challenges. They may not capture all forms of bias, especially those deeply embedded in historical data or societal structures. Different fairness metrics can sometimes conflict, making it impossible to satisfy all fairness criteria simultaneously. Ongoing research and vigilance are important.


 

 
