
Fairness reviews: identifying essential attributes

In a previous article, we discussed fairness in algorithmic systems, including equity and equality.

When we're checking for fairness in our algorithmic systems (including processes, models and rules), we often ask:

What are the personal characteristics or attributes that, if used, could lead to discrimination?

This article provides a basic framework for identifying and categorising these attributes.

Anti-discrimination laws exist in most jurisdictions, so that's a good place to start.

If none apply to your country (e.g., South Korea, Japan), you could use existing human rights laws, or perhaps one of the international covenants or conventions.

Listen to the audio (human) version of this article - Episode 10 of Algorithm Integrity Matters

The legal landscape

There's no shortage of definitions when it comes to discrimination.

For example, in Australia, there are at least 5 relevant federal laws.

Each state and territory has its own set of rules.

The definitions vary, but there's some overlap.

One example of a definition is detailed in the 2014 guide produced by the Australian Human Rights Commission, "A quick guide to Australian discrimination laws":

The Australian Human Rights Commission Act 1986 specifies "Discrimination on the basis of race, colour, sex, religion, political opinion, national extraction, social origin, age, medical record, criminal record, marital or relationship status, impairment, mental, intellectual or psychiatric disability, physical disability, nationality, sexual orientation, and trade union activity."

That's a lot to take in, and it's just one definition.

Simplifying the approach

To make this easier to work with, we can group these attributes into five main categories:

  1. Age

  2. Race

  3. Sex/gender

  4. Disability

  5. Activity/beliefs

Each of these contains several attributes. Detailing the attributes can help provide context and support our efforts to reduce bias:

  1. Age: age, including age-specific characteristics.

  2. Race: race, colour, descent, nationality, origin (ethnic, national, ethno-religious), immigrant status, physical features.

  3. Sex / gender: gender, sex, gender identity, intersex status, sexual activity, sexual orientation; marital or relationship status, parental status, pregnancy or potential pregnancy, breastfeeding or bottle feeding, family or carer responsibilities.

  4. Disability: physical, intellectual, psychiatric, sensory, neurological or learning disability; physical disfigurement; disorder, illness or disease that affects thought processes, perception of reality, emotions or judgement, or results in disturbed behaviour; presence of organisms causing or capable of causing disease or illness.

  5. Activity / beliefs: religious or political beliefs, activity or affiliation; profession, trade, occupation; industrial or trade union activity.
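
For teams that want to use these categories in automated checks, it can help to hold them in a machine-readable form. The Python sketch below shows one way to do this; the category keys and attribute names are illustrative (loosely mirroring the list above), not an authoritative or complete enumeration, and would need to be adapted to the legislation that applies to you.

```python
# Illustrative only: a machine-readable version of the five categories above.
# The attribute names are examples, not a complete or authoritative list;
# adapt them to the laws that apply in your jurisdiction.
PROTECTED_ATTRIBUTE_CATEGORIES = {
    "age": ["age", "date_of_birth"],
    "race": ["race", "colour", "descent", "nationality", "ethnic_origin",
             "national_origin", "immigrant_status"],
    "sex_gender": ["sex", "gender", "gender_identity", "intersex_status",
                   "sexual_orientation", "marital_status", "relationship_status",
                   "parental_status", "pregnancy", "carer_responsibilities"],
    "disability": ["disability", "impairment", "medical_condition"],
    "activity_beliefs": ["religion", "political_opinion", "trade_union_membership",
                         "occupation"],
}

# A flat set is handy for quick membership checks in later reviews.
ALL_PROTECTED_ATTRIBUTES = {
    attribute
    for attributes in PROTECTED_ATTRIBUTE_CATEGORIES.values()
    for attribute in attributes
}
```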

Additional Considerations

There are a few more attributes to think about. They are less frequently observed, but still need to be considered:

  • Irrelevant medical records
  • Irrelevant criminal records
  • Discrimination based on association with someone who has any of these attributes.

Putting It Into Practice

Consider whether, and how, each of the attributes might be influencing decisions. 

Some key questions to ask:

  1. Are we collecting data on any of these attributes? (A basic column-name check is sketched after this list.)
  2. Could our systems be indirectly using these attributes?
  3. Are we using external data or models that use these attributes?
  4. Do our policies or procedures treat people differently based on these attributes?
  5. Are our staff, including data scientists, aware of these potential biases?
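
For the first two questions, a simple starting point is to scan dataset column names against a list like the one sketched earlier. The Python below is a rough sketch under some assumptions: it uses pandas, made-up column names, and a trimmed copy of the illustrative category mapping. Because it matches only on names, it will miss renamed fields and proxy variables, so it supplements rather than replaces a proper review.

```python
import pandas as pd

# Trimmed copy of the illustrative category mapping from the earlier sketch,
# inlined so this snippet runs on its own.
PROTECTED_ATTRIBUTE_CATEGORIES = {
    "age": ["age", "date_of_birth"],
    "race": ["race", "nationality", "ethnic_origin"],
    "sex_gender": ["sex", "gender", "marital_status", "pregnancy"],
    "disability": ["disability", "impairment"],
    "activity_beliefs": ["religion", "political_opinion", "trade_union"],
}


def flag_protected_columns(df: pd.DataFrame, categories: dict) -> dict:
    """Return {category: [columns]} where a column name suggests a protected attribute.

    Name-based matching only: it will not catch renamed fields or proxy variables,
    and substrings like "age" can produce false positives (e.g. "percentage").
    """
    findings = {}
    for category, attributes in categories.items():
        hits = [
            column
            for column in df.columns
            if any(attribute in column.lower() for attribute in attributes)
        ]
        if hits:
            findings[category] = hits
    return findings


# Example with a small, made-up customer extract.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "age": [34, 58, 41],
    "postcode": ["2000", "3000", "4000"],
    "marital_status": ["single", "married", "married"],
})

print(flag_protected_columns(customers, PROTECTED_ATTRIBUTE_CATEGORIES))
# -> {'age': ['age'], 'sex_gender': ['marital_status']}
```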

Going a bit deeper, we may ask:

  1. Are we inadvertently proxying protected attributes through seemingly neutral data? (A rough proxy screen is sketched after this list.)
  2. Are we prepared to explain our fairness approach to customers, regulators, and other stakeholders?
  3. Can we explain how our algorithmic systems work, end-to-end?
  4. Do we regularly audit our systems?
  5. How quickly can we respond if we detect unfairness in our deployed systems?
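
For the first of these deeper questions, one rough screening technique is to measure how strongly each apparently neutral feature is associated with a protected attribute; Cramér's V is a common choice for categorical data. The sketch below assumes pandas, NumPy and SciPy are available, and uses made-up column names and an arbitrary threshold. Treat it as a screening aid under those assumptions, not a definitive proxy test; numeric features would need binning or a different association measure.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency


def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V: association between two categorical variables (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    if n == 0 or min_dim == 0:
        return 0.0
    return float(np.sqrt(chi2 / (n * min_dim)))


def proxy_screen(df: pd.DataFrame, protected: str, candidates: list,
                 threshold: float = 0.3) -> pd.Series:
    """Flag candidate features whose association with a protected attribute exceeds a threshold.

    The 0.3 default is arbitrary and only for illustration.
    """
    scores = pd.Series(
        {feature: cramers_v(df[feature], df[protected]) for feature in candidates}
    ).sort_values(ascending=False)
    return scores[scores >= threshold]


# Made-up example: postcode happens to track ethnicity perfectly, product_tier does not.
data = pd.DataFrame({
    "ethnicity": ["a", "a", "b", "b", "a", "b", "a", "b"],
    "postcode": ["2000", "2000", "3000", "3000", "2000", "3000", "2000", "3000"],
    "product_tier": ["basic", "plus", "basic", "plus", "plus", "basic", "basic", "plus"],
})

print(proxy_screen(data, protected="ethnicity", candidates=["postcode", "product_tier"]))
# -> postcode    1.0
```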

Regularly revisiting these questions helps ensure our systems remain fair and equitable as they evolve.

 


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organizations. Algorithmic fairness frameworks are not without challenges. They may not capture all forms of bias, especially those deeply embedded in historical data or societal structures. Different fairness metrics can sometimes conflict, making it impossible to satisfy all fairness criteria simultaneously. Ongoing research and vigilance are important.


 
