In a previous article, we discussed fairness in algorithmic systems, along with the related concepts of equity and equality.
When we're checking for fairness in our algorithmic systems (including processes, models, and rules), we often ask:
What are the personal characteristics or attributes that, if used, could lead to discrimination?
This article provides a basic framework for identifying and categorising these attributes.
Anti-discrimination laws exist in most jurisdictions, so that's a good place to start.
If your jurisdiction has no comprehensive anti-discrimination law (South Korea and Japan are commonly cited examples), you could draw on existing human rights laws, or perhaps one of the international covenants or conventions.
There's no shortage of definitions when it comes to discrimination.
For example, in Australia, there are at least 5 relevant federal laws.
Each state and territory has its own set of rules.
The definitions vary, but there's some overlap.
One example of a definition is detailed in the 2014 guide produced by the Australian Human Rights Commission, "A quick guide to Australian discrimination laws":
The Australian Human Rights Commission Act 1986 specifies "Discrimination on the basis of race, colour, sex, religion, political opinion, national extraction, social origin, age, medical record, criminal record, marital or relationship status, impairment, mental, intellectual or psychiatric disability, physical disability, nationality, sexual orientation, and trade union activity."
That's a lot to take in, and it's just one definition.
To make this easier to work with, we can group these attributes into five main categories:
Age
Race
Sex/gender
Disability
Activity/beliefs
Each of these categories contains several attributes. Detailing them can help provide context and support our efforts to reduce bias:
| # | Category | Attributes |
|---|----------|------------|
| 1 | Age | Age, including age-specific characteristics. |
| 2 | Race | Race, colour, descent, nationality, origin (ethnic, national, ethno-religious), immigrant status, physical features. |
| 3 | Sex / gender | Gender, sex, gender identity, intersex status, sexual activity, sexual orientation. Marital or relationship status, parental status, pregnancy or potential pregnancy, breastfeeding or bottle feeding, family or carer responsibilities. |
| 4 | Disability | Physical, intellectual, psychiatric, sensory, neurological or learning disability. Physical disfigurement, disorder, illness or disease that affects thought processes, perception of reality, emotions or judgement, or results in disturbed behaviour. Presence of organisms causing or capable of causing disease or illness. |
| 5 | Activity / beliefs | Religious/Political: beliefs, activity or affiliation. Profession, trade, occupation; industrial or trade union activity. |
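As a practical starting point, the categories and attributes above can be captured in a simple data structure that fairness checks can reference. The sketch below is illustrative only: the column names and the `flag_protected_columns` helper are assumptions made for this example, not part of any standard library, and real data dictionaries will use different names.

```python
# Illustrative mapping of protected-attribute categories to example column names.
# The names are examples only; adapt them to your own data dictionary.
PROTECTED_ATTRIBUTES = {
    "age": ["age", "date_of_birth"],
    "race": ["race", "ethnicity", "nationality", "country_of_birth"],
    "sex_gender": ["sex", "gender", "marital_status", "pregnancy_status"],
    "disability": ["disability", "medical_condition"],
    "activity_beliefs": ["religion", "political_affiliation", "union_membership"],
}

def flag_protected_columns(columns):
    """Return dataset columns that match a known protected attribute, grouped by category."""
    flagged = {}
    for category, names in PROTECTED_ATTRIBUTES.items():
        matches = [col for col in columns if col.lower() in names]
        if matches:
            flagged[category] = matches
    return flagged

# Example: screen the columns of a hypothetical loan-application dataset.
print(flag_protected_columns(["income", "age", "postcode", "gender"]))
# {'age': ['age'], 'sex_gender': ['gender']}
```

In practice, a mapping like this would be maintained alongside your data dictionary, so new datasets can be screened for protected attributes before any modelling begins.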
There are a few more to think about. They are less frequently observed, but still need to be considered: medical record, criminal record, and social origin, for example, all appear in the Act's definition above without fitting neatly into the five categories.
Consider whether, and how, each of these attributes might be influencing decisions.
Some key questions to ask: Is the attribute collected? Is it used, directly or indirectly, as an input to the system? Could other inputs act as a proxy for it?
Going a bit deeper, we may ask: Do outcomes differ materially across groups defined by the attribute, and if they do, can the difference be justified? A simple starting point for that last question is sketched below.
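One way to begin answering these questions is to compare outcomes across groups defined by a protected attribute. The sketch below is a minimal illustration, not a complete fairness audit: it assumes a pandas DataFrame with hypothetical `gender` and `approved` columns, and the `approval_rate_gap` helper is ours, not a library function.

```python
import pandas as pd

def approval_rate_gap(df, group_col, outcome_col):
    """Compare the rate of a binary outcome across groups defined by group_col.

    Returns the per-group rates and the gap between the highest and lowest rate.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates, rates.max() - rates.min()

# Hypothetical loan-application data (illustrative only).
applications = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "approved": [1, 0, 1, 1, 1, 0],
})

rates, gap = approval_rate_gap(applications, "gender", "approved")
print(rates)  # approval rate per group
print(gap)    # ~0.67 in this toy example
```

A large gap on its own doesn't prove discrimination, but it is a prompt to investigate why the difference exists, and whether the attribute (or a proxy for it) is driving the decision.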
Regularly revisiting these questions can help ensure our systems remain fair and equitable.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations. Algorithmic fairness frameworks are not without challenges. They may not capture all forms of bias, especially those deeply embedded in historical data or societal structures. Fairness metrics can sometimes conflict, making it impossible to satisfy all fairness criteria simultaneously.