AI Definitions: Impact Matters More Than Labels

Written by Yusuf Moolla | 04 Jun 2025
TL;DR
• Narrowly applied AI definitions miss the point.
• Robodebt showed us that a "simple" system can be harmful.
• Focus on impact/outcomes, rather than labels/mechanisms.


Earlier this year, an Australian regulator released its AI Transparency Statement. A positive step.

However, it excludes rules-based models and machine learning, stating that those don’t meet the definition of AI.

This type of exclusion is not uncommon and might even appear to be prudent. In practice, it's not the best approach.

This article explains why this exclusion is not ideal, considering:

  1. The definition and associated guidance material
  2. An example of a rules-based system that caused real problems
  3. A focus on impact, rather than mechanisms


The definition of AI used

The regulator adopted the Australian Digital Transformation Agency’s (DTA) definition of AI.

The DTA, in turn, adopted the OECD definition:

A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Because this definition is “short and concise”, the OECD issued supplementary guidance in March 2024. It includes this: “AI models include … statistical models and various kinds of input-output functions (such as decision trees…).”
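
To make the guidance concrete, here is a minimal sketch (in Python) of a hand-written decision tree. The function name and thresholds are illustrative, not taken from any real system, but the behaviour is exactly what the definition describes: the system infers, from the input it receives, how to generate a decision, with no machine learning involved.

    def credit_limit_decision(annual_income: float, missed_payments: int) -> str:
        # A rules-based input-output function: no training data, no statistics,
        # just hand-written thresholds that turn inputs into a decision.
        if missed_payments > 2:
            return "decline"
        if annual_income < 40_000:
            return "refer for manual review"
        return "approve"

    print(credit_limit_decision(annual_income=35_000, missed_payments=0))
    # -> refer for manual review

Under the supplementary guidance, this kind of input-output function sits within scope, however "simple" it looks.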

So, the narrow interpretation of the definition is (possibly) incorrect.

Perhaps that’s just semantics. Let’s consider the practical implications.


An example: Robodebt

Australia’s Robodebt scandal involved a rules-based system for tax and welfare data matching: annual income reported to the tax office was averaged across 26 fortnights and compared against the income that welfare recipients had reported each fortnight. The result:

  • Almost half a million incorrect debt notices issued to vulnerable citizens, including disability pensioners, people with mental illness, and abuse victims.
  • Reports linking the scheme to tragic outcomes, including suicide.
  • A settlement of AUD 1.8 billion.

The system may not meet the regulator's narrowly applied definition of “AI,” yet its impacts were catastrophic and costly. If we were classifying a Robodebt-type scheme today, would it be prudent to exclude it?
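
To see how little machinery was involved, here is a stylised sketch (in Python) of the income-averaging rule at the heart of the scheme. The figures are illustrative and the real system was far larger, but the flawed core assumption, that annual income was earned evenly across the year, is reproduced faithfully.

    FORTNIGHTS_PER_YEAR = 26

    def flag_debts(ato_annual_income: float, reported_fortnightly: list[float]) -> list[bool]:
        # Average annual tax-office income evenly across 26 fortnights, then
        # flag every fortnight where the average exceeds what was reported.
        averaged = ato_annual_income / FORTNIGHTS_PER_YEAR
        return [averaged > reported for reported in reported_fortnightly]

    # A casual worker earns $26,000 in the first half of the year, then correctly
    # reports $0 income while on benefits in the second half.
    reports = [2_000.0] * 13 + [0.0] * 13
    print(sum(flag_debts(26_000, reports)))  # -> 13: every benefit fortnight wrongly flagged

Nothing here is trained or "learned". A single hand-written rule, applied at scale without human review, was enough to cause mass harm.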


Governance based on Impact, not Mechanisms

The transparency statement is a step forward, but the narrow AI definition risks repeating past mistakes.

For banks and insurers, the goal isn’t mere compliance with regulatory checklists; it’s about building systems that are fair, accountable, and resilient, regardless of their technical labels. By governing all impactful automation (AI or not) with equal rigour, we can avoid becoming the next Robodebt headline.

Instead of "does this fall under the definition of AI", we could ask:

  • Does this system influence decisions affecting customers, employees, or other stakeholders?
  • Could errors or biases in this system cause material harm?
  • Is there adequate human oversight and recourse?

Shifting the conversation from “Is this AI?” to “Does this matter?” is a better approach, one that prioritises outcomes over labels/mechanisms.
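
As a sketch of what that could look like in practice, here is a minimal triage function (in Python). The field names and tiers are illustrative, not a prescribed standard; the point is that the routing logic asks the three questions above and never asks whether the system is "AI".

    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        influences_stakeholder_decisions: bool  # customers, employees, others?
        material_harm_possible: bool            # could errors or biases cause harm?
        has_oversight_and_recourse: bool        # human review and appeal paths?

    def governance_tier(profile: SystemProfile) -> str:
        # Route by impact; the label ("AI" or otherwise) never appears.
        if not profile.influences_stakeholder_decisions:
            return "standard IT controls"
        if profile.material_harm_possible and not profile.has_oversight_and_recourse:
            return "highest scrutiny: add oversight and recourse before go-live"
        if profile.material_harm_possible:
            return "high scrutiny: recurring fairness and accuracy reviews"
        return "moderate scrutiny: periodic monitoring"

    # A rules-based debt-matching system lands in the top tier,
    # whether or not anyone calls it "AI".
    print(governance_tier(SystemProfile(True, True, False)))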


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.