
A Balanced Focus on New and Established Algorithms

TL;DR
• While generative AI dominates discussions, established algorithms still drive core business decisions.
• Older systems may not meet current standards for fairness, accuracy, and security.
• Maintain a balanced approach by addressing integrity across all algorithmic systems, both new and established.

 

Even in discussions among AI governance professionals, there seems to be a silent “gen” before AI.

With the rapid progress, or rather prominence, of generative AI capabilities, these technologies have taken centre stage.

Large language models (LLMs) and broader generative AI are dominating discussions. They're capturing attention with their impressive capabilities, and rightly so.

However, amidst this excitement, we must not lose sight of the established algorithms and data-enabled workflows that have been driving our core business decisions for years.

These range from simple rules-based systems to complex machine learning models, each playing a crucial role in our operations.

In this article, we'll examine why we need to keep an eye on established algorithmic systems, and how.

Listen to the audio (human) version of this article - Episode 8 of Algorithm Integrity Matters

The Spectrum of Algorithmic Complexity

In financial services, we encounter a wide range of algorithmic systems.

The table below represents a basic outline of the complexity spectrum.

In practice, this order may vary, and many existing systems combine several algorithms from different “categories”. For example, fraud detection systems may combine algorithms from #2 and #4 to create a broader system (which is then more complex than either #2 or #4 alone).

| # | Type | Banking Example | Insurance Example |
|---|------|-----------------|-------------------|
| 1 | Simple Rules-Based | Automatic transaction categorisation (e.g., classifying purchases as "groceries" or "entertainment" based on merchant codes) | Basic policy eligibility checks (e.g., declining coverage for provisional license holders for high-performance vehicles) |
| 2 | Advanced Rules-Based | Multi-factor authentication systems that use a combination of rules to verify identity (e.g., checking location, device) | Claims triage systems that route claims to appropriate departments based on multiple criteria (e.g., claim type, amount) |
| 3 | Statistical Models | Credit scoring models | Pricing models |
| 4 | Machine Learning Models | Algorithms that detect fraudulent transactions in real-time | Models that help identify potentially fraudulent claims |
| 5 | Deep Learning and Neural Networks | Models that predict future cash flow patterns | Models that help assess property damage from satellite imagery |
| 6 | Generative AI | LLMs powering conversational AI for personalised service | LLMs summarising product disclosure statements to make them easier for customers to read |

As we move along the spectrum, the algorithms can handle more sophisticated tasks.

With increasing sophistication, there are new opportunities and challenges, but more sophisticated does not mean more important.
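To make the lower end of the spectrum concrete, here is a minimal sketch of a type-1 rules-based transaction categoriser. The merchant codes and category labels are assumptions for illustration only, not any institution's actual scheme.

```python
# Minimal sketch of a simple rules-based transaction categoriser (type 1).
# The merchant-code-to-category mapping below is a hypothetical example,
# not any institution's actual scheme.

MERCHANT_CODE_CATEGORIES = {
    "5411": "groceries",      # assumed code for grocery stores
    "5812": "entertainment",  # assumed code for restaurants
    "5813": "entertainment",  # assumed code for bars
}

def categorise_transaction(merchant_code: str) -> str:
    """Return a spending category for a merchant code, defaulting to 'other'."""
    return MERCHANT_CODE_CATEGORIES.get(merchant_code, "other")

if __name__ == "__main__":
    for code in ["5411", "5812", "9999"]:
        print(code, "->", categorise_transaction(code))
```

Even a system this simple can quietly drift: new merchant codes fall into the default category unless the rules are actively maintained.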

A Potentially Overlooked Challenge

It's easy to get caught up in the Gen AI hype.

While the new technologies grab headlines, a critical issue often goes unaddressed: our established systems still require significant work.

These systems have often not been subject to the same level of scrutiny and governance that we now expect.

  • For instance, a long-standing credit scoring model might be accurate in predicting defaults but lack fairness in its treatment of certain customer groups. Expectations around fairness are changing; a simple approval-rate comparison, sketched after this list, is one way such disparities can surface.
  • Or a “simple” system to calculate third-party commissions might have undetected inaccuracies.
  • Then there are external threats, with bad actors finding new ways to exploit vulnerabilities.
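One simple way a fairness disparity can surface is by comparing approval rates across customer groups. The sketch below assumes hypothetical column names ("group", "approved") and an illustrative 80% threshold; it is a first-pass check, not a full fairness assessment.

```python
# Illustrative sketch: comparing approval rates across customer groups.
# Column names ("group", "approved") and the 80% threshold are assumptions
# for illustration; this is one simple check, not a full fairness assessment.
import pandas as pd

def approval_rate_ratios(df: pd.DataFrame) -> pd.Series:
    """Approval rate per group, expressed as a ratio of the highest-rate group."""
    rates = df.groupby("group")["approved"].mean()
    return rates / rates.max()

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    ratios = approval_rate_ratios(decisions)
    print(ratios)
    # Flag groups whose approval rate falls below 80% of the best-off group
    print("Potential disparity:", list(ratios[ratios < 0.8].index))
```

In practice, fairness assessment involves choosing metrics, groups, and thresholds appropriate to the specific use case; the point here is simply that established models can, and should, be checked against current expectations.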

These issues pose serious reputational risks, regardless of a system's complexity. Failures in core systems, whether in fairness, accuracy, or security, can severely damage trust in our institutions.

Given these challenges, how can we ensure integrity in both new and established systems?

Addressing the Challenge

Before we dive into specifics, it’s important to recognise that focusing solely on generative AI gives an incomplete, distorted picture.

The use cases, inputs, processes, outputs, and risks of established systems are often very different from those of newer AI technologies.

It is useful to keep those use cases at the forefront when determining overall expectations, identifying specific risks, and designing policies.

With that in mind, here are a few steps that we can take to address the challenge:

  1. Holistic: Ensure that algorithm integrity efforts cover all systems, not just the latest AI technologies (a simple register, sketched after this list, is one starting point).
  2. Modernise: Update older systems to meet current expectations.
  3. Cross-functional perspectives: Involve diverse perspectives to improve fairness.
  4. Threat Modelling: Regularly assess how bad actors might exploit new and established algorithms.
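For step 1, one practical starting point is an inventory, or register, of every algorithmic system, regardless of where it sits on the spectrum. The sketch below shows one possible shape for such a register; the fields, entries, and review dates are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch of an algorithm register covering the full spectrum.
# The fields, example entries, and dates are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    name: str
    category: str        # e.g. "rules-based", "statistical", "ML", "generative AI"
    business_use: str
    owner: str
    last_reviewed: str   # ISO date of the most recent integrity review

register = [
    AlgorithmRecord("Transaction categoriser", "rules-based",
                    "Customer spend insights", "Retail Banking", "2021-03-01"),
    AlgorithmRecord("Credit scoring model", "statistical",
                    "Lending decisions", "Credit Risk", "2019-11-15"),
    AlgorithmRecord("Service chatbot", "generative AI",
                    "Customer support", "Digital", "2024-06-30"),
]

# Surface established systems that have not been reviewed recently,
# so they receive the same attention as the newer entries.
stale = [r.name for r in register if r.last_reviewed < "2023-01-01"]
print("Overdue for review:", stale)
```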

Going Forward: Maintaining a Balance Between Old and New

As we explore new AI technologies, we must continue the critical work needed on our established systems, including simple rules-based algorithms.

It's our responsibility to ensure that all our algorithms meet the highest standards of integrity.
