
Review triggers and practical checks for algorithmic systems (Part 1)

TL;DR
• Part 1 of a 5-part series.
• Some events clearly trigger deeper dives and retesting: new products, data feeds, internal and external reviews.
• To catch other potential issues, we can add a few practical checks.

 

Our algorithmic systems and models need regular attention to make sure they continue to operate accurately and fairly. Behaviour changes. Data changes. Feedback loops change how algorithms behave.

Depending on the system and the business, there are clear triggers for when we need a closer look at specific parts of the system. In this first article, we’ll discuss the typical triggers.

When those triggers don’t apply, or miss things, there are practical checks we can run as well. The next and subsequent articles in this series will each dive into one of those.

 

When we know we need a deeper dive (triggers)

There are many situations where deeper dives and retesting are already expected. 

1. Review cycles

Internal: our regular internal reviews are obvious moments to look harder at our algorithms and decision logic. These generally happen annually, or sometimes twice a year.

External: regular external reviews or assurance work are also clear triggers. They’re usually less frequent than internal reviews: every other year, or maybe annually.

2. Regulatory emphasis or attention

Regulators call out specific thematic issues: particular types of products, channels, customer groups or other topics. If any of those themes sound even slightly familiar, that’s a good reason to look more closely at the systems that could be involved.

3. New products and product changes

Launching a new product, or making a significant change to features, eligibility, pricing or underwriting raises questions like:

  • Which existing models, rules or scoring engines are we re‑using or tweaking?
  • What extra checks do we need to see whether they still behave as intended?

This is not news. We typically have a set of checks for when this happens.

4. New or changed data feeds

New third‑party data, new internal data sources, or changes to existing feeds can all, deliberately or inadvertently, change how algorithms work. For example:

  • A new field appears and is used in a score.
  • A provider changes definitions or formats.
  • Data gets dropped or truncated somewhere along the data flow.

Many teams have an agreed set of checks for when this happens. This is potentially harder, because it relies on other teams telling us about the change.
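The failure modes above can often be caught with lightweight assertions on each incoming batch, even when the providing team forgets to tell us. As a minimal sketch in Python (pandas), where the expected columns, null-rate threshold and minimum row count are all hypothetical values you would agree with the feed owner:

```python
import pandas as pd

# Hypothetical expectations for one feed; in practice these would be agreed
# with the providing team and versioned alongside the pipeline.
EXPECTED_COLUMNS = {"customer_id", "income_band", "postcode"}
MAX_NULL_RATE = 0.02     # tolerate a small amount of missing data
MIN_ROWS = 1000          # a batch far smaller than usual is a red flag

def check_feed(batch: pd.DataFrame) -> list[str]:
    """Return a list of human-readable issues found in one batch."""
    issues = []
    # 1. New or missing fields (a new field may silently enter a score).
    extra = set(batch.columns) - EXPECTED_COLUMNS
    missing = EXPECTED_COLUMNS - set(batch.columns)
    if extra:
        issues.append(f"unexpected columns: {sorted(extra)}")
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    # 2. Changed definitions or formats often show up as shifted null rates.
    for col in EXPECTED_COLUMNS & set(batch.columns):
        null_rate = batch[col].isna().mean()
        if null_rate > MAX_NULL_RATE:
            issues.append(f"{col}: null rate {null_rate:.1%} above threshold")
    # 3. Dropped data along the flow shows up as an unusually small batch.
    if len(batch) < MIN_ROWS:
        issues.append(f"only {len(batch)} rows (expected at least {MIN_ROWS})")
    return issues
```

Running this on every delivery and alerting on a non-empty result gives an early signal without waiting for another team to announce the change.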

5. Model or rule changes

Thresholds and cut-offs change. New rules are added “just for this promotion.” Champion/challenger models swap in and out. Each of these is a reason to do a bit more. We may have a defined schedule to check and recalibrate our models.

 

When we don’t know via a typical trigger (practical checks)

Some issues are easy to miss. Most teams have limited capacity for full reviews, so they may only happen annually. Sometimes we don’t know about product changes or data feed changes, especially if they’re done by a different team (which is quite typical). So we need something else to tell us where to look.

That’s where ongoing practical checks come in. These are small, repeatable checks we can run often (e.g. weekly, monthly, quarterly), or even ad-hoc, when there are no big events on the calendar. They use data we already have. They don’t replace deeper dives when we have a clear trigger. They sit alongside, picking up issues that would otherwise sit quietly between reviews.

None of these checks are complex. They don’t require new platforms or big projects. They’re ways to use what we already have to find issues earlier.
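To give a flavour of how small such a check can be: one common choice (an illustration here, not something this series prescribes) is to compare this period's score distribution against a baseline using a population stability index. A sketch with NumPy, assuming you already store model scores per period:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of model scores.

    A rule of thumb often used in practice: below 0.1 is stable, 0.1-0.25
    is worth a look, above 0.25 warrants investigation. Treat these as
    conventions, not standards.
    """
    # Bin edges come from the baseline so both periods are bucketed alike.
    lo, hi = baseline.min(), baseline.max()
    edges = np.linspace(lo, hi, bins + 1)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    # Clip current scores into the baseline range so nothing falls outside.
    curr_pct = np.histogram(np.clip(current, lo, hi), bins=edges)[0] / len(current)
    # A small floor avoids division by zero for empty buckets.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```

A monthly job that computes this per model and flags anything above the agreed threshold is exactly the kind of small, repeatable check meant here: it uses data we already have and points us at where a deeper dive is needed.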

 

Next

In the next article, we’ll start with the first of those practical checks: using complaints, feedback and interactions data.

 


Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.