
Review triggers and practical checks for algorithmic systems (Part 3)

TL;DR
• Part 3 of a 5-part series.
• Part 1 explained triggers for testing and deep dives: new products, new data, internal/external reviews.
• Part 2 covered the first practical check between triggered testing cycles: using complaints and feedback data.
• This article outlines the second practical check: drift, when outcomes change in ways we didn’t expect, or stay flat when they should change.

 

Our algorithmic systems and models need regular attention to make sure they continue to operate accurately and fairly. Behaviour changes. Data changes. Feedback loops change how algorithms behave.

Depending on the system and the business, there are clear triggers for when we need a closer look at specific parts of the system. In part 1, we discussed the typical triggers.

When those triggers don’t apply, or miss things, there are practical checks we can run as well. Part 2 explored the first practical check – using complaints, feedback and interactions data.

In this third article, we focus on the second practical check.

 

Check 2: Drift (or no drift)

Drift is about how our algorithmic outcomes change over time.

Data and modelling teams might talk about “data drift” (inputs change) and “concept drift” (the relationship between inputs and outcomes changes). They’ll often have more complicated definitions, but it’s the same basic idea and we’ll keep the jargon to a minimum here.
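To make "inputs change" concrete, here is a minimal sketch of a data drift check using the Population Stability Index (PSI), one common way teams compare an input's distribution between a baseline period and the current period. The bin count and the rule-of-thumb threshold of 0.2 are illustrative assumptions, not standards.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of a numeric input.
    Near 0 means the distributions match; larger values mean drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Small floor avoids log(0) / division by zero for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    b, c = bucket_shares(baseline), bucket_shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

last_quarter = [100, 110, 120, 130, 140, 150, 160, 170, 180, 190]
this_quarter = [150, 160, 170, 180, 190, 200, 210, 220, 230, 240]
print(psi(last_quarter, this_quarter))  # well above 0.2: the input has shifted
```

In practice a team would run a check like this per input field and per segment, and treat a high value as a prompt to investigate, not as a verdict.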

For our purposes, there are two sides to this:

  1. Unexpected Drift. Outcomes change, and we weren’t anticipating it.
  2. Missing Drift. We expect outcomes to change, but they don’t.

Again, as with the first practical check, these are supplementary checks between our formal reviews or other triggers. They certainly do not replace proactive testing.

 

1. Unexpected Drift

Here we’re watching how outcomes move and then trying to explain why.

We already track the basics when measuring business performance: approval rates, loss ratios, average claim value, premium changes, fraud rates, lapses, retention. We can extend their use, repurposing them (so to speak) as sanity checks on our algorithms.

If numbers shift more than expected, something in the business, model, rules or data has changed.
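One simple way to operationalise “shift more than expected” is to compare the latest value of a metric against its recent history. The sketch below flags any observation more than three standard deviations from the historical mean; the metric name, sample values and 3-sigma threshold are all assumptions for illustration.

```python
import statistics

def shifted(history, latest, sigmas=3.0):
    """True if `latest` sits more than `sigmas` standard deviations
    away from the mean of the historical observations."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(latest - mean) > sigmas * sd

# Hypothetical weekly approval rates for one channel.
approval_rates = [0.71, 0.72, 0.70, 0.73, 0.71, 0.72]
print(shifted(approval_rates, 0.58))  # True: a drop worth investigating
print(shifted(approval_rates, 0.71))  # False: within the normal range
```

A real implementation would usually account for seasonality and trend, but even this crude version turns “keep an eye on the numbers” into something that can run automatically.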

Sometimes a shift is fine. A new product, a pricing change, a marketing campaign, a flood event that drives up claims, a government rebate that triggers more home loan applications. In those cases, we ask: “Does what we’re seeing make sense given what’s happening in the business?”

If the answer is yes, we move on.

If not, the system may not be behaving as we originally intended. (Note: it could also reflect a genuine change in customer behaviour, but we’ll park that for now.)

We then ask: “Is this change plausibly explained by what we know, or do we need to look at the decision logic?”

Unexplained shifts can mean some customers are now being treated better or worse than we intended. For example:

  • A gradual drop in approval rates in one channel, with no change in application mix.
  • A sharp rise in average claim size for a subset of policies, with no obvious external driver.
  • A sudden improvement in loss ratio for a high‑risk segment.

In those cases, we may be missing whole groups of customers from important decisions, or from our reporting data. If so, we check things like whether data sources have changed (e.g. new fields, a change in field name, a new data provider), or rules/models have been added or tweaked.
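The data-source question above can be partly automated: compare the fields we expect against the fields actually arriving. This is a minimal sketch; the field names are made up for the example, and a renamed field shows up as one “missing” plus one “new” entry.

```python
def schema_diff(expected, actual):
    """Report fields that have disappeared or appeared since the
    last review of a data source."""
    expected, actual = set(expected), set(actual)
    return {
        "missing": sorted(expected - actual),  # fields our logic relies on
        "new": sorted(actual - expected),      # fields nobody has reviewed
    }

expected_fields = ["customer_id", "channel", "application_date", "income"]
actual_fields   = ["customer_id", "channel", "application_dt", "income"]
print(schema_diff(expected_fields, actual_fields))
```

A non-empty result doesn’t prove the algorithm is broken, but it is exactly the kind of quiet upstream change that produces unexplained drift downstream.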

We can’t catch everything with outcome drift. But we can catch some of the “didn’t realise that change would do that” types of problems much earlier.

 

2. Missing Drift

The opposite is equally important. When we expect numbers to move but they don’t, that might reveal deeper issues.

Let’s say our peers are seeing an uptick in fraud, or insurance switching, or refinancing. Industry data and conversations tell us there’s activity in the market. We’d reasonably expect to see some impact in our own numbers.

If our metrics are perfectly flat, there are a few possibilities:

  • We’re genuinely different to our peers (product, customer base, risk profile, marketing/sales focus) and not exposed to the specific trend.
  • We’re late to the trend, and need to keep an eye on how our data moves over the next few periods.
  • Our systems are not seeing or not processing the full volume.

That last one is the worrying one. For example, we might have:

  • Hard‑coded upper limits on records to process. Once volume passes that limit, our system ignores the rest.
  • Filters or pre‑screen rules that are excluding the cases we most need to see.
  • Batch jobs or integrations that start failing under load, with poor monitoring.

“Expected but missing” drift can be a major red flag. Again, the question is simple: “Given what we know about the market and our plans, would we expect to see some movement here?” If the answer is “yes”, and the line is flat, we check our algorithms and data flows.
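A suspiciously flat line can also be checked mechanically. The sketch below flags a volume series where every observation sits at the series maximum, one signature of a hard-coded processing cap being hit. The cap value and zero tolerance are assumptions for illustration.

```python
def looks_capped(volumes, tolerance=0):
    """True if every observation equals the series maximum (within
    `tolerance`), hinting that a hard upper limit is being hit."""
    cap = max(volumes)
    return all(cap - v <= tolerance for v in volumes)

print(looks_capped([5000, 5000, 5000, 5000]))  # True: investigate the cap
print(looks_capped([4100, 4700, 5000, 4350]))  # False: normal variation
```

Real volumes are noisy, so a production version would test for unusually low variance rather than exact equality, but the principle is the same: flat is a signal too.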

 

Whether it’s drift or no drift, the point is the same: watching for these unexplained shifts (or silences) helps us catch problems before they hurt customers or business outcomes.

 


Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.