Articles: algorithm integrity in FS | Risk Insights Blog

Review triggers and practical checks for algorithmic systems (Part 5)

Written by Yusuf Moolla | 29 Apr 2026
TL;DR
• The final article in a 5-part series.
• Part 1 explained triggers for testing and deep dives: new products, new data, internal/external reviews.
• Parts 2, 3 and 4 each covered a practical check between triggered testing cycles.
• This article outlines another practical check: frequent overrides.
• Overrides are sometimes passive internal “complaints” about rules and models. In other cases, they may tell us more about behaviour and training than about the model.

 

Our algorithmic systems need regular attention to keep operating accurately and fairly.

There are clear triggers for when we need a closer look, discussed in part 1. When those don’t apply, or when they miss things, we have practical checks: part 2 explored complaints, part 3 focused on drift, and part 4 looked at operational friction.

This is the final article in the series, describing a fourth practical check.

 

Check 4: Frequent overrides

Sometimes the system makes a decision, but we change it. Or it suggests a decision, and we don’t follow it.

Those are overrides: from decline to approve, high price to lower price, claim triaged for investigation to approved.

Complaints (from part 2) are customers telling us they think something is wrong. Overrides can be our colleagues doing the same, passively showing us that the rules or models don’t fit certain cases.

But that is only one possibility. Frequent overrides can also result from:

    • known and expected manual decisions, for edge cases (for example, hardship or vulnerability)
    • sales or time pressure, with staff cutting corners
    • training, culture or behaviour issues, with people ignoring a model that is aligned to policy.

So we treat frequent overrides as a prompt to ask why they’re happening. They’re not automatic proof that our model is wrong, but they can mean we need to change our system, policy or guidance.

 

How this plays out

Suppose our lending engine recommends “decline” for a lot of small limit increase requests. Our front-line staff keep switching those from decline to approve, perhaps for a subset of customers with steady income and no recent issues.

Or our claims fraud engine flags a lot of claims for investigation as possibly involving inflated costs. Our staff keep clearing those for payment, perhaps for a subset of customers with minor property damage and a clean claims history.

On paper, the scorecard might look fine. In practice, the override pattern is telling us something is off. It could be a known issue, such as edge cases the system can’t handle, where manual review is expected. Or it could be:

  • the policy is too tight for that subset of customers (policy needs review)
  • the policy is fine, but the system implementation is stricter than the policy (system needs to change to align)
  • the policy and system are both fine, but staff are taking more risk than intended due to time pressure or sales target pressure (then it probably has nothing to do with the system).

 

Three things we can look for

  1. Where the overrides occur: for example, particular products, or customer segments with noticeably higher override rates than others.
  2. Which way decisions are moving: for example, mainly decline → approve, or perhaps approve → decline (where the decision is manual but uses a systematic recommendation).
  3. Who is doing the overriding: for example, particular teams, roles or locations that override much more or much less than peers.
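As a rough sketch, all three views can be computed from a decision log. The field names, decision values and figures below are illustrative assumptions, not a real schema:

```python
from collections import Counter

# Hypothetical decision log: (segment, team, model_decision, final_decision).
# Field names and values are assumptions for illustration only.
log = [
    ("retail", "A", "decline", "approve"),   # override
    ("retail", "A", "decline", "decline"),
    ("retail", "B", "decline", "approve"),   # override
    ("retail", "B", "decline", "approve"),   # override
    ("sme",    "A", "decline", "decline"),
    ("sme",    "A", "approve", "approve"),
]

def override_rate(records, key_index):
    """Override rate (final decision differs from model) grouped by one field."""
    totals, overrides = Counter(), Counter()
    for rec in records:
        totals[rec[key_index]] += 1
        if rec[2] != rec[3]:
            overrides[rec[key_index]] += 1
    return {k: overrides[k] / totals[k] for k in totals}

# 1. Where: override rate by customer segment
by_segment = override_rate(log, 0)   # {'retail': 0.75, 'sme': 0.0}

# 2. Which way: direction of overridden decisions
direction = Counter((m, f) for _, _, m, f in log if m != f)

# 3. Who: override rate by team
by_team = override_rate(log, 1)      # {'A': 0.25, 'B': 1.0}
```

In practice the log would come from the decision system’s audit trail, and the rates would be compared across peers and over time rather than read in isolation.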

If we see the same pattern repeatedly, and the data is good enough to trust, we probably need to look more closely at the underlying logic, data, policy and process flows.

Sometimes that leads to policy changes, model changes, or both. Sometimes it leads to clearer guidance and training, or tells us we should tighten override discretion to avoid unfair or inconsistent outcomes.

 

Close

That's the end of this series.

The triggers and practical checks should collectively help keep our algorithmic systems fair and accurate.

 

Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.