Articles: algorithm integrity in FS | Risk Insights Blog

Review triggers and practical checks for algorithmic systems (Part 4)

Written by Yusuf Moolla | 22 Apr 2026
TL;DR
• Part 4 of a 5-part series.
• Part 1 explained triggers for testing and deep dives: new products, new data, internal/external reviews.
• Parts 2 and 3 each covered a practical check between triggered testing cycles.
• This article outlines the third practical check: operational friction, where front-line staff issues point to model-related problems.


Our algorithmic systems need regular attention to continue to operate accurately and fairly. 

There are clear triggers for when we need a closer look, discussed in Part 1. When those triggers don’t apply, or when they miss something, there are practical checks we can run. Part 2 explored complaints; Part 3 focused on drift.

In this fourth article, we focus on another practical check.


Check 3: Operational friction

It’s easy for friction to slip through unnoticed. It usually starts small, for example staff quietly fixing something that seems harmless. Left alone, those fixes can become signs of much deeper model misalignment. When our colleagues start fighting the system, that can tell us that something in the logic or data is off.

Our algorithm might be within tolerance, passing the usual performance checks. It seems to behave as designed and passes our technical model reviews. But, in practice, something is off. For example, we may have a fraud model that flags so many cases of a particular type that staff start ignoring them.

Of course, not every pushback means the model’s wrong. Sometimes the problem is with the workflow or interface, and other times the friction itself is the safeguard that keeps automated decisions fair.


3 common friction scenarios

Friction shows up in many ways. Sometimes the fix is simple: a small threshold adjustment or clarifying a rule. Other times, friction signals a bigger issue that needs a deeper review through one of the main triggers or other checks.

Here are three common ways that friction shows up:


1. Manual review/rework

When staff talk about needing to do a lot of manual review or rework, especially when it relates clearly to a specific decision engine, the underlying cause may be our model or system.

If an issue keeps showing up in team meetings or Teams/Slack chats, it’s a signal. Create a quick way for teams to log issues that seem to be about a model getting in the way. This could be a simple tag that we can search on periodically and work through. Not every gripe is useful data; consistency over time matters more than volume.
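To make the idea concrete, here is a minimal sketch of working through such a log. All field names, tags, and the "at least two mentions" cut-off are illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

# Hypothetical friction log: each entry is a free-text note plus an optional
# tag linking it to a decision engine. Field names are illustrative only.
friction_log = [
    {"note": "Had to re-key the score manually", "model_tag": "fraud-engine-v2"},
    {"note": "Override needed again for SME segment", "model_tag": "credit-risk-v1"},
    {"note": "Printer jammed", "model_tag": None},
    {"note": "Another override for SME segment", "model_tag": "credit-risk-v1"},
]

# Count entries per tagged model; untagged gripes are excluded.
counts = Counter(e["model_tag"] for e in friction_log if e["model_tag"])

# Surface models mentioned repeatedly. Since consistency over time matters
# more than volume, a real version would also bucket entries by week/month.
recurring = {tag: n for tag, n in counts.items() if n >= 2}
print(recurring)  # -> {'credit-risk-v1': 2}
```

In practice the "log" could just be a tagged channel or ticket queue; the point is that a searchable tag turns scattered gripes into something we can periodically count.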


2. Workarounds

Front-line staff find ways around situations they see recurring. This can include building workarounds, sometimes as “shadow” spreadsheets.

Staff shortcuts or shadow tools sometimes just fill a gap that can’t yet be built into the system; that’s a process problem, not what we’re after here. In other cases, though, the workaround may be compensating for something in the model or system that we should check.


3. Abandoned alerts

A high volume of abandoned (ignored or closed without action) alerts is a warning sign.

So we check the reasons for these abandoned alerts. If many alerts share the same reason for abandonment, that could point to a rule that’s not firing correctly or a threshold set too sensitively. This matters because the resulting noise could easily mask a real alert that we want to investigate.
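A simple way to run this check is to tally abandoned alerts by rule and abandonment reason. This is a sketch under assumed field names and an illustrative cut-off, not a definitive implementation:

```python
from collections import Counter

# Hypothetical alert records: the rule that fired plus the abandonment
# reason (None means the alert was actioned). Field names are assumptions.
alerts = [
    {"rule": "R14", "abandon_reason": "duplicate of earlier alert"},
    {"rule": "R14", "abandon_reason": "duplicate of earlier alert"},
    {"rule": "R14", "abandon_reason": "duplicate of earlier alert"},
    {"rule": "R02", "abandon_reason": None},
    {"rule": "R02", "abandon_reason": "known false positive"},
]

# Tally (rule, reason) pairs for abandoned alerts only.
reasons = Counter(
    (a["rule"], a["abandon_reason"]) for a in alerts if a["abandon_reason"]
)

# Flag any rule/reason pair that dominates: a candidate for a mis-set
# threshold or a rule that isn't firing correctly.
flagged = [(rule, reason, n) for (rule, reason), n in reasons.items() if n >= 3]
print(flagged)  # -> [('R14', 'duplicate of earlier alert', 3)]
```

The same grouping works at any scale; in a real case-management system we would pull the equivalent fields from the alert queue and review the dominant reasons with the team that abandons them.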


Useful, but not a replacement for the main triggers

These friction points tell us where our models aren’t matching how people actually work.

As with the other practical checks, this one is supplementary; it doesn’t replace proactive testing.


Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.