Our algorithmic systems need regular attention to continue to operate accurately and fairly.
There are clear triggers that tell us when a closer look is needed; we discussed these in part 1. When those triggers don’t apply, or miss something, there are practical checks we can run: part 2 explored complaints, and part 3 focused on drift.
In this fourth article, we focus on another practical check: friction.
It’s easy for friction to slip through unnoticed. It usually starts small, for example staff fixing something that seems harmless. But left alone, those small fixes can point to much deeper model misalignment. When our colleagues start fighting the system, it can mean that something in the logic or data is off.
Our algorithm might be within tolerance, passing the usual performance checks. It behaves as designed and clears our technical model reviews. But in practice, something is off. For example, we may have a fraud model that flags so many cases of a particular type that staff start ignoring them.
Of course, not all pushback means the model’s wrong. Sometimes the problem is with the workflow or interface, and other times the friction itself is the safeguard that keeps automated decisions fair.
Friction shows up in many ways. Sometimes the fix is simple: a small threshold adjustment or a clarified rule. Other times, friction signals a bigger issue that needs a deeper review through one of the main triggers or other checks.
Here are three common ways that friction shows up:
When staff talk about needing to do lots of manual review or rework, especially when it clearly relates to a specific decision engine, it can point to a problem with our model or system.
If an issue keeps showing up in team meetings or Teams/Slack chats, it’s a signal. Create a quick way for teams to log issues that seem to be about a model getting in the way. This could be a simple tag that we can search on periodically and work through. Not every gripe is useful data; consistency over time matters more than volume.
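As a rough illustration of that periodic search, here is a minimal sketch that tallies tagged reports per system from an exported issue log. The file name, the column names and the “#model-friction” tag are assumptions made for the example, not a prescribed format.

```python
# Minimal sketch: tally friction reports that teams have tagged, per system.
# Assumes a hypothetical CSV export (friction_log.csv) with "tag", "system"
# and "note" columns; the column names and the tag itself are illustrative.
import csv
from collections import Counter

FRICTION_TAG = "#model-friction"  # whatever tag the teams agree to use

def tally_friction(path: str) -> Counter:
    """Count tagged reports per decision engine / system."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if FRICTION_TAG in (row.get("tag") or ""):
                counts[row.get("system") or "unknown"] += 1
    return counts

if __name__ == "__main__":
    for system, n in tally_friction("friction_log.csv").most_common():
        print(f"{system}: {n} tagged reports")
```

Even a crude tally like this makes the consistency point concrete: a system that keeps appearing month after month is worth a closer look, whatever the absolute numbers.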
Front-line staff find ways around situations they see recurring. This can include building workarounds, sometimes as “shadow” spreadsheets.
Staff shortcuts or shadow tools sometimes just fill a gap that can’t yet be built into the system; that’s a process problem, not what we’re after here. In other cases, though, they can point to something in the model or system for us to check.
A high volume of abandoned (ignored) alerts is a warning sign worth digging into.
So we check the reasons recorded for these abandoned alerts. If many alerts share the same abandonment reason, that could point to a rule that isn’t firing correctly or has a threshold that needs recalibrating. This matters because a flood of ignored alerts can easily mask a real alert that we want to investigate.
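As a rough sketch of what that check could look like, the example below groups abandoned alerts by rule and flags any rule where a single abandonment reason dominates. The file name, the column names and the cut-offs are illustrative assumptions rather than a standard.

```python
# Minimal sketch: flag rules whose abandoned alerts share one dominant reason.
# Assumes a hypothetical CSV export (alerts.csv) with "rule_id", "status" and
# "abandon_reason" columns; the 50-alert and 70%-share cut-offs are illustrative.
import csv
from collections import Counter, defaultdict

def flag_noisy_rules(path: str, min_alerts: int = 50, share: float = 0.7):
    """Return rules where one abandonment reason accounts for most abandoned alerts."""
    reasons_by_rule = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("status") == "abandoned":
                rule = row.get("rule_id") or "unknown"
                reason = row.get("abandon_reason") or "unspecified"
                reasons_by_rule[rule][reason] += 1

    flagged = []
    for rule_id, reasons in reasons_by_rule.items():
        total = sum(reasons.values())
        top_reason, top_count = reasons.most_common(1)[0]
        if total >= min_alerts and top_count / total >= share:
            flagged.append((rule_id, top_reason, top_count, total))
    return flagged

if __name__ == "__main__":
    for rule_id, reason, count, total in flag_noisy_rules("alerts.csv"):
        print(f"Rule {rule_id}: {count}/{total} abandoned alerts cite '{reason}' - review its logic and threshold")
```

Requiring both a minimum volume and a dominant share keeps the check from flagging rules with only a handful of alerts; the exact cut-offs would need tuning to our own alert volumes.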
These friction points tell us where our models aren’t matching how people actually work.
As with the other practical checks, this is a supplementary check and doesn't replace proactive testing.
Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.