Algorithm integrity: compliance is not the end goal

TL;DR
Algorithm risk management should drive business results, not just satisfy auditors or tick compliance boxes.
• Focus on customer outcomes and revenue/cost optimisation.
• Start with your specific business context and risks, not frameworks or "best practice" certifications.

 

We can build beautiful compliance frameworks with comprehensive procedures.

But what looks good on paper can still deliver poor results. Even with perfect documentation, our algorithms could decline good customers, approve risky ones, and frustrate others with poor experiences. The governance can appear to work perfectly while stifling real business outcomes. And too much focus on "best practice" can distract from the real risks and opportunities.

Of course compliance matters. Regulatory requirements are important. But they set the minimum standard, not the goal.

In simpler terms: GRC = Governance, Risk AND Compliance; not Governance & Risk FOR Compliance.

 

Frameworks, repositories and certifications

We often reach for guides or frameworks such as ISO standards, NIST frameworks or risk databases. It seems like the easy option. In reality, these can be easier to start with but harder to execute well.

A leading university created an AI Risk Repository with more than 1,600 risks. If we start with that many, how likely are we to finish, and what will we achieve? We could spend months categorising theoretical problems, while our competitors are solving actual business problems.

Yes, these frameworks, guides and repositories can help us cross-check that we haven't missed anything. But they limit our thinking when we don't start with our own specific context and risks.

We can also get caught up in chasing compliance certifications: documenting processes to impress auditors instead of focusing our efforts on serving customers better, and reviewing whether procedures were followed rather than whether algorithms are delivering business outcomes.

 

The key difference

Take bias in fraud detection algorithms.

A compliance focus can mean basic fairness checks to satisfy audit requirements.

But biased scoring could cost us a lot of money, through both wasted investigation time and missed actual fraud. That makes it a real business problem worth pouring more resources into, because the result is a measurable outcome rather than just a regulatory sign-off or a ticked box.
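
To make that concrete, here is a minimal sketch in Python. The group labels, error counts and unit costs are entirely hypothetical assumptions for illustration, not figures from any real portfolio; the point is simply that scoring errors can be expressed in dollar terms and compared across customer groups.

# Illustrative only: hypothetical unit costs and error counts, not real data.
COST_PER_FALSE_POSITIVE = 50     # analyst time spent reviewing a legitimate customer
COST_PER_MISSED_FRAUD = 2_000    # average loss when real fraud is not flagged

# Hypothetical scoring errors from the current model, split by customer group
outcomes = {
    "group_a": {"false_positives": 400, "missed_fraud": 20},
    "group_b": {"false_positives": 1_500, "missed_fraud": 35},
}

for group, counts in outcomes.items():
    cost = (counts["false_positives"] * COST_PER_FALSE_POSITIVE
            + counts["missed_fraud"] * COST_PER_MISSED_FRAUD)
    print(f"{group}: estimated cost of scoring errors = ${cost:,}")

# A large gap between groups is both a fairness issue and a quantifiable
# business problem - a measurable outcome, not just a box to tick.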

What's really interesting is that the business-focused approach probably delivers more robust compliance than the compliance-focused approach, because it directly addresses the underlying business problem.

 

Algorithm integrity should drive business results

We could focus on whether our models meet audit requirements. Or we could use algorithm integrity to optimise approval rates for good customers while turning high-risk customers away. Fair credit and pricing algorithms offer competitive rates to qualified customers, based on real risk factors rather than demographics.

We could focus on compliance. Or we could use algorithm integrity to save money and improve customer experience. Better fraud detection systems catch more real fraud and flag fewer innocent customers (fewer false positives).

If we get too caught up in documentation and compliance, we could miss these opportunities entirely.

 

The bottom line

In banking and insurance, algorithm performance directly influences competitiveness. Superior algorithms help win the right customers and reduce costs.

Instead of building "best practice" documentation to satisfy auditors and compliance teams, we can use algorithm integrity to improve business results.

Are our algorithm integrity efforts making our businesses more competitive, or just keeping our auditors happy?

 


Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.