Articles: algorithm integrity in FS | Risk Insights Blog

ISO 42001: A Foundation, not a Solution

Written by Yusuf Moolla | 11 Jun 2025
TL;DR
• A management system standard: covers governance processes, but doesn't ensure integrity or fairness.
• Mixed benefits: can build credibility, can create a false sense of security.
• Certification isn’t a must-have: the rationale for pursuing it should be solid.

 

ISO 42001 is a relatively new standard for AI management systems.

It launched in December 2023, and a lot of people are talking about it, especially consultants and vendors. So, naturally, compliance teams are asking about it, and executives want to know if they need it.

Is it the answer to AI risk management?

It's getting attention, but like most standards, it's not a silver bullet.

The most important limitation is by design: the standard is about establishing a management system, and won't necessarily solve AI problems like bias. This creates a gap between what people expect and what it delivers.

In other words, it tells you to have processes for risk assessment, bias monitoring, etc. But it doesn't tell you how to eliminate bias from your lending algorithms or what constitutes fair treatment in insurance pricing.
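To make that gap concrete: the standard requires a bias-monitoring process but doesn't prescribe a metric, so choosing one (and a threshold) is still on you. Below is a minimal sketch of one common, and contested, choice: demographic parity across applicant groups. The data, function names, and the four-fifths threshold are illustrative assumptions, not anything ISO 42001 specifies.

```python
# Hypothetical sketch: comparing loan-approval rates across groups.
# ISO 42001 asks for a bias-monitoring process; the metric below
# (demographic parity ratio) and the 0.8 threshold (the "four-fifths
# rule") are assumptions for illustration only.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of lowest to highest approval rate (1.0 = perfect parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", True)]
rates = approval_rates(decisions)   # A approves 75%, B approves 50%
ratio = parity_ratio(rates)         # 0.5 / 0.75 ≈ 0.67
print(f"parity ratio {ratio:.2f} - {'OK' if ratio >= 0.8 else 'investigate'}")
```

A process that runs this check on a schedule could be fully ISO 42001-conformant whether the ratio is 0.95 or 0.4; the standard governs that you monitor, not what result counts as fair.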

Note: there are other complementary standards - current or in development - that address specific aspects like privacy, security, and lifecycle management. 

This article highlights some of the benefits and limitations of ISO 42001.

 

The Benefits

1. A structured starting point: It provides a structured approach to AI governance, which builds credibility and can reduce scrutiny (from regulators, for example).

2. Provides a common language: It helps internal AI governance efforts progress by describing AI risks in consistent terms.

3. Lifecycle coverage: Used thoughtfully, it makes you think through the entire journey, from concept through deployment to ongoing monitoring. This matters because AI systems change and expectations shift over time.

 

The Limitations

1. False sense of security: If the gaps aren't understood, or are forgotten, teams can assume they're covered and need do nothing more. This can have the perverse effect of reducing vigilance.

2. Management system, not outcomes: The standard is about having a management system in place. It gives you a framework for governing AI, but it doesn't solve the fundamental technical challenges of making AI fair, transparent, or unbiased. Because it focuses on having the right processes rather than achieving the right results, you can be fully compliant, even certified, and still deploy algorithms that unfairly reject loan applications or price insurance premiums based on protected characteristics.

3. Controls: Annex A includes 38 controls covering governance, data management, risk assessment, and lifecycle management. But Annex A is NOT a "comprehensive" list of controls, as some are claiming. While the controls are tangible, a principles-focused approach helps broaden the discussion and thinking. In practice, a set of principles, with the controls as examples to stimulate thinking, can work well. So don't ignore or discard the controls, but don't let them be a limiting factor: go beyond the specific stated controls to address the risks and opportunities.

4. Certification: Certification attracts a lot of attention, but using the standard does not mean you must pursue it. It can be overvalued: ISO certification is nice to have, but it doesn't carry as much weight as a proper annual independent assurance process. We've written about this before. Certification is not a trivial exercise; before pursuing it, be clear about why you're doing it and what you expect to achieve.

 

The Bottom Line

ISO 42001 is useful - it gives you structure and shows intent.

But don't mistake having the framework for fully managing AI risk. Your customers (and, potentially, regulators) care about what your AI does, not what your processes say it should do.

It is a foundation, not the ultimate result – it won’t, alone, prevent unfair customer treatment.

 

Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.