
Curated Insight: Selected developments to watch in 2026

TL;DR
• Aug 2026: full application of EU AI Act "High-Risk" AI rules, incl. credit and insurance.
• Operational Resilience: becoming a key supervisory lens for AI, incl. third-party providers.
• The RAISE-Fin project (UK): practical tools for managing AI hallucinations in FS.


As we close out 2025, here are three key developments to watch for next year.


1. The High-Risk deadline (EU AI Act)

The EU AI Act is already in force, but 2 August 2026 is the date that matters most for banks and insurers. That’s when the obligations for High-Risk AI systems (including credit scoring, loan evaluation, and insurance pricing) become fully applicable.

Recital 58 notes that AI systems used for credit scoring should be classified as high-risk because they determine access to "financial resources or essential services such as housing, electricity, and telecommunication services". Similarly, it explains that systems used for life and health insurance pricing can have a "significant impact on persons’ livelihood" and, if not duly designed, "can lead to serious consequences... including financial exclusion and discrimination".


2. AI as an Operational Resilience issue

There are signals of a shift in focus for 2026: treating AI not just as a model risk, but as an operational resilience risk. Recent communications from Australia's APRA and the UK’s PRA, among others, point in this direction.

The questions from supervisors might change. Instead of just asking "is your credit model fair?", they may ask "what is your contingency plan if your third-party AI claims processor goes offline?" This may shift some of the burden from data scientists to operational risk teams.
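To make that supervisory question concrete, here is a minimal sketch of one contingency pattern a firm might describe: retry the third-party processor with backoff, then degrade gracefully to a human review queue rather than failing the claim outright. All names here (the client class, its assess method, the queue) are hypothetical illustrations, not any real vendor's SDK.

import time

class ThirdPartyUnavailable(Exception):
    """Raised when the external AI claims processor cannot be reached."""

class StubClaimsAI:
    """Stand-in for a vendor client; it always fails here, to exercise the fallback."""
    def assess(self, claim):
        raise ThirdPartyUnavailable("service offline")

def process_claim(claim, client, manual_queue, retries=2):
    """Try the AI processor with backoff; route to human review if it stays down."""
    for attempt in range(retries + 1):
        try:
            return client.assess(claim)
        except ThirdPartyUnavailable:
            time.sleep(0.1 * 2 ** attempt)  # simple exponential backoff between retries
    # Contingency: degrade to manual handling rather than dropping the claim
    manual_queue.append(claim)
    return {"status": "queued_for_manual_review", "claim_id": claim["id"]}

queue = []
print(process_claim({"id": "CLM-001"}, StubClaimsAI(), queue, retries=1))
# -> {'status': 'queued_for_manual_review', 'claim_id': 'CLM-001'}

The design point is the last two lines of the function: the firm can still answer "what happens to the claim?" when the third party is offline, which is the kind of evidence an operational resilience review would look for.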


3. Practical auditing for hallucinations (The RAISE-Fin Project)

There’s a lot of talk about AI governance frameworks, but practical tools for handling hallucinated output are still thin on the ground.

One example of work underway to address this is RAISE-Fin, a new UK project looking directly at generative AI reliability and the risk of hallucinations.

The researchers are focusing on incorrect or misleading responses generated by AI tools in financial services contexts. They aim to produce guidance, auditing methods, and policy recommendations to help FS firms detect and govern this specific risk. Those outputs could help firms check that their models are safe to use.
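The project's outputs are not yet published, but as a flavour of what "detecting" can mean in practice, here is a toy sketch (our own illustration, not RAISE-Fin's methodology): it flags numeric figures in a generated answer that don't appear in the source document. This is a crude groundedness check that real auditing methods would refine considerably.

import re

def unsupported_figures(answer: str, source: str) -> list[str]:
    """Return numbers quoted in the answer that cannot be found in the source."""
    figures = re.findall(r"\d+(?:\.\d+)?%?", answer)
    return [f for f in figures if f not in source]

source = "The policy excess is 250 GBP and the claim limit is 10000 GBP."
answer = "Your excess is 250 GBP and the limit is 15000 GBP."
print(unsupported_figures(answer, source))
# -> ['15000'] -- a figure the source does not support, worth routing to human review

Even a check this simple illustrates the shape of the problem: the hard part is not flagging a mismatch, but deciding which mismatches matter and who reviews them, which is exactly where guidance and auditing methods are needed.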

Disclaimer: The information in this article is not legal advice and may not be relevant to your circumstances. It was written for specific contexts within banks and insurers and may not apply to other contexts or other types of organisations.