
AI Risk Training: Role-based tailoring

TL;DR

  • AI literacy is growing in importance (e.g., EU AI Act, IAIS).
  • AI literacy needs vary across roles.
  • Even "AI professionals" need AI Risk training.

 

We touched on the importance of Algorithm/AI awareness and education in an article last month.

 

Growing expectations

Article 4 of the EU AI Act is dedicated to AI literacy. 

In this recent podcast episode, Ryan Carrier explained two important things to note about this Article:

  1. Most of the provisions of the Act come into force in August 2025, with others in 2026 or later. But AI literacy requirements will start to apply on 2 February 2025.
  2. Most of the provisions of the Act focus on high-risk AI systems. But AI literacy requirements apply to all AI systems.

The International Association of Insurance Supervisors (IAIS) is developing a guidance paper on the supervision of AI, with the draft version containing various references to education, training, and awareness.

So, this is an important topic.

 

Tailored to the audience

To be effective, AI literacy efforts need to be tailored to the audience. What the data scientists need to be aware of is quite different to what most employees need to know. So, the training will need to be differentiated.

In the same podcast episode, Ryan Carrier explained one way to solve this: ForHumanity developed distinct personas.

They look something like this:

  • AI Leaders
  • Top Management & Oversight Bodies
  • Employees working with the AI system
  • All other employees
  • AI Subjects

The personas represent different levels of interaction with AI systems and corresponding literacy needs.

Let’s explore each of the personas, noting that some of these may look different for you, depending on your jurisdiction, the types of systems you’re implementing, etc.

AI Leaders

AI Leaders typically include AI project managers, data scientists, and machine learning engineers.

Responsibilities include:

  • Developing and implementing AI models for credit scoring, fraud detection, and risk assessment
  • Staying updated on the latest AI technologies and their potential applications in finance

Challenge: this group is often neglected when it comes to risk-focused training, possibly because it is assumed they are already aware of the risks, which is often not the case. Some data science professionals are very technically focused, so you may come across models that are "accurate" but still exhibit bias, as the sketch below illustrates.
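To make the "accurate but biased" point concrete, here is a minimal sketch (a made-up Python example, not tied to any particular model, library, or real dataset): overall accuracy can look healthy while the false positive rate, i.e. the rate at which legitimate customers are wrongly flagged, differs markedly between groups.

```python
# Minimal sketch: a model can look "accurate" overall while its errors fall
# unevenly across groups. All numbers below are invented for illustration only.

def rates(records):
    """Compute overall accuracy and the false positive rate per group."""
    correct = sum(1 for r in records if r["predicted"] == r["actual"])
    accuracy = correct / len(records)

    fpr_by_group = {}
    for group in {r["group"] for r in records}:
        negatives = [r for r in records if r["group"] == group and r["actual"] == 0]
        false_pos = [r for r in negatives if r["predicted"] == 1]
        fpr_by_group[group] = len(false_pos) / len(negatives)
    return accuracy, fpr_by_group


# Hypothetical scoring outcomes: 1 = flagged/declined, 0 = approved.
records = (
    # Group A: 50 correct approvals, 2 wrongly flagged, 5 correctly flagged
    [{"group": "A", "actual": 0, "predicted": 0}] * 50
    + [{"group": "A", "actual": 0, "predicted": 1}] * 2
    + [{"group": "A", "actual": 1, "predicted": 1}] * 5
    # Group B: 30 correct approvals, 8 wrongly flagged, 5 correctly flagged
    + [{"group": "B", "actual": 0, "predicted": 0}] * 30
    + [{"group": "B", "actual": 0, "predicted": 1}] * 8
    + [{"group": "B", "actual": 1, "predicted": 1}] * 5
)

accuracy, fpr = rates(records)
print(f"Overall accuracy: {accuracy:.0%}")     # 90%, looks respectable
print(f"False positive rate by group: {fpr}")  # A is about 4%, B is about 21%
```

A purely technical review might stop at the 90% accuracy figure; risk-focused training is what prompts teams to also check how the errors are distributed.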

Top Management & Oversight Bodies

This group includes C-suite executives, board members, and senior managers.

Their AI literacy needs include:

  • Understanding the strategic implications of AI in financial services
  • Overseeing AI governance and risk management frameworks
  • Making informed decisions about AI investments and implementation

Challenge: this group may itself be responsible for ensuring AI literacy, so the level of literacy across the rest of the organisation may cascade from here.

Employees Working with the AI System

Front-line staff and middle managers who directly interact with AI systems. In banking and insurance, this could include customer service representatives using AI-powered support systems, or fraud risk analysts using the outputs of machine learning triage systems.

Their AI literacy needs may include:

  • Understanding how AI impacts their daily work and decision-making processes
  • Recognising potential biases or errors in AI outputs
  • Effectively checking AI-driven outputs before using them to communicate with customers

All Other Employees

Staff who may not directly work with AI but need a basic understanding of its impact on the organisation.

AI Subjects

Customers and end-users. Their AI literacy needs will depend on the nature of the systems used.

 

Not a one-size-fits-all concept

Each group has unique needs and responsibilities when it comes to understanding and interacting with AI systems.

Even "AI people" need AI literacy training. This is especially important to broaden awareness of risks and balance out a purely technical focus.

By tailoring AI literacy efforts to these distinct personas, banks and insurance companies can ensure all stakeholders have the appropriate level of understanding.    

 


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.


 
