We touched on the importance of algorithm/AI awareness and education in an article last month.
Article 4 of the EU AI Act is dedicated to AI literacy.
In this recent podcast episode, Ryan Carrier explained two important things to note about this Article:
The International Association of Insurance Supervisors is developing a guidance paper on the supervision of AI, with the draft version containing various references to education, training, and awareness.
So, this is an important topic.
To be effective, AI literacy efforts need to be tailored to the audience. What the data scientists need to be aware of is quite different to what most employees need to know. So, the training will need to be differentiated.
In the same podcast episode, Ryan Carrier explained one way to solve this: ForHumanity developed distinct personas.
The personas represent different levels of interaction with AI systems and corresponding literacy needs.
Let’s explore each of the personas, noting that some of these may look different for you, depending on your jurisdiction, the types of systems you’re implementing, etc.
AI Leaders typically include AI project managers, data scientists and machine learning engineers.
Responsibilities include:
Challenge: this group is often neglected when it comes to risk-focused training, possibly because it is assumed that they are already aware, which is often not the case. Some data science professionals are very technically focused, so you may come across models that are "accurate" but exhibit bias.
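To make that last point concrete, here is a minimal, hypothetical sketch (not from ForHumanity's material; the synthetic data, feature names, and use of scikit-learn are assumptions purely for illustration). It shows how a model can report respectable overall accuracy while producing a much higher false positive rate for one group, because the feature it relies on is systematically shifted for that group.

```python
# Hypothetical illustration with synthetic data: a model can look "accurate"
# overall while producing very different error rates for two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                               # 0 = group A, 1 = group B
signal = rng.normal(0, 1, n)                                # the genuinely predictive factor
proxy = signal + 1.0 * group + 0.3 * rng.normal(0, 1, n)    # observed feature, shifted for group B
y = (signal + 0.3 * rng.normal(0, 1, n) > 0).astype(int)    # outcome depends on signal only

# The model only sees the proxy feature, not the underlying signal or the group.
model = LogisticRegression().fit(proxy.reshape(-1, 1), y)
pred = model.predict(proxy.reshape(-1, 1))

print(f"Overall accuracy: {accuracy_score(y, pred):.2f}")

# False positive rate per group: how often a true negative gets flagged.
for g, label in [(0, "A"), (1, "B")]:
    mask = (group == g) & (y == 0)
    print(f"Group {label} false positive rate: {pred[mask].mean():.2f}")
```

Overall accuracy alone would not reveal the gap; it only shows up once the error rates are broken down by group, which is exactly the kind of awareness that risk-focused training for this persona should build.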
This group includes C-suite executives, board members, and senior managers.
Their AI literacy needs include:
Challenge: this group might be responsible for ensuring AI literacy, so the level of literacy across the organisation may cascade from here.
Front-line staff and middle managers who directly interact with AI systems. In banking and insurance, this could include customer service representatives using AI-powered support systems, or fraud risk analysts using the outputs of machine learning triage systems.
Their AI literacy needs may include:
Staff who may not directly work with AI but need a basic understanding of its impact on the organisation.
Customers and end-users. Their AI literacy needs will depend on the nature of the systems used.
Each group has unique needs and responsibilities when it comes to understanding and interacting with AI systems.
Even "AI people" need AI literacy training. This is especially important to broaden awareness of risks and balance out a purely technical focus.
By tailoring AI literacy efforts to these distinct personas, banks and insurance companies can ensure all stakeholders have the appropriate level of understanding.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.