
Bridging the purpose-risk gap: Customer-first algorithmic risk assessments

TL;DR
• Banks and insurers sometimes assess AI risks through the lens of business concerns and regulatory obligations, rather than customer outcomes.
• By reframing risk assessments to prioritise customer outcomes, Financial Services organisations can better align with their stated purpose.

 

Banks and insurers sometimes lose sight of their customer-centric purpose statements when assessing AI risks, focusing instead on business concerns and regulatory obligations.

Regulators are noticing this disconnect.

This article aims to outline why the disconnect happens and how we can fix it.

By reframing risk assessments, we can better serve customers and align with our purpose.

Listen to the audio (human) version of this article - Episode 13 of Algorithm Integrity Matters

What Regulators Are Saying

A recent ASIC report on AI governance (Report 798) highlights a concerning trend.

[ASIC - the Australian Securities and Investments Commission - is a regulatory body.]

ASIC reviewed how a sample of licensees (FS providers) are using and planning to use AI. The review considered how they are identifying and mitigating consumer risks, and their governance arrangements.

In the resulting report, one of the findings related to the perspective used in assessing risks.

Finding 5 of the report states:

"Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias."

This observation underscores the need for FS organisations to refocus their risk assessments on customer outcomes.

Financial Services Purpose Statements

A consistent theme among purpose statements from major banks and insurers is that customers are either the priority or the singular focus.

Common elements include:

  • Improving customers' financial well-being
  • Providing exceptional customer service
  • Building trust and long-term relationships
  • Empowering customers to achieve their financial goals

It should follow, naturally, that risk assessments focus on customers – protecting customers, maintaining customer value, etc.

But based on the finding in the ASIC report, and anecdotally, this sometimes goes awry.

 

Why We Deviate from Our Purpose

There are several reasons for the deviation. Among them are:

  1. Operational Priorities: Immediate tangible risks often take precedence, especially when we have limited resources.
  2. Specific Regulations: A focus on individual regulatory requirements can overshadow broader considerations.
  3. Complexity: Algorithmic risks can seem complex, leading to a narrow focus on technical aspects.
  4. Translation: Purpose statements are often high-level and difficult to translate into operational terms.
  5. Maturity: As algorithmic risks emerge, initial risk responses may rely on theoretical, templated risk frameworks.

This is understandable.

And it is not new.

We have competing priorities, we get sidetracked, we have lots of regulations to deal with.

But we must recognise this, work to balance these pressures, and get back to our purpose.

 

Realigning with Purpose

To bridge this gap, we need to consider our purpose from various angles.

Now, everything can flow directly or indirectly from the customer. But the link is not always easy to make, so it helps to break it down a bit.

Typically, this means the customer as the focal point, then regular/internal business risk, and then compliance and regulatory expectations.

Let's consider what this could be for fairness – one of the 10 key aspects of algorithm integrity.

A business-focused risk statement might be:

Algorithmic systems may not comply with anti-discrimination regulations, exposing the organisation to legal and reputational risks.

A customer-focused risk statement might be:

Algorithmic systems may discriminate against certain customer segments, thereby not treating customers fairly.

A purpose-aligned statement could be:

How we manage fairness in algorithmic systems may result in:

  1. discriminating against certain customer segments, thereby not treating customers fairly. (customer)
  2. overcompensating for fairness, resulting in ineffective practices. For example, if we drop age when calculating an insurance premium, our pricing could be too low for the associated risk (a toy illustration follows this list). (business)
  3. not meeting human rights obligations or other related regulatory expectations. (compliance)
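To make item 2 concrete, here is a toy illustration with invented numbers. It assumes, purely for the sketch, that expected claims cost rises with age; if the premium ignores age and charges everyone the portfolio average, some segments end up paying less than their expected cost. This is a sketch of the arithmetic only, not actuarial guidance.

```python
# Toy illustration (invented numbers): dropping a rating factor can
# underprice some segments. Here, expected annual claims cost rises with
# age, but the premium is age-blind: everyone pays the portfolio average.

expected_cost = {"18-39": 400.0, "40-64": 600.0, "65+": 900.0}  # hypothetical
customers = {"18-39": 500, "40-64": 300, "65+": 200}            # hypothetical

total_cost = sum(expected_cost[band] * n for band, n in customers.items())
flat_premium = total_cost / sum(customers.values())  # age-blind premium: 560

for band, cost in expected_cost.items():
    gap = flat_premium - cost
    label = "underpriced" if gap < 0 else "overpriced"
    print(f"{band}: premium {flat_premium:.0f}, expected cost {cost:.0f} "
          f"({label} by {abs(gap):.0f})")
```

With these numbers, the 65+ segment is underpriced by 340 per policy: the fairness intervention has a real pricing consequence that needs to be managed, not ignored.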

Note: each of the three items above will directly or indirectly affect customers.
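Bringing the three lenses together: below is a minimal sketch of how a purpose-aligned risk statement could be captured in a risk register, with the customer lens deliberately first. The class and field names are hypothetical, not drawn from any particular GRC tool, and the wording is lifted from the fairness example above.

```python
# Minimal sketch of a purpose-aligned risk register entry.
# All names are hypothetical; the ordering (customer first) is the point.
from dataclasses import dataclass


@dataclass
class PurposeAlignedRisk:
    aspect: str            # algorithm integrity aspect, e.g. "fairness"
    customer_lens: str     # the customer outcome comes first
    business_lens: str     # then regular/internal business risk
    compliance_lens: str   # then regulatory expectations

    def statements(self) -> list[str]:
        # Customer first, consistent with the stated purpose.
        return [
            f"(customer) {self.customer_lens}",
            f"(business) {self.business_lens}",
            f"(compliance) {self.compliance_lens}",
        ]


fairness = PurposeAlignedRisk(
    aspect="fairness",
    customer_lens="Algorithmic systems may discriminate against certain "
                  "customer segments, thereby not treating customers fairly.",
    business_lens="Overcompensating for fairness may result in ineffective "
                  "practices, e.g. underpricing risk if age is dropped from "
                  "premium calculations.",
    compliance_lens="We may not meet human rights obligations or other "
                    "related regulatory expectations.",
)

for statement in fairness.statements():
    print(statement)
```

Keeping the customer statement first, and mandatory, makes the purpose explicit every time a new algorithmic risk is assessed.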

Rinse and repeat for the other aspects of algorithm integrity. Some, like Privacy, may not need reframing.

If we assess algorithm risks with a customer-centric focus, we have a framework that aligns with our stated purpose, while continuing to manage regular business risks and meet regulatory expectations.

As AI and algorithmic systems evolve, a customer-first perspective will help us innovate sustainably for long-term success.

Putting customers first when assessing risks helps ensure that our use of algorithms aligns with, and serves, our purpose.

 

 
 

Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.