TL;DR • Explainability – being able to explain how an algorithmic system reaches its decisions – is a core element of algorithmic integrity. • It matters for customer trust, legal compliance and better decisions. • Key challenges include complex algorithms, complicated processes, privacy and confidentiality, and human interpretability.
Algorithmic System Integrity: Explainability (Part 1)
We need algorithmic systems that people can trust.
The ability to explain how algorithms work – a.k.a. explainability – can help build this trust.
When an AI system denies a loan or flags a claim, many people (e.g., customers, regulators, employees) need to know why.
This is part 1 of a series of articles that explore this aspect of integrity.
Here we discuss why it matters and what some of the challenges are.
Future articles will cover potential solutions to each of the challenges.
Why Explainability Matters
- Customer Trust: The absence of explanations creates frustration. Clear reasons help customers accept outcomes, reduce complaints and improve trust.
- Legal Compliance: New rules require financial services (FS) organisations to explain automated decisions. There may even be fines for non-compliance.
- Better Decisions: When staff understand how an algorithmic system works, and how decisions are made, they can catch errors faster.
However, achieving explainability in complex algorithms—especially those leveraging AI or machine learning—presents unique challenges.
The Challenges
Here are four key explainability challenges:
1. Complex Algorithms
Advanced AI models can produce more accurate results than simpler models.
But they often operate as "black boxes." The internal decision-making process is opaque.
This makes it difficult to trace how inputs lead to specific outputs.
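To make this concrete, here is a minimal, hypothetical sketch (using scikit-learn; the data and feature names are invented). A simple scorecard-style model exposes one readable weight per input, while a boosted ensemble spreads its logic across hundreds of trees, so there is no single weight to point at when explaining one decision.

```python
# Illustrative sketch only: comparing how traceable two model types are.
# Data and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["income", "debt_ratio", "tenure_months", "missed_payments"]

# Simple model: each coefficient maps one input to a direction and a weight.
simple = LogisticRegression().fit(X, y)
for name, coef in zip(features, simple.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# More complex model: hundreds of trees, no single weight per input,
# so tracing why one applicant was declined is much harder.
complex_model = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print("Trees in ensemble:", len(complex_model.estimators_))
```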
2. Complicated Processes
Algorithmic FS systems often involve intricate workflows, combining multiple data sources and transformations.
Consider how much work was involved in unpacking data flows for capital adequacy projects – Basel or Solvency – and the resulting spaghetti. These flows are both wide (many systems within the flow) and deep (multiple data elements/variables per system).
Some of these data flows cross over with credit scoring processes, pricing models and claims flows. So, the same complications apply. Data lineage was enough of a headache with "simple" algorithms. And now the flows include more complex algorithms and external data sources.
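As a rough illustration of why lineage is hard even in a toy setting, the sketch below represents a simplified flow as a graph and walks back from one output to every upstream element. The system and field names are invented; real flows are far wider and deeper.

```python
# Illustrative sketch: tracing data lineage through a simplified flow.
# System and field names are hypothetical.
lineage = {
    "credit_score": ["bureau_feed", "application_form"],
    "affordability": ["application_form", "transaction_history"],
    "fraud_flag": ["transaction_history", "external_watchlist"],
    "decision": ["credit_score", "affordability", "fraud_flag"],
}

def upstream_sources(node, graph, seen=None):
    """Walk back from an output to every contributing upstream element."""
    seen = set() if seen is None else seen
    for parent in graph.get(node, []):
        if parent not in seen:
            seen.add(parent)
            upstream_sources(parent, graph, seen)
    return seen

print(sorted(upstream_sources("decision", lineage)))
# Even this toy flow already pulls in seven upstream elements.
```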
3. Privacy and Confidentiality
Providing detailed explanations of AI decisions might risk exposing sensitive customer information.
It can reveal proprietary algorithms.
In the case of fraud systems, we often don’t want to reveal how exactly we identify potential fraud – this knowledge can be used by bad actors to beat the system.
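One pattern for balancing this (sketched below, with invented feature names and scores) is to keep a full explanation internally but filter out confidential drivers, such as fraud-model features, before anything is shared externally.

```python
# Illustrative sketch: filtering an internal explanation before sharing it.
# Feature names and contribution scores are hypothetical.
internal_explanation = {
    "debt_ratio": 0.42,             # shareable driver of the decision
    "missed_payments": 0.31,        # shareable
    "device_velocity_score": 0.18,  # fraud-model feature: keep internal
    "network_link_score": 0.09,     # fraud-model feature: keep internal
}

CONFIDENTIAL = {"device_velocity_score", "network_link_score"}

def customer_view(explanation, confidential):
    """Return only the drivers we are willing to disclose externally."""
    return {k: v for k, v in explanation.items() if k not in confidential}

print(customer_view(internal_explanation, CONFIDENTIAL))
# Internal reviewers still see the full explanation; customers do not.
```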
4. Human Interpretability
The challenge is ensuring that explanations are not only technically accurate but also understandable to different audiences: technical people, and non-technical stakeholders such as customers and regulators.
Technical people need technical explanations. Perhaps with some translation from system field names to more meaningful names.
Non-technical people need plain language explanations. Not always easy to achieve when the starting point is a complex mathematical concept. And converting to plain language can mean that important context is lost.
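As a small, hypothetical sketch of the translation step mentioned above, the example below maps technical field names to plain-language reason statements and returns only the strongest drivers. The field names and wording are invented; real reason statements would need careful review so that context is not lost.

```python
# Illustrative sketch: translating technical drivers into plain language.
# Field names, scores and wording are hypothetical.
REASON_TEXT = {
    "debt_ratio": "Your existing debt is high relative to your income.",
    "missed_payments": "Recent payments on other credit were missed.",
    "tenure_months": "Your account history with us is relatively short.",
}

def plain_language_reasons(drivers, top_n=2):
    """Pick the strongest drivers and return customer-friendly wording."""
    ranked = sorted(drivers.items(), key=lambda kv: kv[1], reverse=True)
    return [REASON_TEXT.get(name, name) for name, _ in ranked[:top_n]]

print(plain_language_reasons(
    {"debt_ratio": 0.42, "missed_payments": 0.31, "tenure_months": 0.12}
))
```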
The next article in this series will address the first challenge – complexity – with some solutions.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.
