Algorithmic System Integrity: Explainability (Part 6) - Interpretability

Written by Yusuf Moolla | 22 Apr 2025
TL;DR
Technical stakeholders need detailed explanations.
Non-technical stakeholders need plain language.
Visuals, layering, literacy, and feedback are among the techniques we can use.

This is the final article in this series, which started here.

Challenge Recap

We need explanations that are both accurate and understandable.

Non-technical people typically need plain-language explanations. This is not always easy to achieve when the starting point is a complex mathematical concept. Converting to plain language can mean that important context is lost.

Technical people often need technical explanations. But there are nuances, such as the need to translate system field names into more meaningful names. Some complexity needs to be retained for certain purposes and removed for others.
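
One practical aid for the field-name nuance is a simple glossary that maps raw system field names to meaningful labels, maintained alongside the system. Here is a minimal sketch in Python; the field names and labels are hypothetical:

# Hypothetical glossary mapping raw system field names to meaningful labels.
FIELD_GLOSSARY = {
    "cust_tenure_m": "Customer tenure (months)",
    "avg_bal_90d": "Average balance over the last 90 days",
    "n_prod_held": "Number of products held",
}

def readable(field_name):
    # Fall back to the raw name so unmapped fields remain visible.
    return FIELD_GLOSSARY.get(field_name, field_name)

for field in ["cust_tenure_m", "avg_bal_90d", "new_field_x"]:
    print(field, "->", readable(field))

Keeping the mapping in one place means technical outputs can be translated consistently wherever they are surfaced.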

Solutions

Here are four approaches that can help:

1. Visuals

For non-technical stakeholders, visuals can help simplify complex concepts, for example, explaining AI decision-making processes through flowcharts or diagrams. If done well, they can reduce the loss of context that often comes with plain-language translation.

For technical stakeholders, visuals can help with understanding complex flows or interactions.
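
As one concrete illustration, a bar chart of the main factors behind a single decision is often easier to digest than a table of coefficients. Here is a minimal sketch using matplotlib; the factor names and contribution values are hypothetical:

import matplotlib.pyplot as plt

# Hypothetical factors and their contributions to a single decision.
factors = ["Customer tenure", "Recent missed payments", "Account balance", "Product holdings"]
contributions = [0.30, -0.45, 0.15, 0.10]

# Horizontal bars, coloured by direction, so a non-technical reader can see
# at a glance which factors pushed the decision up and which pushed it down.
colors = ["tab:green" if c >= 0 else "tab:red" for c in contributions]
plt.barh(factors, contributions, color=colors)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to the decision")
plt.title("Main factors behind this decision")
plt.tight_layout()
plt.show()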

2. Layering

Layered explanations can work for different levels of technical expertise.

We could, for example, start with high-level summaries for non-technical stakeholders and keep building on these until we reach the detailed technical explanations for technical personnel. Or we could start with the details and pare back until we have the version that suits non-technical stakeholders.

Importantly, we still need each layer to be accurate and consistent with the other layers. We need to avoid contradictions, because they erode trust. That is a real problem, since explainability is (in part) about enhancing trust.
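
One way to reduce the risk of contradictions is to store the layers together, as a single record derived from the same underlying facts, rather than writing each version independently. Here is a minimal sketch; the layer names and content are hypothetical:

from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    # One explanation held at several levels of detail, kept together so the
    # layers can be reviewed side by side for consistency.
    summary: str    # for non-technical stakeholders
    detailed: str   # for reviewers who want the reasoning
    technical: str  # for developers and model validators

explanation = LayeredExplanation(
    summary="The application was declined mainly because of recent missed payments.",
    detailed=("Recent missed payments outweighed positive factors such as "
              "customer tenure and account balance."),
    technical=("missed_payments_12m contributed -0.45 to the score; "
               "cust_tenure_m (+0.30) and avg_bal_90d (+0.15) did not offset it."),
)

# Each audience gets the layer that suits them, drawn from the same record.
print(explanation.summary)

Because all layers live in one record, a reviewer can check them against each other before they are published.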

3. Literacy

Training and education programs can improve stakeholders' understanding of AI systems.

These can span a range of levels; for example, workshops on AI basics for non-technical stakeholders, advanced technical training for developers, and various levels in between.

This article explains how to tailor literacy efforts to different roles, including ForHumanity’s five distinct personas.

4. Feedback

Feedback mechanisms can be used to gauge the effectiveness of explanations.

This can help identify where explanations are unclear or ambiguous.
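
A lightweight mechanism is to attach a clarity rating to each explanation shown, and then look for the explanations with the lowest average ratings. Here is a minimal sketch; the explanation identifiers and the threshold are hypothetical:

from collections import defaultdict
from statistics import mean

# Clarity ratings (1 = unclear, 5 = clear) collected per explanation.
ratings = defaultdict(list)

def record_feedback(explanation_id, clarity):
    ratings[explanation_id].append(clarity)

record_feedback("decline-reason-v2", 2)
record_feedback("decline-reason-v2", 1)
record_feedback("pricing-factors-v1", 4)

# Flag explanations whose average clarity falls below a chosen threshold.
unclear = {eid: mean(r) for eid, r in ratings.items() if mean(r) < 3}
print(unclear)  # {'decline-reason-v2': 1.5}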

A note on literacy

The third item, literacy, is the topic of a past article, and is likely to be the topic of future articles.

Even among highly capable people, literacy is not sufficiently appreciated.

It might be one of the most important topics in Algorithmic System Integrity.

Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.