TL;DR • Why Explainability Matters: It builds trust, is needed to meet compliance obligations, and...
Algorithmic System Integrity: Explainability (Part 4)
In a previous article, we explored the concept of explainability, its importance, and four challenges. We then addressed the first two of those challenges: complexity and complicated processes.
Before we dive into the third challenge, let’s pause to consider “explainability” in practical terms.
Explainability helps build trust in AI systems. But there is no single definition.
This brief article offers a straightforward discussion of the underlying intent and the practical implications.
Prevailing definition?
Several definitions have been proposed, some after considerable deliberation.
Despite these efforts, there is no universal consensus on a single definition.
Rather than creating yet another definition, we'll focus on key practical considerations.
Five Key Considerations
By working through the following considerations, we establish a practical foundation for explainability:
1. Be Able to Explain
None of this will matter if we can't explain the system's overall behaviour ("global" explanations) or its individual decisions ("local" explanations).
This includes using some of the solutions we’ve already described to address complexity and complicated processes.
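To make "global" and "local" concrete, here is a minimal sketch, assuming a scikit-learn logistic regression used for a hypothetical credit decision; the feature names, synthetic data, and decision framing are illustrative only and are not drawn from any specific system.

```python
# Minimal sketch: "global" vs "local" explanations for a hypothetical
# credit-decision model. Assumes scikit-learn; feature names are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "missed_payments", "account_age"]

# Synthetic stand-in for real application data.
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X = StandardScaler().fit_transform(X)  # scale so coefficients are comparable
model = LogisticRegression().fit(X, y)

# Global explanation: which features drive the system's decisions overall?
global_importance = dict(zip(feature_names, np.abs(model.coef_[0]).round(3)))
print("Global importance:", global_importance)

# Local explanation: how did each feature contribute to one specific decision?
applicant = X[0]
local_contributions = dict(zip(feature_names, (model.coef_[0] * applicant).round(3)))
print("Local contributions (log-odds):", local_contributions)
print("Decision:", "approve" if model.predict(applicant.reshape(1, -1))[0] == 1 else "decline")
```

For more complex models, model-agnostic tools (for example, permutation importance or SHAP-style attributions) play the same two roles; the point is simply that both levels of explanation need to be available before the remaining considerations matter.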
2. Consider the Context
Are the “systems” producing fully automated decisions, or are there humans in the loop?
Or are the system's outputs used as inputs to manual processes, rather than directly determining the final decisions?
Note: if the output is fed into a human process but adopted as the default decision, it could be considered a systematic decision in practice.
3. Consider the Purpose
What is the algorithmic system being used for?
For example, fraud detection systems require internal explanations but may limit external transparency to avoid exposing logic to fraudsters.
The purpose influences both the type and level of explanations needed.
4. Consider the Audience's Needs
Stakeholders have varying needs:
- Developers need technical details to debug and refine
- End-user employees need to understand how decisions are influenced, but not all the technical specifics
- Auditors and regulators need evidence of compliance and transparency
- Customers need plain-language explanations they can act on
Tailoring the explanations ensures they are relevant and understandable to each audience.
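As an illustration of that tailoring, here is a minimal sketch that renders the same underlying decision for different audiences; the decision record, model version, audiences, and wording are all hypothetical.

```python
# Minimal sketch: presenting one decision differently for different audiences.
# The decision record, model version, and wording are hypothetical.
decision = {
    "outcome": "decline",
    "top_factors": [("debt_ratio", 0.91), ("missed_payments", 0.64)],
    "model_version": "credit-risk-2.3",  # hypothetical identifier
}

def explain_for(audience: str, d: dict) -> str:
    if audience == "developer":
        # Full technical detail, useful for debugging and refinement.
        return f"{d['model_version']} factor weights: {d['top_factors']}"
    if audience == "auditor":
        # Traceable evidence of what drove the outcome.
        factors = [name for name, _ in d["top_factors"]]
        return f"Outcome '{d['outcome']}' from {d['model_version']}; dominant factors: {factors}"
    if audience == "customer":
        # Plain language the customer can act on.
        return ("Your application was declined mainly because of your current debt level "
                "and recent missed payments. Reducing either may change the outcome.")
    return "No explanation template defined for this audience."

for audience in ("developer", "auditor", "customer"):
    print(f"{audience}: {explain_for(audience, decision)}")
```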
5. Decide Whether, What, and How
With these core matters considered, and the ability to explain in place, we ask:
- Under what circumstances will we provide explanations?
- Where we decide to explain, what information will we provide?
- How do we communicate the explanations effectively?
Next
Now that we’ve laid this groundwork, we’re better prepared to tackle the complexities of privacy and confidentiality, ensuring that AI systems are both transparent and secure.
The next article will delve into privacy.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.
