Algorithmic System Integrity: Explainability (Part 5) - Privacy & Confidentiality
TL;DR
- Explainability is necessary to build trust in AI systems.
- There is no universally...
In a previous article, we explored the concept of explainability, its importance, and four challenges.
We then addressed the first and second challenges (complexity and complicated processes) and considered "explainability" in practical terms.
In this article, we discuss the third challenge: Privacy and Confidentiality.
Challenge Recap
Algorithmic systems create unique challenges when balancing explainability with privacy and confidentiality.
Among these are:
1. Protecting Sensitive Customer Information
Providing detailed explanations of AI decisions can risk exposing sensitive customer information. For instance, in banking, revealing how a credit score is calculated could inadvertently disclose personal financial data.
2. Preserving Proprietary Algorithms
Detailed explanations can reveal proprietary algorithms, enabling competitors to shortcut the development process. Perhaps more importantly, they can enable customers or prospective customers to game the system.
3. Securing Fraud Detection Systems
Revealing how potential fraud is identified can be risky. Bad actors can use this knowledge to manipulate the system. Fraud detection systems typically rely on identifying specific patterns. If fraudsters know what we're looking for, and how, they could alter their behaviour, or the information they provide, to evade detection.
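One common mitigation, sketched below in Python (the rule names, message wording, and function are hypothetical, not a real fraud system's API), is to keep the detailed detection reasons in an internal audit trail while the outward-facing message stays deliberately generic:

```python
import logging

# Hypothetical sketch: full detail goes to an internal audit log;
# the customer-facing message reveals nothing about detection logic.
audit_log = logging.getLogger("fraud_audit")
logging.basicConfig(level=logging.INFO)

def review_transaction(txn_id: str, triggered_rules: list[str]) -> str:
    """Log full detail internally; return only a generic message externally."""
    # Internal record: precise enough for investigators and auditors.
    audit_log.info("txn=%s held; rules=%s", txn_id, ", ".join(triggered_rules))
    # External message: says nothing about what we look for, or how.
    return "This transaction is on hold pending a routine review."

print(review_transaction("txn-429", ["velocity_check", "geo_mismatch"]))
```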
Audience-specific considerations
One way to address these challenges is to tailor our explanations based on who we are explaining to.
Customers or Prospective Customers
We want to enhance trust and satisfaction and meet compliance expectations.
We use clear, concise language, with explanations that can be easily understood. We avoid technical jargon.
Our default expectation is that customers will do the right thing, but we don’t want to enable them to game the system either. There are a few things we can do here:
- limit explanations, especially for fraud systems
- explain the decision, but not necessarily the detailed sequence of steps
- keep records of data provided previously, so that we can identify changes that don't make sense (see the sketch after this list).
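To make the last point concrete, here is a minimal Python sketch, with hypothetical field names and thresholds, of how a new submission might be checked against previously provided data:

```python
from datetime import date

# Hypothetical sketch: flag implausible changes between what a customer
# told us previously and what they tell us now. Fields and thresholds
# are illustrative, not a real bank's rules.
PREVIOUS_SUBMISSIONS = {
    "cust-001": {"date": date(2024, 1, 15), "annual_income": 60_000, "dependants": 2},
}

def flag_suspicious_changes(customer_id: str, new_data: dict) -> list[str]:
    """Return human-readable flags for changes that don't make sense."""
    prior = PREVIOUS_SUBMISSIONS.get(customer_id)
    if prior is None:
        return []  # first submission; nothing to compare against

    flags = []
    # Income that jumps sharply between submissions warrants a closer look.
    if new_data["annual_income"] > prior["annual_income"] * 1.5:
        flags.append("Declared income rose more than 50% since last submission")
    # Dependants rarely decrease; a drop may indicate gaming of affordability checks.
    if new_data["dependants"] < prior["dependants"]:
        flags.append("Declared dependants decreased since last submission")
    return flags

print(flag_suspicious_changes("cust-001", {"annual_income": 95_000, "dependants": 1}))
```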
Front Line Staff
We want to enable front line staff to address customer queries.
They may receive the same information we give to customers.
If we provide more information to front line staff, we need to be clear about how to use and communicate it.
We also need to prevent leakage and misuse.
Developers
Developers are interested in system improvement and troubleshooting, so they need the details.
We restrict this to authorised personnel only, with the usual mechanisms to prevent leakage and misuse.
Ideally, we keep models as simple as possible, and we don’t overcomplicate processes.
Senior Management
Senior management need to ensure that systems meet customer expectations, support business objectives, and enable compliance.
Often this means high-level explanations that are transparent but simple, focused on enabling leaders to understand.
This can sometimes mean getting into the details.
Regulators and Auditors
We need to demonstrate compliance with laws and regulations, and show that decisions are transparent and justifiable.
We do this through explanations in unambiguous language that evidence compliance.
The details will vary depending on specific needs, but this group may also need to dive into the details.
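Pulling these audience distinctions together, here is a minimal Python sketch, with hypothetical roles, fields, and wording, of how a single decision record might yield a different level of explanation for each audience:

```python
from enum import Enum

# Hypothetical sketch: one decision record, several audience-specific views.
# Roles, field names, and wording are illustrative only.
class Audience(Enum):
    CUSTOMER = "customer"
    FRONT_LINE = "front_line"
    DEVELOPER = "developer"
    REGULATOR = "regulator"

DECISION = {
    "outcome": "declined",
    "plain_reason": "The declared income did not meet the minimum for this product.",
    "staff_guidance": "Customer may reapply with evidence of additional income.",
    "feature_attributions": {"income_ratio": -0.42, "credit_history_len": -0.11},
    "model_version": "credit-risk-v3.2",
    "policy_reference": "Responsible lending policy, section 4.1",
}

def explain(decision: dict, audience: Audience) -> dict:
    """Return only the explanation fields appropriate for the audience."""
    views = {
        Audience.CUSTOMER: ["outcome", "plain_reason"],
        Audience.FRONT_LINE: ["outcome", "plain_reason", "staff_guidance"],
        Audience.DEVELOPER: list(decision),  # full detail, authorised staff only
        Audience.REGULATOR: ["outcome", "plain_reason", "model_version", "policy_reference"],
    }
    return {k: decision[k] for k in views[audience]}

print(explain(DECISION, Audience.CUSTOMER))
```

The point is the separation, not the mechanism: the level of detail is a property of the audience, not of the decision itself.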
Next
The next article will delve into the fourth challenge: making sure that explanations can be understood.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.
