Algorithm Integrity Matters
A podcast for FS leaders who want to enhance fairness and accuracy in their use of data, algorithms, and AI.
Each episode explores existing and emerging challenges and solutions related to algorithmic integrity, including discussions on navigating independent audits.
The goal of this podcast is to give leaders the knowledge they need to ensure their data practices benefit customers and other stakeholders, reducing the potential for harm and upholding industry standards.
Episode 0:
Introduction
A brief intro to the podcast.
If you have topic or guest suggestions, feel free to reach out via email: info@riskinsights.com.au
Episode 1:
How reliable is the algorithm / workflow audit that you have commissioned?
Spoken (by a human) version of this article.
One common issue with audits is undue reliance.
Can you rely on the audit report to tell you what you need to know?
Could you be relying on it too much?
Episode 2:
Choice vs obligation: motivation shapes the effectiveness of your audit
Spoken (by a human) version of this article.
The motivation(s) for commissioning an audit can determine how effective it will be.
Often, our approach differs depending on whether we are forced to do something or choose to do it. Engagement and satisfaction levels are generally higher when we choose than when we are forced.
Episode 3:
Navigate Algorithm Audit Guidance: some aren't relevant to your context
Spoken (by a human) version of this article.
AI and algorithm audits help ensure ethical and accurate data processing, preventing harm and disadvantage.
However, the guidelines are not yet mature, and they are quite disparate.
This can make the audit process confusing and daunting: how do you wade through it all to find the information you need when deciding how to commission your audit?
Fortunately, there is a solution - narrowing the guidelines down, based on relevance.
Not all existing guidelines are universally applicable.
This article will help you distinguish between audit guidance that applies to your situation and guidance that may not be relevant to your industry, deployment, or organizational needs.
Episode 4:
Structuring the Audit Objective: 10 Key Aspects of Algorithm Integrity
Spoken (by a human) version of this article.
In Episode 1, we explored the challenges of placing undue reliance on audits.
One potential solution that we outlined is a clear scope, particularly regarding the audit objective.
In this episode, we focus on algorithm integrity as the broad audit objective.
While it’s easy to assert that an algorithm has integrity, confirming this assertion is a bit more complex.
To simplify this, the episode breaks it down into a set of key areas to consider.
Episode 5:
Equal vs Equitable: Algorithmic Fairness
Spoken (by a human) version of this article.
Fairness in algorithmic systems is a multi-faceted, and developing, topic. In episode 4, we explored ten key aspects to consider when scoping an algorithm integrity audit.
One aspect was fairness, with this in the description: "...The design ensures equitable treatment..."
This raises an important question. Shouldn't we aim for equal, rather than equitable treatment?
This episode aims to shed light on the distinctions between equal and equitable treatment in algorithmic systems, while acknowledging that our understanding of fairness is still developing and subject to ongoing debate.
Episode 6:
Balancing Security and Access for increased algorithmic integrity
Spoken (by a human) version of this article.
When we talk about security in algorithmic systems, it's easy to focus solely on keeping the bad guys out.
But there's another side to this coin that's just as important: making sure the right people can get in.
This article aims to explain how security and access work together for better algorithm integrity.
Episode 7:
Postcodes: Hidden Proxies for Protected Attributes
Spoken (by a human) version of this article.
In a previous article, we discussed algorithmic fairness, and how seemingly neutral data points can become proxies for protected attributes.
In this article, we'll explore a concrete example of a proxy used in insurance and banking algorithms: postcodes.
We've used Australian terminology and data, but the concept applies in most countries.
Using Australian Bureau of Statistics (ABS) Census data, the article demonstrates how postcodes can serve as hidden proxies for gender, disability status and citizenship.
Episode 8:
A Balanced Focus on New and Established Algorithms
Spoken (by a human) version of this article.
Even in discussions among AI governance professionals, there seems to be a silent “gen” before AI.
With the rapid progress - or rather, prominence - of generative AI capabilities, these have taken centre stage.
Amidst this excitement, we mustn't lose sight of the established algorithms and data-enabled workflows driving core business decisions. These range from simple rules-based systems to complex machine learning models, each playing a role in our operations.
In this episode, we'll examine why we need to keep an eye on established algorithmic systems, and how.
Episode 9:
Algorithmic Integrity: Don't wait for legislation
Spoken (by a human) version of this article.
Legislation isn't the silver bullet for algorithmic integrity.
Is it useful? Sure. It helps provide clarity and can reduce ambiguity. And once a law is passed, we must comply.
However, existing legislation may already apply, and new algorithm-focused laws can be too narrow or quickly outdated.
In short, legislation is helpful, but we need to know what we're getting: what it covers, what it doesn't, and so on.
Episode 10:
Fairness reviews: identifying essential attributes
Spoken (by a human) version of this article.
When we're checking for fairness in our algorithmic systems (incl. processes, models, rules), we often ask:
What are the personal characteristics or attributes that, if used, could lead to discrimination?
This article provides a basic framework for identifying and categorising these attributes.
Episode 11:
Deprovisioning User Access to Maintain Algorithm Integrity
Spoken (by a human) version of this article.
The integrity of algorithmic systems goes beyond accuracy and fairness.
In Episode 4, we outlined 10 key aspects of algorithm integrity.
Number 5 in that list (not in order of importance) is Security: the algorithmic system needs to be protected from unauthorised access, manipulation and exploitation.
In this episode, we explore one important sub-component of this: deprovisioning user access.
Link from article: U.S. Cybersecurity and Infrastructure Security Agency (CISA) advisory.
Episode 12:
Risk-Focused Principles for Change Control in Algorithmic Systems
Spoken (by a human) version of this article.
With algorithmic systems, a change can trigger a cascade of unintended consequences, potentially compromising fairness, accountability, and public trust.
So, managing changes is important. But if you use the wrong framework, your change control process may tick the boxes, but be both ineffective and inefficient.
This article outlines a potential solution: a risk-focused, principles-based approach to change control for algorithmic systems.
Resource mentioned in the article: ISA 315 guideline for general IT controls.
Episode 13:
Bridging the purpose-risk gap: Customer-first algorithmic risk assessments
Spoken (by a human) version of this article.
Banks and insurers sometimes lose sight of their customer-centric purpose when assessing AI/algorithm risks, focusing instead on regular business risks and regulatory concerns.
Regulators are noticing this disconnect.
This article aims to outline why the disconnect happens and how we can fix it.
Report mentioned in the article: ASIC, REP 798 Beware the gap: Governance arrangements in the face of AI innovation.
Episode 14:
External data - use with care
Spoken (by a human) version of this article.
Banks and insurers are increasingly using external data; using it beyond its intended purpose can be risky (e.g. discriminatory).
Emerging regulations and regulatory guidance emphasise the need for active oversight by boards and senior management to ensure responsible use of external data.
Keeping the customer top of mind, asking the right questions, and focusing on the intended purpose of the data, can help reduce the risk.
Law and guideline mentioned in the article:
- Colorado's External Consumer Data and Information Sources (ECDIS) law
- New York's proposed circular letter.
Episode 15:
Algorithm Integrity Documentation - Getting Started
Spoken (by a human) version of this article.
Documentation makes it easier to consistently maintain algorithm integrity.
This is well known.
But there are lots of types of documents to prepare, and often the first hurdle is just thinking about where to start.
So this simple guide is meant to help do exactly that - get going.