“AI slop”: word of the year and a hard-to-predict risk for banks and insurers
The Macquarie Dictionary Word of the Year* for 2025 is “AI slop”: poor-quality content generated by AI tools. It’s worth noting that the “AI” being referred to here is generative AI, not AI more broadly.
As an aside, “Roman Empire” was also on the shortlist. I can’t help but hum Crowded House’s Weather with You when I hear that, and the song is from the 90s. Funny how some associations stick.
Back to the point. For financial services leaders, “AI slop” means worrying about hallucinations, over‑confident but inaccurate advice and unchecked AI outputs. We (humans) tend to favour suggestions made by automated systems; this is commonly known as automation bias. When we’re under time pressure or assume “the system has already checked this”, we lower our guard. So we could easily end up with inaccurate AI-generated content in internal documents, or worse still in client comms, receiving less scrutiny than usual.
To illustrate this, consider two recent failures by a major firm: first in Australia, then more recently in Canada. The same firm sells “trustworthy AI” services and produces “go-to guides” for ethical and reliable AI.
Now, when that firm delivers public reports that appear to have been produced with poor quality control, we have a problem. And not just in start‑ups or side projects, but in core professional work from a brand‑name institution that used to pride itself on its quality. It took them a long time to earn that trust, and it will likely take a long time to earn it back.
If a global brand with a reputation for quality can stumble like this, smaller institutions under cost pressure are probably even more exposed.
But while the risk is hard to predict in a stochastic system with variable outputs, we can be deliberate with our checks and balances.
For banks and insurers
Our teams can act now by putting controls in place:
1. Hallucinations are failures, not accidents
If our models produce apparently authoritative but false references, that’s a failure of design, testing and governance, not simply a quirk of the technology. We need to focus on data quality, documentation and explainability, and treat hallucinations as something to be anticipated, tested for, and controlled.
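To make “tested for” concrete, here is a minimal sketch of one possible control: checking that every reference cited in an AI-generated draft appears in a register of verified sources before the draft goes any further. The register contents, the `[ref: ...]` citation format and the function name are all assumptions for illustration, not a prescribed implementation.

```python
import re

# Hypothetical register of verified sources (legislation, internal research notes, etc.).
APPROVED_SOURCES = {
    "Insurance Contracts Act 1984 (Cth)",
    "Internal Research Note 2025-014",
}

def unverified_references(draft: str) -> list[str]:
    """Return any cited reference in an AI-generated draft that is not in the register.

    Assumes the drafting tool wraps citations in a [ref: ...] tag; in practice the
    extraction step would match whatever citation format your own tooling uses.
    """
    cited = re.findall(r"\[ref:\s*([^\]]+)\]", draft)
    return [ref.strip() for ref in cited if ref.strip() not in APPROVED_SOURCES]

draft = (
    "Cover is limited under [ref: Insurance Contracts Act 1984 (Cth)] "
    "and supported by [ref: Smith & Jones (2023)]."
)
flagged = unverified_references(draft)
if flagged:
    # A flagged reference blocks release until a human has confirmed it actually exists.
    print("Needs human verification:", flagged)
```

A check like this won’t catch every hallucination, but it turns “the model sometimes invents sources” from an accepted quirk into a defect that gets detected and logged.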
2. Check AI outputs to maintain trust
A major firm in the news for issuing public AI‑slop reports shows how weak internal controls can undermine an entire brand. To avoid this, and maintain both public and internal confidence, we check those outputs. This means evaluating AI outputs with the same level of rigour that we would apply to human outputs.
3. Advice, disclosures, and other high‑risk use cases
Pay extra attention to consumer-facing outputs like product disclosures and responses to complaints, and high-risk documents like board papers. Any use of AI in these artefacts can benefit from structured checks: source verification, human sign‑off, and clear audit trails of what the AI generated and what the human changed.
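As a rough sketch only, an audit trail entry for an AI-assisted document could capture something like the record below. The field names and identifiers are placeholders, and a real implementation would sit inside your existing document-management and approval workflows rather than stand alone.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIDraftAuditRecord:
    """One entry in an audit trail for AI-assisted documents.

    Captures what the model produced, what the human changed, and who signed off,
    so the provenance of a disclosure or complaint response can be reconstructed later.
    """
    document_id: str
    ai_generated_text: str   # output as produced by the model
    human_edited_text: str   # text after human review and changes
    sources_verified: bool   # outcome of the source-verification check
    reviewer: str            # person accountable for sign-off
    signed_off_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example entry for an AI-assisted product disclosure.
record = AIDraftAuditRecord(
    document_id="PDS-2025-031",
    ai_generated_text="Cover begins on the policy start date...",
    human_edited_text="Cover begins on the policy commencement date...",
    sources_verified=True,
    reviewer="j.citizen",
)
```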
* Macquarie Dictionary Online, 2025, Macquarie Dictionary Publishers, an imprint of Pan Macmillan Australia Pty Ltd, www.macquariedictionary.com.au
Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.