AI Isn't Just Amplifying Familiar Risks
Is AI just amplifying the risks we've always had to deal with?
I’ve come across this view a few too many times now, and it's frustrating. It gives a false sense of security.
Before we dive deeper, consider that:
- There are genuinely new challenges, and they are the thrust of this article. Some are amplified versions of older risks; others emerge from how AI learns and behaves.
- Not all of it fits neatly into "digital risk." Some issues touch on ethics, reputation, and more. That's not unique; cyber risks can also damage reputation. But thinking purely through a digital lens limits how we prepare and respond.
- It's still early days. We don't yet have all the right answers. But we can ask better questions, continue learning, acknowledge the unknowns, and keep striving to close the gap.
The Myth: AI Only Amplifies Familiar Risks
It's an appealing, comforting idea, because it suggests that the same controls and governance structures we've relied on for decades will continue to keep us safe. Established governance frameworks (e.g., for cyber, privacy) do provide a foundation for managing digital risk, and it can be quicker and less disruptive to extend familiar controls than to start from a blank slate.
But legacy frameworks are not enough by themselves. Controls designed for static systems don't translate neatly to dynamic ones. Instead, we can focus on building on what works, aware of the limitations, and ready to enhance both our controls and our thinking.
Clinging to existing approaches alone could leave us exposed to fast-moving, complex threats. The framing isn't wrong so much as incomplete: it captures only part of the picture.
You could argue that these are not new classes of risk. Granted. But they are certainly quite different. And they need responses that most of us are not yet very familiar with.
The Amplification Story Breaks Down Very Quickly
Traditional machine learning magnified certain long-standing challenges: data privacy and security risks became sharper, but remained broadly recognisable. Yet even basic ML brought new problems that we haven't solved. For example:
- Bias and discrimination risks go beyond mere amplification. Unchecked, algorithmic bias can scale quietly, hardwiring human bias into decisions that appear objective. Models can also latch onto patterns that are correlative rather than causative, patterns humans would typically ignore or even miss (a minimal audit sketch follows this list).
- ML models can be opaque, with outcomes that can’t be explained easily. This is a governance challenge that is largely unseen in deterministic systems.
- Model drift adds another distinct behaviour: models degrade as the world moves away from the data they were trained on, something rule-based systems don't experience (see the drift sketch below).
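To make the bias point concrete, here is a minimal sketch of one simple audit: the "four-fifths" disparate impact ratio, computed over hypothetical decision data. The group labels, records, and the 0.8 threshold are illustrative assumptions, not a complete fairness assessment.

```python
# Minimal disparate impact check (the "four-fifths rule") over hypothetical
# decision records. Groups, outcomes, and threshold are illustrative only.

def selection_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Fraction of applicants in `group` with a positive decision."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical (group, approved) records; in practice these would come
# from a real decision log.
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rate_a = selection_rate(decisions, "A")  # 0.75
rate_b = selection_rate(decisions, "B")  # 0.25
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("Potential disparate impact: investigate the features driving the gap.")
```

A single ratio won't catch proxy variables or spurious correlations on its own, but it is the kind of check that can run continuously rather than once at deployment.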
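And for drift, a minimal sketch of one common monitoring approach: the Population Stability Index (PSI), comparing a feature's distribution at training time against recent production data. The synthetic data and the 0.2 alert threshold are assumptions for illustration.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a recent sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values beyond the reference range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6  # avoid division by zero in sparse bins
    e_pct, a_pct = np.clip(e_pct, eps, None), np.clip(a_pct, eps, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference snapshot
production = rng.normal(loc=0.5, scale=1.3, size=5_000)  # the world has moved

score = psi(training, production)
print(f"PSI = {score:.3f}")
if score > 0.2:  # a commonly used, but ultimately arbitrary, alert threshold
    print("Significant drift: investigate or retrain before trusting outputs.")
```

A rule-based system given the same shifted inputs would behave exactly as specified; a learned model quietly becomes wrong, which is why this kind of check has no real analogue in traditional controls.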
The Leap to Generative and Agentic AI
Then we have emerging generative and agentic systems:
- Generative AI doesn't just predict or classify; it creates. It can fabricate plausible but false information, confidently. It manufactures new problems: a spreadsheet doesn't hallucinate, but a generative model can produce fake customer records or invent legal precedent, wrapped in perfect language. Missteps by major consultancies and law firms show how fabricated outputs cause real reputational and financial damage (a simple guardrail sketch follows this list).
- Agentic AI introduces a new type of autonomy. Systems that can act, plan, and coordinate can behave in ways their designers didn't anticipate. When multiple agents interact, even if each is "safe" in isolation, their collective behaviour can be a problem, and one very different from what we have historically had to deal with (the toy simulation below shows how).
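A sketch of the kind of guardrail those fabricated-precedent incidents point to: before generated text goes out, check that every citation-shaped string in it actually exists in an authoritative index. The regex, the in-memory index, and the draft text are all hypothetical; real citation formats and sources vary widely.

```python
import re

# Hypothetical authoritative index; in practice this lookup would query a
# real case-law database, not an in-memory set.
KNOWN_CITATIONS = {"[2019] FCA 1841", "[2021] HCA 7"}

# Toy pattern for one medium-neutral citation style; real formats vary.
CITATION_RE = re.compile(r"\[\d{4}\]\s+[A-Z]+\s+\d+")

def unverified_citations(text: str) -> list[str]:
    """Return citation-shaped strings that are absent from the index."""
    return [c for c in CITATION_RE.findall(text) if c not in KNOWN_CITATIONS]

draft = ("As held in [2019] FCA 1841 and affirmed in [2022] FCA 999, "
         "the duty extends to ...")

suspect = unverified_citations(draft)
if suspect:
    print(f"Hold for human review; unverified citations: {suspect}")
```

The point isn't the regex; it's that generative output needs verification against ground truth the model cannot invent, a control that simply had no reason to exist for spreadsheets.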
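And the multi-agent point can be shown with a toy simulation: two pricing agents, each following a rule that looks harmless in isolation, jointly produce runaway prices, the same dynamic behind real pricing spirals observed on online marketplaces. The multipliers and starting prices are invented for illustration.

```python
# Two pricing agents, each "safe" in isolation:
#   Agent A slightly undercuts its competitor to win sales.
#   Agent B prices at a premium over a competitor it assumes is cheaper.
# Their interaction compounds (0.9983 * 1.2706 > 1), so prices diverge.

price_a, price_b = 20.00, 25.00
for day in range(1, 21):
    price_a = round(0.9983 * price_b, 2)  # A: just undercut B
    price_b = round(1.2706 * price_a, 2)  # B: premium over A
    if day % 5 == 0:
        print(f"day {day:2d}: A=${price_a:,.2f}  B=${price_b:,.2f}")
```

Neither rule violates its own specification; the risk lives entirely in the interaction, which is exactly where controls designed for single, static systems don't look.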
Why This Matters
This myth underestimates what we're dealing with. If we accept the view that AI risks are just amplified older risks, we'll apply controls built for static systems to manage dynamic technologies.
The danger is that we retrofit old policies to new technology and leave the blind spots unexamined. With regulators increasingly focused on these shifting risks, relying on yesterday's playbook isn't good enough: controls and audits designed for static systems will struggle to keep up.
The notion that "AI doesn't introduce new risks" offers false reassurance. Traditional ML may have stretched familiar ideas of risk management and introduced a handful of newer risks. Generative and agentic AI are not just amplifying existing threats; they’re redefining digital risk. Anything less than acknowledging that leaves us vulnerable.
A Way Forward?
We're all overwhelmed and trying to keep up, especially because both the risks and the solutions keep changing.
But we can't rely on what we've always done, and we can't wait for perfect answers either. We want to face the risk head-on, not just react to headlines. And there are already plenty of headlines.
So here are three things we can do now:
- Ask broad questions. Draw on perspectives from technology, ethics, legal, and reputation. Engage across our teams, and learn from other industries: their specifics may not carry over, but their experience can inform our thinking.
- Update, test, and adapt our controls. Build on what works, regularly challenging and refining (a minimal example follows this list).
- Acknowledge the unknowns. Approach them with healthy scepticism and ongoing conversation, continuously trying to find and close the gaps.
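On the second point, a minimal sketch of what an "update, test, and adapt" control can look like in practice: a plain automated test that fails the build whenever a monitored model-risk metric crosses a threshold. The metric functions and thresholds here are placeholders; the idea is that the control runs on every change, not once a year.

```python
# Model-risk controls expressed as automated tests (e.g. run under pytest
# in CI) rather than as a policy document reviewed annually.
# The metric sources and thresholds below are placeholders.

def latest_disparate_impact_ratio() -> float:
    # Placeholder: in practice, computed from the most recent decision log.
    return 0.84

def latest_feature_psi() -> float:
    # Placeholder: in practice, the drift check above over production data.
    return 0.12

def test_fairness_ratio_within_tolerance():
    assert latest_disparate_impact_ratio() >= 0.8, "disparate impact breach"

def test_input_drift_below_alert_threshold():
    assert latest_feature_psi() <= 0.2, "input drift exceeds alert threshold"

if __name__ == "__main__":
    test_fairness_ratio_within_tolerance()
    test_input_drift_below_alert_threshold()
    print("All model-risk controls passing.")
```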
Staying curious and open is just as important as any new control.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.