Successful use of algorithms relies on genuine human oversight.
The Insurance Council of Australia's recent report on AI adoption* emphasizes this. It calls out the risks of “over-reliance on automation” and “insufficient human oversight”.
Many high-profile failures show that simply having a human involved doesn't guarantee these risks are managed. Part of the problem lies in passive trust. To avoid falling into this trap, oversight needs to be real, not just a tick box.
When we first roll out a new system, we pay attention to how it works. We double-check results and ask questions. But after some time, if nothing bad happens, we relax. We rush through our checks. “Humans in the loop” become humans just going through the motions. It’s natural and understandable: there’s always another priority.
There are plenty of high-profile examples showing what happens when we stop checking.
Consulting firms have produced reports for government that were full of AI-generated citations, some completely fabricated. It’s reasonable to assume the reports passed review, but it appears that important details were not double checked. The same thing has happened in law, with AI-written legal briefs filled with fake cases. In many of these examples, the mistakes only came to light when someone with deep knowledge looked closely.
The ICA’s guidance on fraud detection explains that investigators need to retain final decision-making power. It also calls for reviews that include testing for bias: "Protected features and their proxies must be excluded from AI models, with thorough testing for bias embedded against protected attributes."
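To make “testing for bias against protected attributes” a little more concrete, here is a minimal sketch of one common approach: comparing a model’s favourable-outcome rates across groups defined by a protected attribute that was excluded from the model’s inputs but retained for testing. The column names (`approved`, `age_band`) and the 0.8 threshold are illustrative assumptions, not part of the ICA guidance or any specific standard.

```python
import pandas as pd

def selection_rate_by_group(results: pd.DataFrame, outcome: str, group: str) -> pd.Series:
    """Share of favourable outcomes within each group."""
    return results.groupby(group)[outcome].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()

# Illustrative data: model decisions joined back to a protected attribute
# kept out of the model's features but available for fairness testing.
results = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "age_band": ["<40", "<40", "<40", "<40", "40+", "40+", "40+", "40+"],
})

rates = selection_rate_by_group(results, outcome="approved", group="age_band")
ratio = disparate_impact_ratio(rates)

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb threshold, an assumption for this sketch
    print("Flag for human review: outcome rates differ materially across groups.")
```

A single ratio like this is only a starting point; in practice, reviews would also look at proxies for protected features and at error rates, not just approval rates.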
Having a human involved only helps if that person is alert and asks questions. They need to challenge results consistently, and be ready to stop the process or flag a problem if anything looks strange.
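One way to keep that final say real rather than nominal is to design the workflow so nothing is actioned without a recorded human decision and a reason. The sketch below is an illustration of that idea; the class, field, and case names are assumptions for the example, not a reference to any particular system.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ESCALATE = "escalate"  # stop the process and hand the case to a specialist

@dataclass
class ReviewRecord:
    case_id: str
    model_score: float
    reviewer: str
    decision: Decision
    reason: str  # reviewers must record why, not just click through

def apply_decision(record: ReviewRecord) -> bool:
    """Only a recorded human decision with a stated reason can action a case."""
    if not record.reason.strip():
        raise ValueError("A reviewer must give a reason; silent approval is not oversight.")
    return record.decision is Decision.APPROVE

# Usage: the model only recommends; the investigator holds the final say.
record = ReviewRecord(
    case_id="CLM-1042",
    model_score=0.91,
    reviewer="j.smith",
    decision=Decision.ESCALATE,
    reason="Score driven by a single unverified data point; needs a manual check.",
)
print(apply_decision(record))  # False: nothing proceeds without explicit approval
```

The point is not the code itself but the design choice it encodes: the default is to pause, and approval is an active, attributable step.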
This can be hard work, especially if the business case was unrealistic (e.g., based on vendor hype). If we expect big savings without the ongoing effort required for real oversight, we're likely to set ourselves up for disappointment.
With newer “AI” capabilities especially, we need to balance our sense of the potential against the real time and effort required to build trust.
In practice, some degree of passive trust is always a risk.
So we also need to proactively:
* Bratanova A., Kaur S., Banyard S., Chamikara M.A.P., Walker G., Chen H. & Hajkowicz S. (2025). AI for Better Insurance: Enhancing Customer Outcomes amid Industry Challenges. A Consulting report for the Insurance Council of Australia by CSIRO, Australia.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.