A sidestep from our usual focus on algorithmic integrity in financial services, because this issue touches us all.
We all know kids, or have kids, in school, college or university. With increasing use of AI, some schools are trying to work out how to detect AI-generated content. Reactions range from “OK to use it” to “use it and score zero”.
The problem is that AI detection tools are not that accurate. They often classify AI-generated content as human-written. More importantly, they flag genuinely human-written work as AI-generated, and this happens especially with writing from certain groups of students.
Stanford researchers highlighted this two years ago: GPT detectors are biased against non-native English writers.
In August 2024, TEQSA (the Australian Government’s Tertiary Education Quality and Standards Agency) released this report. It recommended that reliance on AI detectors be limited, specifically calling out that testing of these tools continually shows they are unreliable and tend to produce false results. This includes a well-publicised case where an AI detector flagged the Bible as having been written by ChatGPT.
Many other studies say the same.
In practice, there are some simple explanations for why this happens.
Unfortunately, the tools are often relied on without additional safeguards. The ABC reported on a recent incident where a student was accused of using AI. When staff aren't properly trained in these tools' limitations, students can get hurt.
Let’s consider an extreme case. A final-year student submits an assignment. The policy says that if it’s AI-generated, the student fails. That could mean repeating the year, or even exclusion from further study. The consequences can be brutal, affecting mental health, reputation, money, and even visas.
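To see why "mostly accurate" still hurts real students, here is a rough back-of-the-envelope sketch. The numbers are my own illustrative assumptions, not figures from any study, but they show how even a modest false positive rate means a large share of flagged students are innocent:

```python
# Illustrative sketch only: the rates below are assumptions chosen for
# illustration, not measurements of any particular AI detector.

students = 1_000             # assignments submitted in a cohort
ai_use_rate = 0.10           # assume 10% of submissions genuinely used AI
false_positive_rate = 0.05   # detector wrongly flags 5% of human-written work
true_positive_rate = 0.80    # detector catches 80% of AI-written work

ai_submissions = students * ai_use_rate
human_submissions = students - ai_submissions

true_flags = ai_submissions * true_positive_rate        # correctly flagged
false_flags = human_submissions * false_positive_rate   # wrongly accused

total_flags = true_flags + false_flags
share_innocent = false_flags / total_flags

print(f"Flagged submissions: {total_flags:.0f}")
print(f"Students falsely accused: {false_flags:.0f}")
print(f"Share of flagged students who are innocent: {share_innocent:.0%}")
```

Under these assumed numbers, roughly a third of the students flagged did nothing wrong, which is why a blanket "flagged means fail" policy is so dangerous.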
There are some simple things we can do to prepare.
I’ll be monitoring this and discussing it with my kids, hoping for the best, trying to prepare for the worst.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances.