You probably already know this intuitively. This article may help you explain why acceptable use policies are not enough on their own, and why we need other initiatives as part of our AI literacy efforts.
We’ll use the acronym AUP as shorthand for acceptable use policies, terms of use, terms of service, end user agreements, and other such policies.
AUPs are now very common, but they are simply not enough. There are two problems with relying on them.

The first problem concerns end users.
The typical AUP shows up when we sign up for a new service or first install a piece of software. Others are displayed each time we access a service: software that is updated frequently, for example, may require us to re-accept the agreement whenever it changes.
But very few people actually read them. For many services I sign up for, or when updating software, I scroll mindlessly to the bottom and hit Accept. I know that’s not what I should do, but who has time to read through it all?
The Terms of Service for one major short-form video platform run to roughly 7,000 words. To put that in context, the average page of a non-fiction book holds about 250 words, so the user must read the equivalent of 28 pages of legal language and then interpret it. As a control, this is not very effective.
The same applies to popular LLM providers. One of them has an 8,000-word commercial user services agreement. We could ask the LLM itself to summarise those 32 pages, but then we can’t be sure the summary is accurate or complete.
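These page counts are simple arithmetic, and the sketch below reproduces them. The word counts and the 250 words-per-page benchmark are the figures above; the ~200 words-per-minute reading speed is an illustrative assumption (legal prose is usually read more slowly than that).

```python
# Back-of-the-envelope: how much reading does a single AUP demand?
# Word counts and the 250 words-per-page benchmark are the article's
# figures; the 200 words-per-minute reading speed is an assumption.

WORDS_PER_PAGE = 250
WORDS_PER_MINUTE = 200  # assumed average reading speed

agreements = {
    "Short-form video platform ToS": 7_000,
    "LLM provider services agreement": 8_000,
}

for name, words in agreements.items():
    pages = words / WORDS_PER_PAGE
    minutes = words / WORDS_PER_MINUTE
    print(f"{name}: {words:,} words ≈ {pages:.0f} pages, "
          f"~{minutes:.0f} minutes of careful reading")

# Short-form video platform ToS: 7,000 words ≈ 28 pages, ~35 minutes of careful reading
# LLM provider services agreement: 8,000 words ≈ 32 pages, ~40 minutes of careful reading
```

Even at that optimistic reading pace, each agreement costs over half an hour of concentrated attention, which is exactly why most of us scroll to the bottom and hit Accept.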
How many users are going to read, understand and interpret these?
The second problem concerns those responsible for ensuring that risks are properly managed: boards, AI specialists, senior management, and others.
Given how long AUPs are, and given the heightened expectations placed on these roles relative to end users, we can’t rely on the AUP alone. Reading it once, if that even happens, will not suffice, especially without training to identify what can go wrong, which guardrails need to be put in place, and so on.
There’s nothing wrong with having AUPs, of course. We need them (perhaps without the legalese), but they’re not enough on their own. We also need broader awareness, training and upskilling.
For generative AI, this AI Baseline Guidance Review provides an example of what to do. Here is a summary of what it says is required to embed a ‘Responsible AI’ culture and mitigate Gen-AI risks:
The same principles apply to AI and algorithmic systems more generally.
Whether we’re dealing with credit scoring algorithms, pricing models, or fraud detection systems, we can’t rely on policies alone. The specific risks and controls will differ, but we need education, training, and practical guidance.
Disclaimer: The information in this article does not constitute legal advice and may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, and it may not apply to other contexts or other types of organisations.