Acceptable Use Policies alone are not enough

Written by Yusuf Moolla | 06 Aug 2025
TL;DR
• Acceptable Use Policies are typically too long and complex for most people to actually read.
• AUPs alone don't build literacy or understanding, and don't enable proper risk management.
• We also need training, education, and practical guidance.

 

You probably already know this intuitively. This article may help you to explain why acceptable use policies are not enough on their own, and why we really need other initiatives as part of our AI literacy efforts.

We’ll use the acronym AUP to cover acceptable use policies, terms of use, terms of service, end-user agreements, and other such policies.

There are two problems with relying on AUPs:

  1. End-users generally don’t read AUPs: they can be quite long, and most people accept them without reading through, much less understanding or interpreting them.
  2. Non-end-users (like senior management) face a similar challenge: AUPs need quite a bit of interpretation and discussion, especially when applied to these audiences, and they don’t address all the knowledge those audiences require.

 

Problem 1: end-users

AUPs are now very common, but they are simply not enough.

The typical AUP shows up when we sign up for a new service or access an existing one. We see some when we first install software or register for a new online service; others are displayed each time we access a service, for example, frequently updated software that requires us to re-accept the agreement whenever it changes.

But we know that very few people actually read them. For many services I sign up for, or when updating software, I scroll mindlessly to the bottom and hit Accept. I know this is not what I should do, but who has time to read through it all?

The Terms of Service for a major short-form video platform run to roughly 7,000 words. To put that in context, the average page of a non-fiction book holds about 250 words, so the user must read the equivalent of 28 pages of legal language, then interpret it. This is not very effective.

The same applies to popular LLM providers. One of them has an 8,000-word commercial user services agreement. We could use the LLM itself to summarise those 32 pages, but then we can’t be sure the summary is accurate or complete.
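As a back-of-the-envelope sketch, the page arithmetic above looks like this in Python (250 words per page is the assumption stated earlier; the function name is just for illustration):

    WORDS_PER_PAGE = 250  # assumed average for a non-fiction page

    def page_equivalent(word_count):
        # Convert a document's word count into an approximate page count.
        return word_count / WORDS_PER_PAGE

    print(page_equivalent(7000))  # video platform Terms of Service -> 28.0 pages
    print(page_equivalent(8000))  # LLM commercial services agreement -> 32.0 pages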

How many users are going to read, understand and interpret these?

 

Problem 2: non-end-users

Here we are referring to boards, AI specialists, senior management, and others responsible for ensuring that risks are properly managed.

Given how long AUPs are, and given the heightened expectations placed on these roles (relative to end-users), we can’t rely on the AUP alone. Reading it once, if that even happens, will not suffice, especially without training to identify what can go wrong and what guardrails need to be put in place.

 

What can we do instead (or in addition)?

There’s nothing wrong with having AUPs, of course. We need them (perhaps without the legalese), but they’re not enough on their own. We also need broader awareness, training, and upskilling.

For generative AI, this AI Baseline Guidance Review provides an example of what to do. Here is a summary of what it says is required to embed a ‘Responsible AI’ culture and mitigate Gen-AI risks:

  • A Gen-AI Acceptable Use Policy (AUP) that provides clarity on the requirements for adoption and rules of usage.
  • Ensuring the AUP is understood by everyone in the organisation.
  • Up-skilling to reap the benefits, e.g. prompt engineering, education about the risks of AI-augmented social engineering.
  • Specialist technical up-skilling for key SME roles to address risks.

The same principles apply to AI and algorithmic systems more generally. Whether we’re dealing with credit scoring algorithms, pricing models, or fraud detection systems, we can’t rely on policies alone. The specific risks and controls will differ, but we still need education, training, and practical guidance.


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.