TL;DR • Legislation and standards are helpful but not sufficient for ensuring algorithmic...
Glossaries for algorithms and AI don’t always agree (and that’s ok)
Definitions for common terms used to describe algorithms and AI vary in scope, depth and focus.
Regulators have their own language. Compliance and legal folk have theirs. Vendors have buzzwords. Data teams have their technical variations.
You probably sit somewhere in the middle, trying to work out what this means for your customers and your team’s day-to-day work.
Some definitions from two recent publications
Even the more serious reference sources don’t always agree. For example, this 2025 FSB report and the recently produced FS AI lexicon contain different sets of terms. There is some overlap in coverage, but even then the definitions have different emphases and wording.
Here are five examples to illustrate how they differ:
1. Traditional AI
FSB: “A suite of computational techniques that pre-date recent advances, such as GenAI.”
Lexicon: “Traditional AI, also referred to as symbolic or rule-based AI, is a subset of AI that focuses on performing discrete, preset tasks using predetermined algorithms and rules. These AI applications are designed to excel in a single activity or a restricted set of tasks, such as playing chess, diagnosing diseases, or translating languages.”
Difference: The FSB definition is broad and time‑based, while the Lexicon is specific about rule‑based, narrow‑task systems.
2. Explainability
FSB: “The ability of an AI model to provide clear and interpretable outputs or decisions.”
Lexicon: “Property of an AI system that enables a given human audience to comprehend the reasons for the system’s behavior; the ability to understand an AI system’s output and decision given certain inputs.”
Difference: FSB focuses on the model being interpretable, while the Lexicon stresses that explanations must make sense to a particular human audience.
3. Model Risk
FSB: “The potential for adverse consequences arising from decisions based on incorrect or misused models.”
Lexicon: “The potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. Model risk can be from individual models and be in the aggregate. Aggregate model risk is affected by interaction and dependencies among models; reliance on common assumptions, data, or methodologies; and any other factors that could adversely affect several models and their outputs.”
Difference: The FSB definition is quite simple, which can be a good thing. The Lexicon adds detail on aggregate model risk, dependencies and assumptions, which can be useful context.
4. Algorithm
FSB: “A set of steps to be performed or rules to be followed to solve a mathematical problem. More recently, the term has been adopted to refer to a process to be followed, often by a computer.”
Lexicon: “A clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result.”
Difference: FSB allows “algorithm” to mean a general process or set of steps, not just maths. The Lexicon keeps it closer to a formal computational procedure that produces a prescribed result.
5. Machine learning
FSB: “A method of designing a sequence of actions to solve a problem, known as algorithms, which optimise automatically through experience and with limited or no human intervention.”
Lexicon: “An AI learning method that enables computational systems to learn patterns, make predictions, and optimize decisions from large amounts of data without being explicitly programmed for each task. Machine learning encompasses supervised, unsupervised, and reinforcement learning paradigms, serving as the technical foundation for data-driven intelligence and automation.”
Difference: FSB frames ML as optimising a sequence of actions through experience. The Lexicon emphasises learning patterns from data to make predictions and decisions across different learning types. The toy sketch below contrasts a hand-written rule with a rule estimated from data.
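Neither source includes code, but a small sketch can make the contrast in examples 1 and 5 concrete. This is a minimal illustration only: the transaction amounts, the flags and the deliberately crude "learning" step below are invented for this article, not taken from the FSB report or the Lexicon.

```python
# Toy contrast between a hand-written rule and a rule learned from data.
# All numbers and labels are invented for illustration.

# 1. Rule-based ("traditional AI" in the Lexicon's sense): a person writes
#    the rule explicitly, and the system simply follows it.
def rule_based_flag(amount: float) -> bool:
    """Flag a transaction if it exceeds a hand-chosen threshold."""
    return amount > 10_000  # threshold fixed by a human

# 2. Machine learning (in the broad sense both sources describe): the rule is
#    not hand-written; a threshold is estimated from labelled examples.
def learn_threshold(amounts: list[float], flagged: list[bool]) -> float:
    """Choose the candidate threshold that best separates the labelled examples."""
    def hits(t: float) -> int:
        return sum((a > t) == f for a, f in zip(amounts, flagged))
    return max(sorted(set(amounts)), key=hits)

# Invented labelled history: amounts and whether an analyst flagged them.
history_amounts = [500.0, 2_000.0, 8_000.0, 12_000.0, 20_000.0, 50_000.0]
history_flags = [False, False, False, True, True, True]

learned_threshold = learn_threshold(history_amounts, history_flags)

print(rule_based_flag(15_000.0))      # True -- rule a person wrote
print(15_000.0 > learned_threshold)   # True -- rule estimated from data
```

The only point is that in the first function a person chose the rule, while in the second the rule (a single threshold) is estimated from labelled examples, which is what the Lexicon's "without being explicitly programmed for each task" wording is getting at.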
Building an internal glossary
Knowing what the various definitions are and choosing a preferred subset can be helpful.
Rather than trying to start with a single “correct” definition, it might be more useful to:
- ask: which wording helps our people understand the system, the risk, and the customer impact?
- make sure that we are not deviating from regulatory/supervisory wording
- keep up with new regulatory/supervisory definitions.
In some cases, it may make sense to stay flexible and use more than one definition, depending on the audience and the document. There is value in noticing where language diverges, and being deliberate about which wording we use in which context.
Once we have settled on a set of core definitions, we can use them (a minimal machine-readable sketch follows this list):
- When a policy or framework mentions “machine learning” or “model risk”, we can point authors to a preferred wording, rather than letting every team improvise.
- When a board paper talks about “algorithms” or “explainability”, we can check whether the term matches how regulators and customers are starting to use it.
- When a vendor pitches “AI”, we can ask them whether that’s traditional AI or something else, to better understand the risks and guardrails.
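If we want the agreed wording to be easy to point to in reviews like these, it can help to keep it somewhere shared and version-controlled rather than scattered across documents. Below is a minimal sketch, assuming a small Python module is enough for your purposes; the entries, sources and wording are placeholders, not recommended definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GlossaryEntry:
    term: str
    preferred_wording: str  # the definition the organisation has agreed to use
    source: str             # where the wording comes from (regulator, lexicon, internal policy)
    notes: str = ""         # audience-specific caveats or alternative wordings

# Placeholder entries -- replace with your own agreed definitions and sources.
GLOSSARY = {
    "machine learning": GlossaryEntry(
        term="machine learning",
        preferred_wording="<insert the wording your organisation has agreed>",
        source="<e.g. the FSB report or the FS AI lexicon, with date>",
        notes="Board papers use the regulatory wording; vendor terms map to this entry.",
    ),
    "model risk": GlossaryEntry(
        term="model risk",
        preferred_wording="<insert agreed wording>",
        source="<source document and date>",
    ),
}

def preferred(term: str) -> str:
    """Return the agreed wording for a term, or flag that none has been agreed."""
    entry = GLOSSARY.get(term.lower())
    return entry.preferred_wording if entry else f"No agreed definition for '{term}' yet."

print(preferred("Model Risk"))
```

Keeping the glossary as data rather than prose makes it easier to reference from policies, spot terms with no agreed definition, and record when a wording was last checked against the regulatory source.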
This isn’t about winning a jargon contest. It’s about making it harder for risky systems to hide behind fuzzy language, and easier to have clear conversations about how algorithms are actually treating customers.
Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.