When we talk about security in algorithmic systems, it's easy to focus solely on keeping the bad guys out.
But there's another side to this coin that's just as important: making sure the right people can get in.
This article aims to explain how security and access work together for better algorithm integrity.
Let’s break it down.
It's obvious why we need to prevent unauthorised access.
Bad actors could tamper with algorithm logic or inputs, steal sensitive data, or disrupt operations.
These can lead to financial losses, reputational damage, and even legal consequences.
Robust security measures are a must.
Here's where it gets tricky.
While we're busy building digital fortresses, we need to make sure we're not locking out the good guys.
Prioritise access over security, and you leave the door open to breaches and misuse.
Lean too far the other way, denying people the access they need, and you can’t effectively ensure integrity.
This creates a paradox: overzealous security measures can themselves create or increase risk.
Here's why, with reference to the 10 key aspects of algorithm integrity from a previous article:
| Key aspect of algorithm integrity | Risk of not providing access | Ref. |
| --- | --- | --- |
| 1. Accuracy and robustness | Limited Oversight | A |
| 2. Alignment with objectives | Impaired Decision Making | B |
| 3. Fairness (incl. impact assessments) | Limited Oversight | A |
| 4. Transparency and explainability | Reduced Transparency | D |
| 5. Security | Workarounds and Shadow IT | C |
| 6. Privacy | Workarounds and Shadow IT | C |
| 7. Governance, Accountability and Auditability | Reduced Transparency | D |
| 8. Risk Management | Missed Early Warnings | E |
| 9. Ethics and Training | Limited Oversight | A |
| 10. Compliance | Incomplete Audits | F |
Robust security is crucial, but it must be balanced with the need for oversight and control.
The goal should be to create a secure algorithmic system that still allows for the necessary visibility and access to maintain integrity.
Ensuring that the right people have the right access reduces risk. We want security measures that don't hinder legitimate work, and access that doesn't compromise security.
By getting it right, we enhance algorithmic integrity.
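As a minimal illustration of "right people, right access", the balance can be sketched as a deny-by-default, role-based permission check: oversight roles (e.g. auditors) get read access without change rights, while engineering roles get broader access. The role and permission names below are hypothetical examples, not a prescription.

```python
# Minimal role-based access control (RBAC) sketch: deny by default,
# grant only what each role needs. Role/permission names are illustrative.
ROLE_PERMISSIONS = {
    "model_auditor": {"read_model", "read_logs"},   # oversight, no change rights
    "model_engineer": {"read_model", "write_model", "read_logs"},
    "external_user": set(),                         # no internal access
}

def can(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Note the design choice: an unknown role maps to an empty permission set, so anything unrecognised is denied rather than silently allowed, while auditors still retain the read access needed for oversight.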
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.