If you've encountered the view that "AI audit reports" need to be made public, and you're responsible for an algorithmic system, you might feel a bit uneasy.
However, for most readers of this article, there's no need to be overly concerned.
This article explains why you shouldn't be too worried, and how to put the energy you might otherwise spend on anxiety to better use.
AI and algorithm audit guidelines vary widely and are not universally applicable. We discussed this in a previous article, outlining how the appropriateness of audit guidance depends on your circumstances.
Here are some further specifics on what public audit reporting may mean for your situation.
Public financial statement audit reports are often cited in the context of AI audits. Here is some background on financial audits that helps in understanding the direction AI audits may take:
AI audit reporting may follow a similar path. It will probably evolve faster than financial audit reporting did, but it is reasonable to assume it won't happen overnight.
While public reporting isn't universally required, recent legislation shows signs of movement in that direction:
These signal a growing trend towards transparency.
In terms of audit requirements, the EU AI Act primarily targets high-risk applications.
The ECDIS Act requires annual returns, but not necessarily public audit reports.
We explored this topic in depth in a previous article. Here are the key points relevant to our discussion:
This distinction is important. The debate around public reporting relates to formal audits, not other types of reviews.
Understanding this should alleviate some anxiety about public reporting. If you're conducting internal reviews of your AI systems, these are likely not the “audits” discussed in the context of public disclosure debates.
Transparency demands mainly target formal independent audits, and primarily for high-risk AI applications.
The need for public reporting will be driven by several factors, including:
However, this won't apply to all reviews.
And when public reporting is required, the level of detail in a public report is likely to differ from that of an internal review report. This is because:
For example, if you commission a review of your algorithmic fraud detection system to check for potential bias, you might choose to share some information publicly, but you won't disclose all the details. How you check for fraud may need to remain confidential; otherwise, fraudsters may find it easier to work out how to bypass your checks.
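To make this concrete, here is a minimal sketch in Python of how an internal finding might be separated from the public summary derived from it. All field names and the finding itself are hypothetical illustrations, not a prescribed reporting schema.

```python
# A minimal sketch: the internal record keeps sensitive detail, while the
# public summary keeps only the headline result. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class InternalFinding:
    metric: str          # e.g. a disparity measure from the bias review
    affected_group: str  # the cohort the disparity affects
    detail: str          # sensitive: which fraud check misfires, and how
    remediation: str     # sensitive: the fix may reveal detection logic


def to_public_summary(finding: InternalFinding) -> dict:
    """Retain the headline result; omit detail that could help fraudsters."""
    return {
        "metric": finding.metric,
        "affected_group": finding.affected_group,
        "status": "remediation in progress",
    }


finding = InternalFinding(
    metric="false positive rate disparity",
    affected_group="customers under 25",
    detail="velocity check over-triggers on prepaid cards",  # stays internal
    remediation="recalibrate velocity thresholds",           # stays internal
)
print(to_public_summary(finding))
```

The design choice here is simply that the public artefact is derived from the internal one, so the two cannot drift apart.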
Note: for high-risk AI systems that fall under the EU AI Act, certain information may need to be registered in the EU database. This includes a description of the system, its intended purpose, and information about the provider.
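As a rough illustration of the kind of information involved, the sketch below collects a system description, intended purpose, and provider details into one record. The field names are assumptions for illustration only, not the official EU database schema.

```python
# A rough sketch of gathering registration-style metadata in one place.
# Field names are illustrative assumptions, not the official schema.
from dataclasses import dataclass, asdict


@dataclass
class SystemRegistrationRecord:
    system_name: str
    description: str       # what the system is, at a high level
    intended_purpose: str  # the use the system is placed on the market for
    provider_name: str
    provider_contact: str


record = SystemRegistrationRecord(
    system_name="Credit Decisioning Model v3",
    description="Scores retail credit applications using applicant data.",
    intended_purpose="Creditworthiness assessment for consumer lending.",
    provider_name="Example Bank plc",
    provider_contact="ai-governance@example.com",
)
print(asdict(record))
```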
Even if your organisation isn't currently required to publish AI audit reports, it's wise to be prepared.
Here are three things you can consider now:
The potential for public AI audit reports shouldn't be a source of anxiety. Instead, view it as an opportunity to strengthen your AI governance practices.
While there's a growing trend towards transparency in algorithmic decision-making, especially in the public sector, the need for public reports on algorithm integrity reviews is not universal.
However, banks and insurance companies using high-risk AI systems, as defined by the EU AI Act, may be subject to specific requirements. Further public reporting requirements may emerge as regulators gain more experience in overseeing these systems.
So, being prepared is important. This includes identifying and mitigating risks before they become issues.
By conducting regular, thorough reviews of your algorithmic systems, you can help ensure their integrity, fairness, and reliability.
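As one small example of what such a review might include, the sketch below compares how often a model flags customers across two groups and raises a warning when the gap exceeds an agreed tolerance. The data, group labels, and threshold are illustrative assumptions only.

```python
# A minimal sketch of one periodic integrity check: compare flag rates
# across groups. Data, group labels, and the 0.2 tolerance are assumptions.
from collections import defaultdict

# (group, flagged_by_model) pairs; in practice, drawn from review data.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
flagged = defaultdict(int)
for group, was_flagged in decisions:
    totals[group] += 1
    flagged[group] += int(was_flagged)

rates = {g: flagged[g] / totals[g] for g in totals}
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")

if disparity > 0.2:  # agreed tolerance (assumed)
    print("Flag-rate gap exceeds tolerance; investigate before sign-off.")
```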
Disclaimer: The information in this article does not constitute legal advice and may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies and may not apply to other contexts or other types of organisations.