Do we really have “best” practices yet?
“Best practice” gets thrown around a lot in AI and algorithm governance discussions. I hear it often and imagine you’ve come across it too. But with emerging AI and algorithms, we don’t really know what “best” looks like yet.
To call something best practice, we’d need at least a few things:
- clear evidence that it works better than other options
- results that hold up over time
- proof that it works for different groups of stakeholders
- for guidelines or frameworks meant to be industry agnostic, evidence that they work beyond one sector
- in banking and insurance, alignment with regulatory expectations
For most algorithmic systems in banking and insurance, we don’t have all of those yet.
We may have “what others are doing”, “what a vendor recommends”, or “what the (thousand-page) framework says”.
That’s not quite “best”. So when we hear “This is best practice”, perhaps we should ask for evidence that it is.
Best practice is comforting. It makes for good marketing and sounds like a solid defence.
Something along the lines of “Current good practice, under review” is less authoritative. But it is perhaps more honest, giving us room to adjust as we learn in practice.
Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.