Many algorithm-related laws and guidelines call for training to support integrity and responsible use. There is a good reason for this: if everyone involved knows what to look out for, we have a better chance of achieving integrity in our algorithmic systems.
That means everyone: boards providing oversight, senior management providing direction, second-line risk and compliance functions providing guidance or review, managers taking carriage of operations, data scientists developing the systems, and all staff using AI.
If everyone becomes aware and gets involved, the job becomes easier.
Without awareness, algorithm integrity becomes difficult to achieve. If data scientists are not aware, they may use data that introduces bias. If senior managers are not aware, they may not commission the assessments needed to catch problems.
In short, we need ongoing training and awareness across our organisations.
ForHumanity, a non-profit public charity dedicated to mitigating risk from AI, Algorithmic, and Autonomous (AAA) Systems, recognises this and has a specific project focused on AI literacy: establishing learning objectives and developing teaching modules. To get involved with the project, join the growing community here. For more in-depth learning on AI risk topics, ForHumanity also offers free courses here.
Below are examples of guides and laws that emphasise the need for AI-related training and education to ensure responsible development and use.
We will explore this topic further in future articles.
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies and may not apply to other contexts or other types of organisations.