
Algorithm Integrity: Training and Awareness

Written by Yusuf Moolla | 10 Dec 2024
TL;DR
• Ongoing education helps everyone understand their role in responsibly developing and using algorithmic systems.
• Regulators and standard-setting bodies emphasise the need for AI literacy across all organisational levels.


Many algorithm-related laws and guidelines call for training to improve integrity and responsible use. There is a good reason for this: if everyone involved knows what to look out for, we have a better chance of achieving integrity in our algorithmic systems.

That means everyone: boards providing oversight, senior management providing direction, second-line risk and compliance providing guidance or review, managers taking carriage of operations, data scientists developing the systems, and all staff using AI.

If everyone becomes aware and gets involved, the job becomes easier.

Without awareness, algorithm integrity becomes difficult to achieve. If a data scientist is not aware, they may use data that introduces bias. If senior managers are not aware, they may not commission the assessments that would catch such issues.

In short, we need ongoing training and awareness across our organisations.

ForHumanity, a non-profit public charity dedicated to mitigating risk from AI, Algorithmic, and Autonomous (AAA) Systems, recognises this. It has a specific project focused on AI literacy, which is establishing learning objectives and developing AI literacy teaching modules. To get involved with the project, join the growing community here. For more in-depth learning on AI risk topics, ForHumanity also provides free courses here.

Here are examples of guides and laws that emphasise the need for AI-related training and education to support responsible development and use:

  • IAIS: The International Association of Insurance Supervisors is developing a guidance paper on the supervision of AI. The November 2024 draft recommends regular competency-based training for boards so that they can effectively scrutinise AI system deployment. It also recommends effective training “cascading” throughout the insurer to ensure that all staff are aware of – and understand their role in addressing – AI risks.
  • DNB: In 2019, De Nederlandsche Bank included “skills” as one of six general principles for the use of AI in the financial sector.
  • ASIC: The Australian Securities & Investments Commission produced a report in October 2024, following a review of how 23 financial services organisations were using or planning to use AI. The report noted: “Directors and officers should be aware of the use of AI within their companies, the extent to which they rely on AI-generated information to discharge their duties and the reasonably foreseeable associated risks.”
  • NIST: The National Institute of Standards and Technology produced the AI Risk Management Framework in 2023. It includes a specific subcategory that outlines the need for AI risk management training for personnel and partners.
  • EU AI Act: The European Union Artificial Intelligence Act includes a specific expectation about “AI literacy” (Article 4): providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff.

We will explore this topic further in future articles.


Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.