
Risk-Focused Principles for Change Control in Algorithmic Systems

Written by Yusuf Moolla | 29 Oct 2024

With algorithmic systems, a single uncontrolled change can trigger a cascade of unintended consequences, potentially compromising fairness, accountability, and public trust.

So, managing changes is important. This is a given.

But, with algorithmic systems, change control goes beyond traditional practices.

While the base concepts may be similar, the specific risks differ.

Importantly, if you use the wrong framework, you could be including controls that you don’t need, excluding controls that you do need, and not really addressing the risk. Your change control process may then tick the boxes, but be both ineffective and inefficient.

This article outlines a potential solution: a risk-focused, principles-based approach to change control for algorithmic systems.

Listen to the audio (human) version of this article - Episode 12 of Algorithm Integrity Matters

An existing guideline

There are well-established methodologies and guides for change control.

We'd ignore them at our peril. We can learn from them.

For example, one source comes from financial auditing and is often cited when establishing algorithm audit methods.

The ISA 315 guideline for general IT controls includes four key elements of change management:

  1. Change management process: ensure changes are properly planned, tested, and implemented
  2. Segregation of duties (change migration): prevent unauthorised changes from being implemented
  3. System development, acquisition or implementation: ensure new systems are properly designed and tested before deployment
  4. Data conversion: maintain data integrity during system changes or upgrades.

Traditional IT change control vs algorithmic systems change control

The typical objective of IT change control is to minimise fraud and error.

This holds true for algorithmic systems, but it's not just about keeping systems running smoothly and accurately; we also need to establish trust, fairness, confidentiality and accountability.

This means that when we design change controls for algorithmic systems, we maintain a level of focus on error and fraud, but also pay attention to other risks.

The ISA 315 guideline focuses on financial statement risk. It is mainly about fraud and error.

A solid base, but it won’t fully address the unique risks and challenges posed by algorithmic systems.

For example:

  1. Complexity: Many AI and machine learning models are complex, making it difficult to predict the full impact of changes using traditional testing methods.
  2. Data Dependency: Changes in input data can significantly alter algorithmic outcomes without any code modifications, a scenario not always directly addressed in traditional IT change management.
  3. Ethical Implications: Financial statement audits rarely consider the ethical implications of changes, which are crucial for AI systems making high-stakes decisions.
  4. Confidentiality: the data must be kept private, and often the algorithms contain IP that must not be leaked.
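To make the data dependency point concrete, here is a minimal sketch, in Python, of one way to flag when input data has shifted even though no code has changed. It computes a population stability index (PSI) between a reference sample and new data for a single feature; the function name, bin count and the commonly cited 0.2 threshold are illustrative assumptions, not a prescription.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of one feature.
    Values above roughly 0.2 are commonly treated as significant drift."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the reference (expected) sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(1 for e in edges if x >= e)] += 1
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)

    exp_f = bucket_fractions(expected)
    act_f = bucket_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(exp_f, act_f))
```

A change control process could run a check like this on every data refresh, so that drift is surfaced and approved (or rejected) just like a code change.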

 

So we need to expand and/or adapt, to cover the unique needs of algorithmic systems.

 

Key Aspects of Change Management for Algorithm Integrity

Expanding the ISA 315 guideline to encompass algorithmic system risks, here is what the four key elements could translate to:

  1. Robust Design and Testing Processes

Before any change is implemented, it must undergo rigorous design and testing:

  • Impact analysis: how the change might affect the algorithm's accuracy, fairness, and alignment with business objectives.
  • Comprehensive testing: using diverse datasets to ensure the change performs as expected across various scenarios.
  • Peer review: the proposed changes are reviewed by other data scientists or AI experts to catch potential issues.
  2. Controlled Migration to Production

The process of moving changes from development to production environments must be tightly controlled, including segregation of duties:

  • Staging environments: an environment that mimics production, for final checks before go-live.
  • Rollback plans: clear procedures for reverting changes if unexpected issues arise.
  • Segregation of Duties: clear separation between those who develop changes and those who implement them in production, limiting access to make changes to production algorithms.
  • Approvals: multi-step approvals for changes, involving both technical and business stakeholders.
  3. Documentation and Auditability

Every change must be meticulously documented:

  • Change logs: detailed records of what was changed, why, and by whom.
  • Version control: to track changes over time.
  • Audit trails: all actions are logged and traceable.
  4. Data Integrity

Ensure that changes in input data continue to support the business objectives:

  • Data quality: the data continues to be correct and fit for use.
  • Data approval: a formal process for approving new or revised data sources for use in the system.
  • Data lineage and provenance: documentation to explain and track data flows, including transformations, from source(s) to target(s).
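As an illustration of how the migration and documentation controls above might be encoded, here is a hypothetical sketch of a change record that captures what changed, why, and by whom, with a simple segregation-of-duties check. The field names and rules are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    change_id: str
    description: str       # what was changed, and why
    developed_by: str
    approved_by: list      # technical and business approvers
    deployed_by: str
    version_before: str
    version_after: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def segregation_ok(self) -> bool:
        # The developer must not deploy their own change,
        # and must not be one of its approvers.
        return (self.developed_by != self.deployed_by
                and self.developed_by not in self.approved_by)
```

Storing records like this in an append-only log would also support the audit trail and version control bullets above.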

 

This could be a good starting point.

But not all systems are the same.

 

Tailoring the change control approach

The approach to controlling change will vary depending on the nature of the system, its level of complexity, whether it was developed in-house or purchased, and the risk management focus.

Here are some examples - hypothetical, but based on real-world observations.
Note: the “risk focus” is indicative, not exhaustive, and would be based on a risk assessment.

  1. Credit Scoring System (rules based, in-house developed)
    Risk Focus: Transparency and regulatory compliance
    Potential key change controls: Bias (re)testing prior to change; documentation of rule changes for regulatory scrutiny; change approval process involving both technical and business stakeholders.
  2. Insurance Pricing Model (machine learning, purchased)
    Risk Focus: Performance, fairness, and model interpretability
    Potential key change controls: Testing with diverse datasets before deployment; comparing new vs old model outputs for drift and performance degradation; collaboration with the vendor for model updates and explanations; strict access controls and audit trails for all parameter adjustments.
  3. AI-Powered Insurance Claims Processing (In-house developed)
    Risk Focus: Efficiency, accuracy, fairness, explainability/transparency
    Potential key change controls: Testing with known claim patterns and edge cases; evaluating the regular updates that reflect new fraud techniques and claim patterns; detailed logging of all AI model changes and retraining events.
  4. Credit Limit Assignment System (Hybrid rules and ML, in-house dev with 3rd-party components)
    Risk Focus: Fairness, accuracy, explainability
    Potential key change controls: Clear delineation between rule-based and ML components in change process; user acceptance testing with diverse customer scenarios; (re) checking credit limit assignments for unexpected patterns.
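For the "new vs old" comparison mentioned in example 2, a simple check might score the same benchmark cases with both model versions and flag the change if too many scores move materially. A hypothetical sketch follows; the tolerance and shift-rate thresholds are illustrative and would come from the risk assessment.

```python
def output_shift(old_scores, new_scores, tolerance=0.05, max_shift_rate=0.10):
    """Fraction of benchmark cases where the new model's score moves more
    than `tolerance` away from the old model's score. The change passes
    only if that fraction stays within `max_shift_rate`."""
    shifted = sum(abs(n - o) > tolerance for o, n in zip(old_scores, new_scores))
    rate = shifted / len(old_scores)
    return rate, rate <= max_shift_rate
```

A failed check would not necessarily block the change; it would route it for the human review and approval steps described earlier.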

In short, not all systems are the same. 

So rather than a standard set of controls, it may be better to start with a risk assessment, then use a set of principles to guide control selection and design.

 

Risk Assessment and Principles

Truly effective change management for algorithmic systems must align with broader principles of algorithm integrity.

By overlaying the traditional change control guidelines on the 10 key aspects of algorithm integrity we've previously discussed, we can craft a set of guiding principles.

This approach allows us to adapt specific change controls based on risk assessments, with the principles then steering the implementation details.

Overarching Focus on Risk Assessment

The principles are underpinned by a risk assessment process, which should:

  • Identify potential risks across all areas - e.g., performance, fairness, transparency, security
  • Prioritise risks - e.g., based on their potential impact and likelihood
  • Determine how resources will be allocated - e.g., based on the prioritisation (focus areas)
  • Be regularly updated - e.g., to reflect new use cases, evolving threats, updated obligations.
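One common way to prioritise is a simple impact x likelihood score. The sketch below assumes illustrative 1-5 scales and made-up example risks; real scoring would follow your organisation's risk methodology.

```python
def prioritise(risks):
    """Rank risks by impact x likelihood, highest first (both scored 1-5)."""
    return sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True)

# Hypothetical risks for an algorithmic system change:
risks = [
    {"name": "unfair outcomes after retraining", "impact": 5, "likelihood": 3},
    {"name": "performance degradation", "impact": 4, "likelihood": 4},
    {"name": "audit trail gaps", "impact": 3, "likelihood": 2},
]
# Ranked: performance degradation (16), unfair outcomes (15), audit trail gaps (6)
```

The resulting ranking then drives where testing effort and approval rigour are concentrated.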

That sets the foundation.

Principles

With our risk-based foundation in place, we can use these principles to develop our controls:

  1. Changes enhance the algorithm's accuracy and robustness.
    Risk links: Performance degradation, errors
    Related Aspect: Accuracy and Robustness (#1)
  2. Changes enhance or maintain the algorithm's alignment with intended objectives.
    Risk links: Misalignment with business goals, inadvertent use of irrelevant data
    Related Aspect: Alignment with Objectives (#2)
  3. Changes do not introduce bias and adhere to ethical standards.
    Risk links: Discrimination, ethical violations, erosion of public trust
    Related Aspects: Fairness (#3) and Ethics and Training (#9)
  4. Changes maintain or improve the ability to explain the algorithm's decisions and ensure clear accountability for changes.
    Risk links: Opacity in decision-making processes, lack of responsibility for modifications
    Related Aspects: Transparency and Explainability (#4), Governance, Accountability, Auditability (#7)
  5. Changes follow secure development practices, and maintain or enhance privacy protections.
    Risk links: Security breaches, unauthorised access, privacy breaches
    Related Aspects: Security (#5) and Privacy (#6)
  6. Changes maintain adherence to laws and contractual obligations.
    Risk links: Regulatory non-compliance, contractual non-compliance
    Related Aspect: Compliance (#10)

 

We now have an approach that is flexible and targeted.

 

Embracing Change, Preserving Integrity

With algorithmic systems, change is inevitable.

We need to manage these changes in a way that preserves and enhances the integrity of our systems. Changes must be deliberate, controlled, and aligned with our values and objectives.

Systems vary in nature, complexity, and context.

So the specific risks and controls will be different across systems. This means that a principles-based approach, underpinned by a risk assessment, will likely be better than a checklist approach.

Ultimately, if you follow an approach like this, you will have better control.

More effective. More efficient. A better use of your time and resources.

 
 

Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It may not be appropriate for high-risk use cases (e.g., as outlined in The Artificial Intelligence Act - Regulation (EU) 2024/1689, a.k.a. the EU AI Act). It was written for consideration in certain algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.