
Algorithmic Integrity: Don't wait for legislation

Written by Yusuf Moolla | 08 Oct 2024
TL;DR
• Legislation and standards are helpful but not sufficient for ensuring algorithmic integrity.
• Existing laws, contractual obligations, and customer expectations already apply to algorithmic systems.
• Proactively focus on building fair, accountable, and transparent systems tailored to your specific context.

 

Legislation isn't the silver bullet for algorithmic integrity.

Neither are standards or "best practice" frameworks.

I know that many people will disagree with this, for various reasons. Whole industries are being developed around them, with lots of money and effort thrown in. Some organisations won't be moved without legislation. Others just want to achieve a standards-based certification.

Now, are they useful?

Sure. They help provide clarity and can reduce ambiguity. And once a law is passed, we must comply. 

However:

  • existing legislation may already apply
  • new algorithm-focused laws can be too narrow or quickly outdated
  • standards can be confusing, and may not cover what we need
  • "best practice" frameworks help, but they're not always the best (and there are several, so they can't all be "best").

In short, they are helpful.

But we need to know what we're getting - what they cover, what they don't, and where the gaps are.

Let's explore legislation in more detail, and leave standards and frameworks for future articles.

Listen to the audio (human) version of this article - Episode 9 of Algorithm Integrity Matters

Generic compliance exercises

Laws play a crucial role in setting expectations.

But if we're only aiming to meet legal requirements, we're missing the point entirely.

Consider this.

Would you entrust your company's financial future to someone who only follows a rigid, one-size-fits-all investment strategy?

Probably not. You'd want a portfolio manager who understands market dynamics, can read trends, and grasps both your corporate objectives and the broader economic landscape. The result - nuanced, tailored investment decisions.

The same principle applies to our algorithms.

We aren't content with merely following generic standards or doing the bare minimum to meet legal requirements.

Instead, we focus on building systems that are fair, accountable, and transparent in our specific context.

Just as a skilled portfolio manager adapts strategies to the company's unique needs, we tailor our algorithmic integrity practices to address our context, risks and objectives, and the people we serve.

Existing legislation

Some existing legislation is already relevant to algorithmic integrity.

For instance, anti-discrimination laws can apply to algorithmic decision-making, while data protection regulations already govern many aspects of AI systems' data handling.

Fairness is already captured in human rights laws. These laws may not be specific to algorithms, but neither is the broad concept of fairness - it applies regardless of how decisions are made.

The EU AI Act was passed recently; GDPR was already in force well before then. Waiting for the EU AI Act could have meant that existing privacy obligations went unfulfilled.

Granted, existing laws may not always be easy to interpret in the context of new algorithmic systems. So the new laws can help. But existing laws apply, even when the new ones are not ready or don't cover your specific system.

Contractual obligations and customer expectations

Contracts with customers and third parties already set requirements. These may not be captured in legislation. But we have to abide by them.

Customers expect that we will treat them fairly and manage their data carefully. There are already some laws for both of these, in most jurisdictions. Regardless of what the laws say, we want to treat our customers fairly and keep their data secure and private.

New legislation - scope

Newer algorithm-focused laws often suffer from being too narrow in scope.

Consider the EU AI Act.

The first, or one of the first, of its kind. It has generated significant activity. It will, no doubt, lift integrity.

But what does that mean for your systems? Is your system covered by the definition of "AI"?

Is your system covered at the relevant risk level? The Act takes a tiered, risk-based approach, and many systems - those deemed low or minimal risk - may not be covered.

Then we have Local Law 144 in New York City.

It focuses on bias in automated employment decision tools.

Some say that the law is a watered-down version of the original bill - it may not meet the original intent. It doesn't cover all protected categories (limited to race and gender). It focuses only on certain aspects of the hiring process. It allows for considerable discretion by employers in determining whether their system is in scope. 

Again, anti-discrimination laws existed before it was enacted. So you could be "compliant" with this law, but not compliant more broadly.
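For context, the bias audits that Local Law 144 requires centre on "impact ratios": the selection rate for each demographic category relative to the category with the highest selection rate. Here's a minimal sketch of that calculation in Python - the data, the category labels, and the 0.8 reference point in the comments (borrowed from the US "four-fifths" guideline, which the law itself doesn't mandate) are illustrative assumptions, not the law's prescribed method.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Selection-rate impact ratios per demographic category.

    `outcomes` is a list of (category, selected) pairs, where `selected`
    is True if the candidate advanced. Illustrative only: real bias
    audits involve intersectional categories and more statistical care.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for category, was_selected in outcomes:
        totals[category] += 1
        selected[category] += int(was_selected)

    rates = {c: selected[c] / totals[c] for c in totals}
    top = max(rates.values())  # rate of the most-selected category
    return {c: rate / top for c, rate in rates.items()}

# Hypothetical screening outcomes: (category, advanced_to_interview)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for category, ratio in impact_ratios(sample).items():
    # 0.8 echoes the US "four-fifths" guideline; Local Law 144 itself
    # sets no pass/fail threshold - it requires publishing the ratios.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{category}: {ratio:.2f} ({flag})")
```

The simplicity is the point: a number like this is easy to publish, but on its own it says nothing about the categories the law leaves out, or the parts of the hiring process it doesn't touch.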

Another interesting case is Colorado's ECDIS Regulation (3 CCR 702-10).

It focuses specifically on the use of external consumer data and information sources (ECDIS) and related algorithms.

It is certainly useful - it makes clear that discriminatory ECDIS must not be used.

This is really important. Insurers have been using these - often without justification, sometimes without even knowing that they're doing something wrong. The models and data are sold to insurers by apparently reputable organisations; everybody else is using them, so the assumption is that they're fine to use. This legislation makes the position clearer.

But anti-discrimination laws already existed. So discriminatory ECDIS should not have been used anyway.

None of this means the laws are not useful, or that they're unnecessary.

But they are each limited in scope, so compliance with them can create a false sense of integrity.

New legislation - objectives

New legislation takes time to develop.

Technology is advancing rapidly, as are its use cases.

The laws can be outdated by the time they are enacted.

But the underlying objectives don't change that frequently. So, keeping an eye on the broader goal, rather than the specific legislation, may be a better long-term approach.

By focusing on broader ethical principles - such as fairness, transparency, and accountability - we can create more robust and adaptable algorithmic integrity practices that remain relevant even as technology and legislation evolve.

Proactive Approaches to Algorithmic Integrity

Instead of relying solely on legislation, standards, or frameworks, we want to focus on building systems that have genuine integrity.

Rather than waiting for legislation, consider:

  • conducting regular risk assessments and impact assessments
  • implementing diverse and inclusive design practices
  • establishing internal governance structures for algorithm selection, development, and deployment (a simple sketch of such a check follows this list)
  • engaging with stakeholders to understand and address concerns
  • investing in ongoing education and training.
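On the governance point, even a lightweight, codified gate can help make these practices routine. Below is a minimal sketch of what a pre-deployment check might look like - the class name, fields, and checks are hypothetical illustrations, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmAssessment:
    """Hypothetical pre-deployment record for an algorithmic system."""
    system_name: str
    impact_assessment_done: bool = False
    fairness_metrics_reviewed: bool = False
    stakeholders_consulted: bool = False
    accountable_owner: str = ""

    def blockers(self):
        """Return the integrity checks that still need attention."""
        issues = []
        if not self.impact_assessment_done:
            issues.append("impact assessment not completed")
        if not self.fairness_metrics_reviewed:
            issues.append("fairness metrics not reviewed")
        if not self.stakeholders_consulted:
            issues.append("stakeholders not consulted")
        if not self.accountable_owner:
            issues.append("no accountable owner assigned")
        return issues

# Example: a system with only the impact assessment completed
assessment = AlgorithmAssessment("credit_scoring_v2",
                                 impact_assessment_done=True)
for issue in assessment.blockers():
    print(f"Blocked: {issue}")
```

The point isn't the code - it's that making the checks explicit keeps them from being skipped under delivery pressure, whether or not any law requires them.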

Let's commit to making informed, ethical decisions, even when - especially when - no law or framework explicitly tells us to.

Integrity isn't about blindly following rules. It's about doing the right thing, even when no one's watching.

With a proactive, principle-based approach to algorithmic integrity, we can build systems that maintain customer trust.