Legislation isn't the silver bullet for algorithmic integrity.
Neither are standards or "best practice" frameworks.
I know that many people will disagree with this, for various reasons. Whole industries are being developed around them, with lots of money and effort thrown in. Some organisations won't be moved without legislation. Others just want to achieve a standards-based certification.
Now, are they useful?
Sure. They help provide clarity and can reduce ambiguity. And once a law is passed, we must comply.
However, they aren't the whole answer.
In short, they are helpful.
But we need to know what we're getting - what they cover, what they don't, and where the gaps lie.
Let's explore legislation in more detail, and leave standards and frameworks for future articles.
Laws play a crucial role in setting expectations.
But if we're only aiming to meet legal requirements, we're missing the point entirely.
Consider this.
Would you entrust your company's financial future to someone who only follows a rigid, one-size-fits-all investment strategy?
Probably not. You'd want a portfolio manager who understands market dynamics, can read trends, and grasps your corporate objectives and the broader economic landscape. The result: nuanced, tailored investment decisions.
The same principle applies to our algorithms.
We aren't content with merely following generic standards or scraping by on minimum legal requirements.
Instead, we focus on building systems that are fair, accountable, and transparent in our specific context.
Just as a skilled portfolio manager adapts strategies to the company's unique needs, we tailor our algorithmic integrity practices to address our context, risks and objectives, and the people we serve.
Some existing legislation is already relevant to algorithmic integrity.
For instance, anti-discrimination laws can apply to algorithmic decision-making, while data protection regulations already govern many aspects of AI systems' data handling.
Fairness is already captured in human rights laws. These laws may not be specific to algorithms, but neither is the broad concept of fairness - it applies however the decision is made.
The EU AI Act was passed recently; the GDPR had existed for years by then. Waiting for the EU AI Act would have meant leaving existing privacy obligations unfulfilled.
Granted, existing laws may not always be easy to interpret in the context of new algorithmic systems. So the new laws can help. But existing laws apply, even when the new ones are not ready or don't cover your specific system.
Contracts with customers and third parties already set requirements. These may not be captured in legislation. But we have to abide by them.
Customers expect that we will treat them fairly and manage their data carefully. There are already some laws for both of these, in most jurisdictions. Regardless of what the laws say, we want to treat our customers fairly and keep their data secure and private.
Newer algorithm-focused laws often suffer from being too narrow in scope.
Take the EU AI Act - the first, or one of the first, of its kind. It has generated significant activity and will, no doubt, lift integrity.
But what does that mean for your systems? Is your system covered by the definition of "AI"?
Does your system fall within the covered risk levels? The Act takes a tiered, risk-based approach, and many systems - those deemed low or minimal risk - may not be covered.
Then we have Local Law 144 in New York City.
It focuses on bias in automated employment decision tools.
Some say that the law is a watered-down version of the original bill - it may not meet the original intent. It doesn't cover all protected categories (limited to race and gender). It focuses only on certain aspects of the hiring process. It allows for considerable discretion by employers in determining whether their system is in scope.
Again, anti-discrimination laws existed before it was enacted. So you could be "compliant" with this law, yet not compliant more broadly.
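To make "bias in automated employment decision tools" a little more concrete, here is a minimal sketch of the kind of disparate-impact check such a review might involve. The data, group labels, and the 0.8 threshold are illustrative assumptions for this example, not anything Local Law 144 itself prescribes.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group, selected) pairs, where selected is a bool.
    Returns each group's selection rate divided by the highest group's rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected count, total count]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, passed screening?)
sample = ([("A", True)] * 60 + [("A", False)] * 40 +
          [("B", True)] * 35 + [("B", False)] * 65)

for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A check like this only covers the categories you choose to measure - which is exactly why a law limited to certain categories, or certain stages of hiring, can leave gaps.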
Another interesting case is Colorado's ECDIS Regulation (3 CCR 702-10).
It focuses specifically on the use of external consumer data and information sources (ECDIS) and related algorithms.
It is certainly useful - it makes clear that discriminatory ECDIS must not be used.
This is really important. Insurers have been using these sources, often without justification, and sometimes without even knowing they're doing something wrong. The models and data are sold to insurers by apparently reputable organisations, and because everybody else is using them, the assumption is that they're fine to use. This legislation makes the position clearer.
But anti-discrimination laws already existed. So discriminatory ECDIS should not have been used anyway.
None of this means these laws are not useful, or that they are unnecessary.
But they are each limited in scope, so compliance with them can create a false sense of integrity.
New legislation takes time to develop.
Technology is advancing rapidly, and so are its use cases.
The laws can be outdated by the time they are enacted.
But the underlying objectives don't change that frequently. So, keeping an eye on the broader goal, rather than the specific legislation, may be a better long-term approach.
By focusing on broader ethical principles - such as fairness, transparency, and accountability - we can create more robust and adaptable algorithmic integrity practices that remain relevant even as technology and legislation evolve.
Instead of relying solely on legislation, standards, or frameworks, we want to focus on building systems that have genuine integrity.
Rather than waiting for legislation, let's commit to making informed, ethical decisions, even when - especially when - no law or framework explicitly tells us to.
Integrity isn't about blindly following rules. It's about doing the right thing, even when no one's watching.
With a proactive, principle-based approach to algorithmic integrity, we can build systems that maintain customer trust.