Articles: algorithm integrity in FS | Risk Insights Blog

Ethical algorithms or effective algorithms?

Written by Yusuf Moolla | 29 Jan 2026
TL;DR
• It’s not a trade-off: We don't have to choose between being ethical and being effective.
• Accuracy is an ethics feature: If we focus on making our models more accurate for all customer segments, we can solve many fairness problems automatically.
• Ethics frameworks: Useful, but as a guardrail, not a tax on performance.

 

With government bodies and other organisations regularly releasing Data/AI ethics frameworks, it is easy to become fixated on ethics in isolation.

[As examples, in December 2025, the Australian Government released its new Data Ethics Framework, and the UK Government updated its Data and AI Ethics Framework.]

When we add "ethics" to a business conversation, it can sound like a tax. A constraint that will slow us down or force us to accept lower performance in exchange for being "good corporate citizens."

So, we might grapple with “Do we want to be ethical, or do we want to be effective?”

The answer, of course, is both. And we can get there without sacrificing either, if we take the view that they are complementary; in most cases, they are.

 

It’s not always a trade-off

Fairness and accuracy are often framed as opposites. The logic is that to protect a vulnerable group, we have to tweak the model away from its mathematical optimum.

But that assumes the optimum is correct.

I wrote recently about how models can be lazy, latching onto proxies instead of finding real drivers of risk. When a model is lazy, it can be both unethical and ineffective.

  • Ineffective: If our fraud model flags innocent customers in a specific postcode just because it’s an easier pattern to learn than actual fraud behaviour, we are blocking good revenue. That is a bad model.
  • Unethical: That same laziness looks like bias against the people living in that postcode.

So maybe we don't start with an ethics debate, but focus first on better models.

 

Accuracy is an ethics feature

If we focus purely on building a truly "effective" algorithm, one that maintains its accuracy across every slice of our customer base, we automatically solve most of our ethics problems.

But we have to measure it correctly.

We sometimes accept lazy metrics to match our lazy models. We see a global accuracy score above 90% and think we are winning. But if that average hides a lower accuracy rate for a minority group, we are mispricing risk or missing opportunities for that segment.
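As an illustration of why a single global score can mislead, here is a minimal sketch (toy data, hypothetical function name) that disaggregates accuracy by customer segment. A model can look strong overall while performing poorly for one group:

```python
from collections import defaultdict

def accuracy_by_segment(y_true, y_pred, segments):
    """Return (overall accuracy, per-segment accuracy).

    y_true, y_pred: lists of labels; segments: one segment key per record
    (e.g. a region or demographic bucket). Toy helper for illustration.
    """
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    hits, counts = defaultdict(int), defaultdict(int)
    for t, p, s in zip(y_true, y_pred, segments):
        counts[s] += 1
        hits[s] += int(t == p)
    per_segment = {s: hits[s] / counts[s] for s in counts}
    return overall, per_segment

# Toy example: segment A is predicted perfectly, segment B poorly.
y_true  = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
y_pred  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 1]
segment = ["A"] * 6 + ["B"] * 4

overall, per_seg = accuracy_by_segment(y_true, y_pred, segment)
print(overall)   # 0.7 overall looks acceptable...
print(per_seg)   # ...but segment B sits at 0.25
```

The same disaggregation applies to any metric we actually care about (precision, recall, false-positive rate), not just accuracy.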

 

But what about where ethics and accuracy do diverge?

So, if we just chase perfect accuracy, do we need an ethics framework at all?

Yes. Because sometimes the model is accurate, but the outcome is still wrong, or we haven’t considered other ethical principles along the way.

This is where "Effective" and "Ethical" diverge. Some examples of how this happens:

  • Historical data: If you feed an algorithm 20 years of (potentially incomplete) data showing that men were less likely to commit fraud than women, the "accurate" prediction is to flag more claims from women for investigation. The model is effectively learning from the data, but the result perpetuates a past we want to change, and/or might be based on poor data quality.
  • Technical focus: Accurate pricing might mean charging the highest premiums to poor customers because they live in higher-risk areas. The mathematics is sound, but the social outcome might be unacceptable.
  • Ignoring other principles: If we don’t build our systems with security and privacy in mind, we could be in breach of those obligations. Our models could be fair and accurate, but unsafe. The same applies to explainability and other principles.

 

A two-pronged approach

We don't let the ethics framework scare us into thinking we have to sacrifice performance. Instead, we:

  1. Focus on Effectiveness: accuracy across every segment. If the model is wrong for a specific group, fix it. That’s good engineering.
  2. Then use Ethics as a Guardrail. We also ask the hard questions: “The metrics show that the model works, but is the outcome something we can stand by?”, along with others like “Is our algorithm transparent?” and “Is it secure?”

 

We want ethical algorithms. We can get a head start just by building effective ones.

 

Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.