Algorithm integrity and the trust we hold

We already know that ineffective algorithms have a direct monetary impact.

But the deeper, longer-lasting damage is what happens to trust.

Most languages have a word for trust. In English, we hear the word “trust” so routinely that it’s easy to forget what we are actually asking of people. Words from other languages help reinforce what it really means:

    • Japanese: shinrai conveys a firm belief in the reliability or truth of someone or something.
    • Arabic: amanah signifies a deep obligation to uphold promises.
    • French: confiance blends reliance with the quiet expectation that the other party will not take advantage.
    • Spanish: confianza describes what you extend when you let someone into your life or finances.

Different words, same idea: “I’m putting something that matters in your hands, and I expect you to protect it.”

That’s what happens with banks and insurers every day. Customers hand over their personal data, salaries, and savings. Shareholders hand over capital on the promise of responsible management. Regulators and communities grant us the license to operate, all on the assumption that we won’t use our position carelessly or unfairly.

When we move decisions into algorithms, none of that responsibility goes away. We change the mechanisms: credit risk models, pricing engines, automated claims triage workflows. But we don’t change the underlying promise. Stakeholders don’t transfer their trust to our models; they keep giving it to us.

This means that getting our algorithms right is much more than just “the right thing to do”. Once people give us their trust, we take on a responsibility to make sure our systems behave in ways that are worthy of it. That includes the models that decide who gets credit and on what terms, how much someone pays for cover, and whether a claim is processed fairly.

When a model fails or otherwise causes harm, we break that trust. The customer doesn’t blame “the algorithm”. They blame us, the institution whose logo is on the letter. Regulators don’t just audit the code; they penalise the business. And shareholders pay the price when broken trust turns into fines and lost accounts.

They all assume we will notice, and care, if our systems start to hurt people. Upholding that assumption is what integrity in algorithmic systems really means. We have to demonstrate, through system reviews, impact assessments, and vendor assurance, that our algorithmic systems work in the real world, and so prove that we deserve the trust placed in us.


Disclaimer: The info in this article is not legal advice. It may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.