Third-party risk management isn’t new. Whether we’re using a cloud provider, an external payment processor, or a supply chain partner, we know that relying on external vendors comes with inherent risks.
The CrowdStrike outage of July 2024 is a case in point. A single faulty update crashed millions of systems, including those at banks and essential services. Even trusted third parties can introduce sudden, widespread disruption.
But generative AI adds a new layer of complexity. The services might not just fail; they could disappear completely or look very different, very quickly.
Tech commentator Ed Zitron has been vocal about the shaky economics. He argues that the major generative AI platforms are propped up by runway spending: burning investor cash to keep services running while hoping for a miracle. Most (if not all) lose far more money than they make. If funding slows, they could vanish, taking their services with them.
Zitron isn't a lone voice. The Financial Stability Board (FSB) wrote in November 2024 that the rapid adoption of AI in finance is creating new systemic risks, including third-party dependencies and service provider concentration. According to the FSB:
"The reliance on specialised hardware, cloud services, and pre-trained models has increased the potential for AI-related third-party dependencies. The market for these products and services is also highly concentrated, which could expose FIs to operational vulnerabilities and systemic risk from disruptions affecting key service providers."
I'm not being a doomsayer. LLMs can deliver real value in areas like customer service automation and fraud detection, and some costs are coming down thanks to new hardware, more efficient software, and open-source alternatives. Over time, trends like these may help providers find stability and a path to profitability. But we can't ignore the potential systemic risks, and it's prudent to be aware of them and plan accordingly.
Whether you’re fine-tuning a model, calling an API, or relying on a prebuilt solution, this risk is worth understanding.
Many vendors offering AI-driven solutions (chatbots, risk modelling, document automation) aren’t building their own models. Instead, they’re relying on third parties like OpenAI, Anthropic, or Google.
If those platforms fold, raise prices, or reduce access, the services we rely on could break overnight, become more expensive, or quietly degrade.
There’s a lot happening. I don’t think anyone has the answers just yet.
Again, this certainly isn’t a call to abandon AI. As the tech matures, some of these risks may ease.
But we can ask questions to work out how exposed we are, and factor the risks into anything we build, adapt, or buy.
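One practical way to factor this risk into what we build is to avoid coding directly against a single vendor's SDK. Here's a minimal sketch of that idea: a small internal interface with a fallback chain, so a second vendor or a self-hosted open-source model can be swapped in if the first disappears, reprices, or restricts access. This is an illustration of mine, not a prescribed pattern; the class and function names are hypothetical, and the provider stubs stand in for whatever SDKs you actually use.

```python
from abc import ABC, abstractmethod


class CompletionProvider(ABC):
    """Minimal interface the application codes against, instead of a vendor SDK."""

    name: str

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the prompt, or raise on failure."""


class PrimaryProvider(CompletionProvider):
    name = "primary"

    def complete(self, prompt: str) -> str:
        # Wire this to your main vendor's SDK. Raising here simulates an outage.
        raise RuntimeError("primary provider unavailable")


class BackupProvider(CompletionProvider):
    name = "backup"

    def complete(self, prompt: str) -> str:
        # Wire this to a second vendor, or a self-hosted open-source model.
        return f"[backup] completion for: {prompt!r}"


def complete_with_fallback(providers: list[CompletionProvider], prompt: str) -> str:
    """Try each provider in order; surface a single clear error if all fail."""
    errors: list[str] = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # in production, catch vendor-specific errors
            errors.append(f"{provider.name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))


if __name__ == "__main__":
    chain = [PrimaryProvider(), BackupProvider()]
    print(complete_with_fallback(chain, "Summarise this claim notification."))
```

A sketch like this doesn't remove the concentration risk the FSB describes (the backup may sit on the same cloud or hardware supply chain), but it does turn "which provider are we locked into?" into a question with a concrete answer in the codebase.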
Disclaimer: The information in this article does not constitute legal advice. It may not be relevant to your circumstances. It was written for specific algorithmic contexts within banks and insurance companies, may not apply to other contexts, and may not be relevant to other types of organisations.