
UAT, BVT and the missing customer-focused test

TL;DR
• UAT often tests for staff workflows, not the actual customer outcome or experience.
• BVT verifies the spec is met, but can miss flaws or blind spots in the spec itself.
• We can add customer/outcome testing scenarios (fairness, edge cases) before going live.

 

In most banking and insurance projects, a ‘Green’ status report usually relies on two acronyms: UAT (User Acceptance Testing) and BVT (Business Verification Testing).

If the user accepts it, and we verify it, we go live. Should we also be asking ‘Have we tested this for the customer?’

This way of framing it isn’t standard in most project plans. But at least one regulator is pushing for a focus on customer outcomes (the UK’s FCA Consumer Duty); ‘customer outcome testing’ could be a response to make that idea real during delivery.

 

UAT is usually for the Staff, not the Customer

In purely digital channels, UAT might involve the customer. But in complex back-end systems (credit decisioning, claims pricing), the 'User' is often a staff member.

When they perform UAT, they are validating their own workflow:

  • Did the screen load?
  • Did the "Decline" button work?
  • Did the workflow move the application to the right queue?

If the system successfully rejects a vulnerable customer because of a data error, UAT will still pass. The system "worked" perfectly for the staff member; it just failed the customer.

 

BVT is just checking our own homework

Business Verification Testing sounds rigorous. It implies we are verifying the business is safe.

If our pricing requirement says "Increase premiums by 20% for this risk factor," BVT will confirm that the math is correct. But it won’t tell us that the risk factor relies on a field that is empty for 50% of our customers.

This is the classic trap of Verification (building the thing right) vs. Validation (building the right thing). BVT is doing its job: verifying the system matches the spec. But if the spec has a blind spot, we have tested that we are doing the wrong thing, correctly.
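As a rough sketch of that gap (Python-style, with invented field names and a made-up 5% tolerance, not any real pricing engine), a BVT check of the 20% rule passes while an outcome check on the data behind the rule fails:

def apply_risk_uplift(base_premium: float, has_risk_factor: bool) -> float:
    """Spec: increase the premium by 20% when the risk factor is present."""
    return base_premium * 1.2 if has_risk_factor else base_premium

# Verification (BVT): the math matches the requirement (rounded to pence), so this passes.
assert round(apply_risk_uplift(100.0, has_risk_factor=True), 2) == 120.0
assert apply_risk_uplift(100.0, has_risk_factor=False) == 100.0

# Validation (outcome): does the data behind the rule actually exist?
# Illustrative sample only; in practice this would be a representative extract.
customers = [
    {"id": 1, "risk_factor": True},
    {"id": 2, "risk_factor": None},   # field never captured
    {"id": 3, "risk_factor": None},
    {"id": 4, "risk_factor": False},
]
missing = sum(1 for c in customers if c["risk_factor"] is None)
assert missing / len(customers) < 0.05, (
    f"{missing}/{len(customers)} customers have no risk-factor data: "
    "the pricing rule is verified, but not validated for them."
)

On this sample, the last assertion fails, which is the point: the system matches the spec, and the spec quietly assumes data we don't have.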

 

The Missing Phase: Customer/Outcome Testing

We have testing for the Staff (UAT) and testing for the Spec (BVT), but no dedicated testing for the Customer.

Good test managers already include 'Unhappy Paths' or 'Edge Cases.' But often these focus on system stability (does it crash?), not customer impact (is it fair?). We can elevate those edge cases, thinking not purely in terms of technical exceptions, but in terms of customer outcomes.

This is starting to appear in regulations like the UK’s Consumer Duty, but in many project plans it is still absent. We rely on the idea that if the requirements were good, the outcome will be good. And where it does appear in project plans, it can, like UAT and BVT, get rushed through under pressure to meet deadlines.

That is a dangerous position, especially for complex systems.

We don't (necessarily) need a new acronym. We just need to stop treating acceptance and verification as the finish line. Before we go live, we can add other questions that test for effectiveness and customer outcomes, not just function:

  • Does the outcome pass the sniff test? Would we be comfortable if our system were scrutinised by a regulator or a customer?
  • What happens if the customer has thin credit data? Or if the address format is non-standard?
  • Is the system fair and accurate? Do we have the synthetic data, or real historical data where relevant, to prove it?

If we can’t answer questions like these, we haven’t finished testing.
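As another rough sketch (every name, field and expectation here is invented), questions like these can be written as scenario tests that assert on the decision the customer actually receives, rather than on screens, buttons or queues:

def decide(application: dict) -> str:
    """Toy stand-in for the real decision engine (it auto-declines thin files)."""
    if application.get("credit_history_months", 0) < 6:
        return "decline"
    return "approve"

outcome_scenarios = [
    # Thin credit file: expect a human referral, not an automatic decline.
    ("thin_file_customer",
     {"credit_history_months": 2, "income": 28000},
     {"approve", "refer"}),
    # Non-standard address: formatting alone should not change the decision.
    ("non_standard_address",
     {"credit_history_months": 48, "income": 28000,
      "address": "Flat 3b, The Old Mill, Isle of Skye"},
     {"approve", "refer"}),
]

failures = []
for name, application, acceptable in outcome_scenarios:
    outcome = decide(application)
    if outcome not in acceptable:
        failures.append((name, outcome))

print(failures)  # [('thin_file_customer', 'decline')] -> a failed customer outcome

The same scenarios can be extended with synthetic data, or real historical data where relevant, to check fairness across customer groups; the structure stays the same, only the assertions change.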

 


Disclaimer: The information in this article is not legal advice and may not be relevant to your circumstances. It was written for specific contexts within banks and insurers, may not apply to other contexts, and may not be relevant to other types of organisations.