Why Financial Services Platforms Fail After Customisation

23 Jan 2026
Tristan Brown is a Senior Solutions Consultant at Inspired Testing

There's a long-standing tradition in IT: when something breaks, blame the vendor. It's practically muscle memory at this point. A screen hangs, a workflow sulks, a report refuses to load, and somebody mutters "vendor bug" before anyone even checks the logs.

But here's the awkward truth: the vendor core is usually the most predictable, least offensive and frankly most boring part of your estate. Modern financial services platforms are hardened, regulated, endlessly regression-tested and engineered by teams whose full-time job is to prevent drama.

The chaos doesn't come from the core. It comes from everything organisations add to it. This article explores that reality and outlines exactly how to test the creature you've built, not the platform you bought.

The vendor core isn't the monster

People love painting the vendor as the villain because it's convenient, tidy and helps everyone avoid looking too closely at their own estate. But the idea that the vendor core is the unstable part simply doesn't hold up anymore.

Modern financial services platforms have been engineered within an inch of their lives. Vendors run regression farms that execute more tests in a single night than some internal teams run in a quarter. Static analysis tools catch issues long before human code reviews, while regulators examine the architecture until even the edge cases behave themselves.

The core is dull, which is exactly what you want. Dull is predictable and doesn't accidentally corrupt your billing file at 2am.

The real trouble starts when the clean, polished product arrives in your organisation and people get creative with small rule tweaks. Each change looks harmless on its own, but stitched together over time, the platform stops looking like the vendor's product and starts looking like something built in a dimly lit laboratory.

The vendor gives you a clean, stable body, and your team adds wings, teeth and occasionally a tail. When it finally lurches into production and knocks over an integration or two, the reaction is always the same: "It must be the vendor." It rarely is. The monster is the collection of stitched-on limbs you added yourselves.

The customisation layer: where the bodies are buried

This is the attic of your estate, where teams store things they don't want to deal with and then act surprised when something growls back.

This layer contains business rules copied from systems retired years ago, glue code patched repeatedly, logic written "temporarily" three years ago, data models bent out of shape to protect legacy reports, undocumented UI tweaks, integration mappings that have drifted quietly for a decade, and workflows that survived four restructures and now behave like folklore.

It's the wilderness that nobody owns, nobody remembers designing, and where 70 to 80 percent of your defects originate. In financial services especially, this layer can turn an elegant vendor platform into a sprawling ecosystem where no one is entirely sure what talks to what or why.

The vendor core stays calm while your edges behave like a house party getting out of hand.

Why teams still treat custom code as safe

Despite all evidence to the contrary, teams often act as if the custom layer is the least risky part of the platform, mistaking familiarity for stability.

Here are the myths that keep the chaos alive:

"It's only a small customisation." There is no such thing as a small customisation, only customisations whose consequences haven't yet introduced themselves.

"It passed UAT before, so it must be stable." UAT proves one thing only: someone can complete a happy path without crying.

"The vendor must have broken something." This usually translates to inadequate regression testing of your changes. The logs, the vendor's architects and reality usually disagree.

"It should just work." Custom code doesn't "just work". It works exactly as written, not as intended.

The defect snowball

The defect snowball doesn't start with a bang but with something tiny: a mapping mistake, a misaligned rule, a workaround that quietly rots. Then it spreads.

Regression cycles expand as confidence drops and teams widen scope "just to be safe," turning regression into an endurance event. Upgrades become rescue missions involving late-night debugging of logic written by someone who left the company two years ago.

Integrations start behaving like estranged relatives that don't communicate well or agree on anything. Incidents multiply as a small bad mapping becomes wrong data, which triggers a failed workflow, breaks an integration, corrupts a payload and leaves the service desk drowning in alerts.

And everyone repeats the ritual phrase: "Why is the vendor platform so unstable?" Because the custom layer is quietly fraying at the edges, and nobody wants to admit it.

Where this hits hardest

Some platforms attract customisation like wasps to cake. Bank-in-a-box platforms like Temenos, Mambu, nCino and Thought Machine arrive clean and modular but end up buried under legacy rules and bespoke products. Investment platforms such as FNZ, Avaloq and Bravura have stable engines until someone adds custom pricing scripts and bespoke tax logic that melts during upgrades. CRM ecosystems including Salesforce FS Cloud and Dynamics get heavily customised under the illusion that "it's only configuration."

Across all of them, the pattern holds: the vendor core stays calm while the custom layer starts a rebellion.

How to test for customisation-driven defects

Testing needs to move to where the risk actually lives: the stitched-on logic, not the vendor baseline.

Build a customisation risk map. Catalogue every custom element and score it by complexity, criticality and likelihood of causing problems. Prioritise accordingly.
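
In practice the risk map can start as a spreadsheet, or as a few lines of code. Here's a minimal sketch in Python, where the component names, the 1-to-5 scales and the multiplicative score are illustrative assumptions rather than a prescribed model:

```python
# Minimal customisation risk map: score each custom element and rank it.
# Scales (1 = low, 5 = high) and the scoring weights are illustrative.

def risk_score(complexity, criticality, likelihood):
    """Multiplicative score: a change that is complex, business-critical
    and historically flaky floats to the top of the test backlog."""
    return complexity * criticality * likelihood

custom_estate = [
    {"name": "legacy-pricing-rules", "complexity": 5, "criticality": 5, "likelihood": 4},
    {"name": "crm-field-mappings",   "complexity": 3, "criticality": 4, "likelihood": 5},
    {"name": "ui-label-tweaks",      "complexity": 1, "criticality": 2, "likelihood": 2},
]

for item in custom_estate:
    item["risk"] = risk_score(item["complexity"], item["criticality"], item["likelihood"])

# Highest-risk customisations first: this ordering drives test priority.
for item in sorted(custom_estate, key=lambda i: i["risk"], reverse=True):
    print(f'{item["name"]:24} risk={item["risk"]}')
```

The multiplicative score is deliberately unforgiving: one high factor on its own doesn't matter much, but all three together is where the testing attention should go.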

Test the seams, not the centre. Most defects don't live in systems; they live between them.

Follow the data. Features don't break; mappings, transformations and reconciliations do.
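
A minimal data-following check compares what a custom mapping should have produced with what actually landed in the target. In this Python sketch, the field names, the pounds-to-pence conversion and the off-by-a-penny record are all invented for illustration:

```python
# Data-focused check: reconcile a source extract against the mapped target.
# Schemas and the mapping itself are illustrative assumptions.

def map_record(src):
    """The custom mapping under test: source schema -> platform schema."""
    return {
        "account_id": src["acct_no"],
        "balance_minor": round(src["balance"] * 100),  # pounds -> pence
    }

def reconcile(source_rows, target_rows):
    """Compare expected output of the mapping with what actually landed."""
    expected = {r["account_id"]: r for r in (map_record(s) for s in source_rows)}
    actual = {r["account_id"]: r for r in target_rows}
    missing = expected.keys() - actual.keys()
    mismatched = [k for k in expected.keys() & actual.keys() if expected[k] != actual[k]]
    return sorted(missing), sorted(mismatched)

source = [{"acct_no": "A1", "balance": 10.50}, {"acct_no": "A2", "balance": 99.99}]
target = [{"account_id": "A1", "balance_minor": 1050},
          {"account_id": "A2", "balance_minor": 9998}]  # off by a penny

missing, mismatched = reconcile(source, target)
print(missing, mismatched)
```

The point of the sketch is the shape of the test: assert on the reconciliation, not on the feature. The feature will pass; the penny will not.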

Make upgrade testing non-negotiable. Upgrades aren't side quests; they're boss battles that require proper preparation.

Automate where it hurts. Focus automation on the brittle areas, not just the easy stuff.

Publish a custom code defect density metric. This changes behaviour faster than any governance framework.
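
The metric itself is trivial to compute; the behavioural effect comes from publishing it. A sketch with invented counts (feed in your own defect-tracker and repository figures):

```python
# Custom code defect density: defects per 1,000 lines of code, by layer.
# All counts below are illustrative assumptions.

def defect_density(defects, loc):
    """Defects per KLOC for a given layer of the estate."""
    return defects / (loc / 1000)

layers = {
    "vendor-core":  {"defects": 4,  "loc": 800_000},
    "custom-layer": {"defects": 62, "loc": 90_000},
    "integrations": {"defects": 31, "loc": 45_000},
}

for name, stats in layers.items():
    density = defect_density(stats["defects"], stats["loc"])
    print(f"{name:14} {density:.2f} defects/KLOC")
```

When the custom layer's density sits two orders of magnitude above the vendor core's, the "it must be the vendor" conversation tends to end itself.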

Enforce ownership of custom logic. If it runs in production, someone must own it.

Treat custom work as first-class engineering. If it can break the platform, test it like the platform.

Own the creature

The vendor core is fine. It's your customisation footprint that's causing the drama.

Once you start seeing your estate honestly, the testing strategy becomes obvious: test the creature you built, not the stable engine underneath. Stop blaming the vendor, start owning the custom layer, and test the reality you created rather than the fiction everyone wishes were true.

Do that and the whole estate calms down. Upgrades stop feeling like Russian roulette, regression shrinks, incidents drop, and the creature finally behaves like it should.

Tristan Brown

Senior Strategic Consultant, Inspired Testing

Tristan, an ISTQB-certified expert, brings over 25 years of experience in testing and quality assurance, specialising in the financial services sector. He is known for transforming testing functions, optimising global teams, and driving innovation within organisations ranging from startups to FTSE-listed corporations. A hands-on leader, Tristan excels in mentoring talent, adopting agile methodologies, and fostering organisational change to enable scalable growth. His expertise spans rescuing failing programmes, implementing QA maturity models, and delivering tailored training plans. Tristan's strategic insight and technical leadership make him a trusted partner in achieving long-term success for businesses.