Why Vendor Platforms Fail After Customisation

20 Jan 2026
Tristan Brown is a Senior Solutions Consultant at Inspired Testing

The Vendor Gave You a Product. You Gave It Wings and Teeth.

Why FS Platforms Fail After Customisation and What To Do About It!

There’s a long-standing tradition in IT: when something breaks, blame the vendor. It’s practically muscle memory at this point. A screen hangs, a workflow sulks, a report refuses to load and somebody mutters “vendor bug” before anyone even checks the logs.

But the awkward truth is this: the vendor core is usually the most predictable, least offensive and frankly most boring part of your estate. Modern FS platforms are hardened, regulated, endlessly regression-tested, continuously improved and engineered by teams whose full-time job is to prevent drama.

The chaos doesn’t come from the core.

The chaos comes from everything organisations add to it.

This paper explores that reality with equal parts honesty and humour, and outlines exactly how to test the creature you’ve built, not the platform you bought.

The Vendor Core Isn’t the Monster

People love painting the vendor as the villain. It’s convenient, tidy and helps everyone avoid looking too closely at their own estate. But the idea that the vendor core is the unstable part simply doesn’t hold up anymore.

Modern FS platforms have been engineered within an inch of their lives. Vendors run regression farms that execute more tests in a single night than some internal teams run in a quarter. Static analysis tools yell at developers long before a human reviews the code. Regulators poke and prod the architecture until even the edge cases behave themselves.

The core is dull. And dull is good. Dull is predictable. Dull doesn’t accidentally corrupt your billing file at 2am.

The real trouble starts when the clean, polished product arrives in your organisation… and people get “creative”.

  • A small rule tweak to match an old process someone vaguely remembers.
  • Glue code because the upstream system is allergic to modern standards.
  • A workflow clone from the legacy system that everyone swears is temporary.
  • A UI patch because someone didn’t like the wording on a button.
  • A data object added to avoid fixing a downstream report.

Each change looks harmless on its own. But stitched together over time, the platform stops looking like the vendor’s product and starts looking like something built in a dimly lit laboratory with questionable wiring choices.

The vendor gives you a clean, stable body.

Your team adds wings.

And teeth.

And occasionally a tail.

And when it finally lurches into Production and knocks over an integration or two, the reaction is always:

“It must be the vendor.”

It rarely is.

The monster isn’t the core.

The monster is the collection of stitched-on limbs you added yourselves.

The Customisation Layer: Where the Bodies Are Buried

This is the attic of your estate: the place where teams store things they don’t want to deal with and then act surprised when something growls back.

This layer is full of:

  • business rules copied from systems retired years ago
  • glue code patched repeatedly like a cyclist’s tyre
  • logic written “temporarily” three years ago
  • data models bent out of shape to protect legacy reports
  • UI tweaks nobody documents
  • integration mappings that have drifted quietly for a decade
  • workflows that survived four restructures and now behave like folklore

It’s the wilderness.

The part nobody owns.

The part nobody remembers designing.

And, inevitably, the part where 70–80 percent of your defects originate.

In FS especially, this layer can turn an elegant vendor platform into a sprawling, stitched-together ecosystem where no one is entirely sure what talks to what, why it talks to it, or what happens if it stops.

The vendor core stays calm.

The edges, your edges, behave like a house party that’s getting out of hand.

Why Teams Still Treat Custom Code As “Safe” (When It Absolutely Isn’t)

Despite all evidence to the contrary, teams often act as if the custom layer is the least risky part of the platform. Familiarity is mistaken for stability. It’s the software equivalent of someone insisting their homemade chilli “isn’t that spicy” while everyone else coughs into their sleeves.

Here are the myths that keep the chaos alive:

“It’s only a small customisation.”

Right. And tiny screws have sunk entire ships.

There is no such thing as a small customisation. There are only customisations whose consequences haven’t yet introduced themselves.

“We copied it from the old system, so it’s proven.”

Sure. Proven to have caused problems in the old system too.

Bad logic doesn’t improve with age. It just finds new places to break.

“It passed UAT before, so it must be stable.”

UAT proves one thing and one thing only:

Someone can complete a happy path without crying.

That’s it.

“The vendor must have broken something.”

This usually translates to:

“We didn’t regression test our changes properly and now we’d like someone else to take the blame.”

The logs usually disagree.

The vendor’s architects disagree.

Reality disagrees.

“It should just work.”

A lovely sentiment.

Also the opening line of 90 percent of root cause analyses.

Custom code doesn’t “just work”.

It works exactly as written, not as intended.

The Defect Snowball: How Custom Code Turns Small Issues Into Organisational Chaos

The defect snowball doesn’t start with a bang. It starts with something tiny: a mapping mistake, a rule that doesn’t quite align, a workaround that quietly rots.

Then it spreads.

Regression cycles expand.

Confidence drops. Teams widen the scope “just to be safe.”

Regression becomes an endurance event.

Upgrades become a rescue mission.

Vendor releases, which should be routine, become high-stakes operations involving snacks, prayer and late-night debugging of logic written by someone who left the company two years ago.

Integrations start acting like estranged relatives.

They don’t communicate well.

They don’t agree on anything.

Someone always ends up offended.

Incidents multiply.

A small bad mapping becomes:

wrong data > failed workflow > broken integration > corrupted payload > a service desk drowning in alerts.

SLAs wobble. Dashboards redden. Executives panic.

And everyone repeats the ritual phrase:

“Why is the vendor platform so unstable?”

Because the custom layer is quietly fraying at the edges.

And nobody wants to admit it.

Where This Hits Hardest: Platforms That Suffer When Clients Get Creative

Some platforms attract customisation the way a cake attracts wasps. The more “flexible” they are, the more they get contorted.

Bank-in-a-box platforms (Temenos, Mambu, nCino, Thought Machine)

Arrive clean and modular.

Leave covered in legacy rules, bespoke products, multi-step approvals and logic nobody recognises.

Investment platforms (FNZ, Avaloq, Bravura)

Stable engines… until someone adds a custom pricing script, a half-tested rule engine and bespoke tax logic that melts during upgrades.

CRM ecosystems (Salesforce FS Cloud, Dynamics)

Heavily customised under the illusion that “it’s only configuration.”

Fields multiply, flows tangle, integrations sulk.

Billing and utility platforms (SAP IS-U, Oracle Utilities)

One “extra rule” becomes a multi-system meltdown across billing, debt, metering and reporting.

Across all of them, the pattern holds:

The vendor core stays calm.

The custom layer starts a small rebellion.

How To Test for Customisation-Driven Defects

This is where sanity returns. Testing needs to move to where the risk actually lives: the stitched-on logic, not the vendor baseline.

Build a customisation risk map.

Catalogue every custom element.

Score it by complexity, criticality and likelihood of ruining someone’s weekend.

Prioritise accordingly.
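To make that concrete, here is a minimal sketch (in Python) of what the risk map might look like once the catalogue exists. The component names, the 1-to-5 scales and the simple multiplicative score are illustrative assumptions, not a prescribed model.

```python
# Illustrative customisation risk map: catalogue each custom element,
# score it, and sort so testing effort follows the danger.
# Components, 1-5 scales and weighting are assumptions, not a standard.

custom_elements = [
    # (name,                        complexity, criticality, change_frequency)
    ("Bespoke pricing script",               4,           5,                3),
    ("Legacy workflow clone",                5,           4,                2),
    ("Upstream glue code",                   3,           4,                5),
    ("Custom billing data object",           4,           5,                4),
    ("UI wording patch",                     1,           1,                1),
]

def risk_score(complexity: int, criticality: int, change_frequency: int) -> int:
    """Simple multiplicative score: complex, critical, frequently-changed logic rises to the top."""
    return complexity * criticality * change_frequency

risk_map = sorted(
    ((name, risk_score(c, crit, freq)) for name, c, crit, freq in custom_elements),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in risk_map:
    print(f"{score:>4}  {name}")
```

However you choose to weight it, the value is in the ordering: testing effort works down the list from the top.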

Test the seams, not the centre.

Most defects don’t live in systems.

They live between them.
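Seams are concrete enough to test directly. Below is a sketch of a contract-style check on a hypothetical piece of glue code, map_policy_to_crm, sitting between the vendor platform and a downstream CRM; the function and field names are stand-ins for whatever mapping logic you actually run.

```python
# Contract-style check on a seam: the custom mapping between the vendor
# platform and a downstream system. map_policy_to_crm and the field names
# are hypothetical stand-ins for your own glue code.

def map_policy_to_crm(policy: dict) -> dict:
    """Example of the kind of mapping layer that quietly drifts over time."""
    return {
        "accountId": policy["policy_id"],
        "holderName": policy["holder"]["name"],
        "annualPremium": round(policy["premium_minor_units"] / 100, 2),
        "status": policy["status"].upper(),
    }

REQUIRED_CRM_FIELDS = {"accountId", "holderName", "annualPremium", "status"}

def test_mapping_preserves_required_fields():
    policy = {
        "policy_id": "POL-0042",
        "holder": {"name": "A. Customer"},
        "premium_minor_units": 123456,
        "status": "active",
    }
    crm_record = map_policy_to_crm(policy)

    # The seam contract: every field the downstream system needs is present and sane.
    assert REQUIRED_CRM_FIELDS <= crm_record.keys()
    assert crm_record["annualPremium"] == 1234.56
    assert crm_record["status"] == "ACTIVE"
```

Run checks like this on every build of the custom layer and mapping drift surfaces long before it reaches Production.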

Follow the data.

Features don’t break.

Mappings, transformations, reconciliations and enrichment do.
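That points towards reconciliation-style checks rather than feature tests. The sketch below compares a source extract with what actually arrived downstream and reports the breaks; the record shapes, field names and tolerance are illustrative assumptions.

```python
# Reconciliation sketch: instead of asking "does the feature work?",
# ask "did every record survive the mapping, and do the amounts still agree?"
# Record shapes, field names and tolerance are illustrative assumptions.

from decimal import Decimal

def reconcile(source_rows, target_rows, key="id", amount_field="amount",
              tolerance=Decimal("0.01")):
    """Return human-readable breaks between a source extract and its downstream copy."""
    source_by_key = {row[key]: row for row in source_rows}
    target_by_key = {row[key]: row for row in target_rows}
    breaks = []

    missing = source_by_key.keys() - target_by_key.keys()
    if missing:
        breaks.append(f"{len(missing)} record(s) never arrived downstream: {sorted(missing)}")

    for k in source_by_key.keys() & target_by_key.keys():
        diff = abs(Decimal(str(source_by_key[k][amount_field]))
                   - Decimal(str(target_by_key[k][amount_field])))
        if diff > tolerance:
            breaks.append(f"{k}: amounts differ by {diff}")

    return breaks

# Example run: one record dropped, one amount altered by a bad enrichment.
source = [{"id": "INV-1", "amount": "100.00"}, {"id": "INV-2", "amount": "250.00"}]
target = [{"id": "INV-1", "amount": "100.10"}]
print(reconcile(source, target))
```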

Make upgrade testing non-negotiable.

Upgrades aren’t side quests.

They’re boss battles.

Automate where it hurts.

Automating the easy stuff is ornamental.

Automate the brittle areas instead.

Publish a “custom code defect density” metric.

This changes behaviour faster than any governance framework.
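The metric itself can be deliberately simple. The sketch below assumes defects are tagged with an origin (vendor core versus a named custom component) at triage; the categories and counts are invented for the example.

```python
# Illustrative "custom code defect density" report: defects raised in a period,
# attributed at triage to the vendor core or to a named custom component.
# The categories and counts below are invented for the example.

defects_by_origin = {
    "vendor core": 4,
    "custom: bespoke pricing script": 17,
    "custom: legacy workflow clone": 11,
    "custom: upstream glue code": 9,
}

total = sum(defects_by_origin.values())
custom = sum(count for origin, count in defects_by_origin.items()
             if origin.startswith("custom:"))

print(f"Defects this quarter: {total}")
print(f"Attributed to custom logic: {custom} ({custom / total:.0%})")
for origin, count in sorted(defects_by_origin.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {count:>3}  {origin}")
```

Once that split is visible on a dashboard, the “it must be the vendor” conversation tends to resolve itself.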

Enforce ownership of custom logic.

If it runs in Production, someone owns it.

Treat custom work as first-class engineering.

Not “just config”.

Not “just a tweak”.

If it can break the platform, test it like the platform.

Close: Own the Creature

The vendor core is fine.

It’s your customisation footprint that’s causing the drama.

Once you start seeing your estate honestly, wings, teeth, tail and all, the testing strategy becomes obvious: test the creature you built, not the stable engine underneath.

Stop blaming the vendor.

Start owning the custom layer.

And test the reality you created, not the fiction everyone wishes were true.

Do that and the whole estate calms down.

Upgrades stop feeling like Russian roulette.

Regression shrinks.

Incidents drop.

And the creature finally behaves.

Tristan Brown

Senior Strategic Consultant, Inspired Testing

Tristan, an ISTQB-certified expert, brings over 25 years of experience in Testing and Quality Assurance, specialising in the Financial Services sector. He is known for transforming testing functions, optimising global teams, and driving innovation within organisations ranging from startups to FTSE corporations. A hands-on leader, Tristan excels in mentoring talent, adopting agile methodologies, and fostering organisational change to enable scalable growth. His expertise spans rescuing failing programmes, implementing QA maturity models, and delivering tailored training plans. Tristan's strategic insights and technical leadership ensure exceptional results, making him a trusted partner in achieving long-term success for businesses.
