AI Testing Case Study | Wealth Management Automation

AI Testing and QA: Transforming Wealth Management Reporting with Intelligent Automation


The project aimed to evaluate, validate, and implement AI-driven solutions to automate the creation of client suitability reports. By enhancing speed, accuracy, and compliance, the initiative transformed manual financial reporting into an intelligent, scalable process. This advancement positioned the client as an AI-ready financial services leader, setting a foundation for broader innovation and digital transformation.

Company

Personalised, expert wealth management advisers

Industry

Financial Services

Location

United Kingdom

Solution

Strategic Test Consulting

Duration

3-month Proof of Concept followed by a 6-month pilot

Team

1 x Principal Consultant
2 x Test Engineers

Client Background

A leading UK-based wealth management and professional services firm, the organisation has over 180 years of heritage, 23 offices across the UK, Ireland and the Channel Islands and manages more than £64 billion in assets. The organisation provides integrated wealth, investment, tax, and advisory services to individuals, businesses, and charities. With a reputation for trusted financial expertise, the firm continuously explores digital and AI-driven innovations to enhance client experience and operational efficiency.

Challenge

The client sought to modernise its financial planning and suitability review process, a workflow that required financial planners and investment managers to spend 6–8 hours per client report, manually transcribing client meetings and compiling reports.

To reduce this overhead, the client wanted to implement a solution that would use AI to automate the creation of customer plans.

However, the client had no prior experience testing or validating AI systems, which made quality assurance for AI-generated reports challenging. They needed expert guidance to evaluate potential tools, mitigate AI-related risks such as hallucinations and inaccuracies, and establish measurable standards for AI performance.

Solution

Inspired Testing partnered with the client to deliver a three-month Proof of Concept (POC) evaluating two commercial AI-driven financial services tools designed to automate report generation. Each solution leveraged AI to extract and summarise information from video consultations, then prepopulate suitability reviews and client correspondence.

Testing focused on:

  • Measuring accuracy, recall, and F1 scores for AI outputs
  • Assessing hallucination frequency and data integrity
  • Comparing usability and feature performance between the two tools

Following the POC, a six-month pilot phase was launched using the platform that best met the client’s criteria. Inspired Testing applied its proprietary AI Testing Framework, enabling the client to validate and reuse structured metrics for ongoing AI implementations.

Testing then focused on assessing the AI tool against Key Performance Indicators (KPIs):

  • Field precision – the proportion of field values the tool extracts that are correct
  • Field recall – the proportion of the information in the meeting that the tool successfully captures
  • F1 score – the harmonic mean of field precision and field recall
  • Human parity – whether the tool’s output reads as though a human wrote it
  • Coreference accuracy – whether the tool correctly tracks who is speaking and when
  • Temporal accuracy – whether the tool correctly interprets timescales and dates
  • Import error rate – how reliably the tool pulls in all required information from other systems
  • Export error rate – how reliably the tool pushes all required information out to other systems
  • Manual clean-up effort – how much time a practitioner spends correcting the output
  • Time saved vs manual – the time saved by the AI process compared with the manual process
  • SUS score – the System Usability Scale score, measuring the overall ease of use of the system
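To make the field-level KPIs concrete, the sketch below shows one plausible way to score field precision, field recall, and F1 for a single report by comparing the tool’s extracted fields against a human-verified reference. The function, field names, and values are illustrative assumptions, not the client’s proprietary framework.

```python
# Illustrative sketch of the field-level KPIs (precision, recall, F1).
# All names and data here are hypothetical examples.

def field_metrics(expected: dict, extracted: dict) -> dict:
    """Score one report's extracted fields against a verified reference."""
    # True positives: fields the tool extracted with the correct value.
    tp = sum(1 for k, v in extracted.items() if expected.get(k) == v)
    # Precision: share of extracted values that are correct.
    precision = tp / len(extracted) if extracted else 0.0
    # Recall: share of expected values the tool captured correctly.
    recall = tp / len(expected) if expected else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the tool captured 3 of 4 expected fields, one with a wrong value.
expected = {"client_name": "A. Smith", "risk_profile": "Balanced",
            "review_date": "2024-03-12", "objective": "Retirement income"}
extracted = {"client_name": "A. Smith", "risk_profile": "Cautious",
             "review_date": "2024-03-12"}

print(field_metrics(expected, extracted))
```

Scoring per field rather than per report is what makes the metrics reusable: the same function can aggregate across many reports to produce the benchmark figures tracked during the pilot.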

These KPIs were implemented at the client using lightweight, reusable, user-driven questionnaires alongside independent testing by the project team. The results gave the client and the platform provider not only assurance in the approach but also the transparency needed to improve the platform’s outputs.

Results Before Improvement

  • Manual, time-intensive report creation averaging 6–8 hours, depending on the experience of the financial planner or investment manager
  • No AI testing framework or methodology in place
  • Low confidence in evaluating AI-driven outputs
  • Inconsistent accuracy and formatting in financial reports

Results After Improvement

  • Automated reporting reduced preparation time to roughly 30 minutes per client, saving approximately a month of work per year
  • Established repeatable AI quality benchmarks (accuracy, recall, F1 scoring)
  • Developed an AI testing framework adaptable to future projects
  • Significantly improved report consistency and reduced review cycles

Business Impact

  • Transitioned from a manual process to an AI-enabled wealth management workflow, positioning the client as an AI-ready financial services leader.
  • Reduced operational effort and cost through automation of manual processes
  • Enhanced accuracy, consistency, and compliance across suitability reports
  • Accelerated innovation by enabling safe, scalable AI adoption within the organisation

Why Inspired Testing

  • Deep expertise in AI and non-deterministic software testing
  • Proven frameworks for measurable AI quality assurance
  • Strong understanding of financial services operations and compliance needs
  • Delivered a scalable, reusable testing approach that supports ongoing AI innovation