AI Prompt Guide for Testers

01 Dec 2025

Artificial Intelligence is rapidly transforming the way quality assurance is delivered. At Inspired Testing, we see AI not as a replacement for human judgment, but as a powerful accelerator of it.

This guide has been designed for testers who are using general AI tools — such as ChatGPT, Gemini, or Claude — rather than VeloAI. Its purpose is to help you harness AI responsibly, effectively and in alignment with Inspired Testing’s values of technical excellence, integrity and quality.

Through structured prompts and practical examples, this guide will show you how to:

  • Analyse requirements more thoroughly
  • Generate comprehensive and risk-based test cases
  • Produce cleaner automation code and data sets faster
  • Communicate results and insights more clearly
  • Maintain full compliance with ISO 27001 and ethical AI standards

Each module has been developed to mirror the way Inspired Testing consultants work — combining deep testing expertise with practical efficiency. Every example prompt is designed to make AI collaboration part of your daily testing workflow, whether you’re working on web, mobile, performance, or enterprise systems.

Remember: AI is only as valuable as the precision and context of your prompts. The more specific, structured and ethical your inputs are, the better the outputs will be. Use this guide as both a training resource and a live reference in your projects.

Our goal: to enable every Inspired Testing professional to work smarter, faster and more effectively — with AI as a trusted partner in quality.

Module 1 – Getting Started & Prompt Engineering for Testers

Purpose

This module introduces testers at Inspired Testing to the principles of effective AI-assisted prompting. It is designed for those using general AI tools — such as ChatGPT, Claude, or Gemini — instead of VeloAI. It explains how to design, refine and structure prompts that help testers analyse requirements, generate test ideas and document findings while upholding the company’s ISO 27001-aligned data-handling standards.


1.1 The Role of AI in Testing

AI does not replace professional judgement.
It enhances productivity by:

  • Generating structured test ideas in seconds
  • Supporting rapid analysis of ambiguous requirements
  • Assisting with automation or defect documentation
  • Producing summaries and stakeholder-ready explanations

AI accelerates your thinking — it doesn’t replace it.


1.2 Responsible Use at Inspired Testing

All testers must comply with Inspired Testing’s ISO 27001 policy:

  • Never paste client data, code, or credentials into public AI tools.
  • Use synthetic or anonymised examples.
  • Treat every AI output as a draft to be verified.
  • Maintain human review and accountability for deliverables.

1.3 Anatomy of a Strong Prompt

| Element | Description | Example |
|---|---|---|
| Role | Define who the AI acts as | “You are a senior QA consultant specialising in API testing.” |
| Context | Supply feature or story details | “You are testing a login API with token-based authentication.” |
| Task | State what is needed | “List positive, negative and boundary test cases.” |
| Constraints | Impose limits or structure | “Output as a Markdown table (ID, Input, Expected Result).” |

1.4 Weak vs Strong Prompts

| Weak Prompt | Strong Prompt |
|---|---|
| “Write test cases for login.” | “You are a QA analyst validating a banking app’s login API. Generate 10 test cases covering positive, negative and boundary conditions. Include inputs, expected outputs and validation notes in a table.” |
| “Explain this bug.” | “You are a QA consultant preparing a defect summary. Rewrite this description for a client stakeholder; include business impact and probable root cause.” |

1.5 Prompting Techniques

A. Role-Based Prompting

You are a test lead at Inspired Testing reviewing user stories.

Identify five ambiguities and suggest clarifying questions for the business analyst.

B. Context-Rich Prompting

You are validating the requirement:

‘System must support password resets via email.’

List functional, negative and edge test scenarios.

C. Structured Output Prompting

Provide output as a Markdown table with columns:

Scenario ID | Input | Expected Result | Type (Positive/Negative)

D. Iterative Refinement

1. Start broad → “List test cases for a login page.”
2. Narrow → “Now focus on session-handling negatives.”
3. Format → “Convert the final set to Gherkin scenarios.”


1.6 Advanced Prompt Patterns

Chain of Thought

You are a QA specialist preparing for exploratory testing.

Step 1 – Identify primary user actions.

Step 2 – Derive test heuristics.

Step 3 – List edge cases grouped by risk level.

Constraint-Driven

Generate exactly 8 test scenarios for an online checkout:

4 positive + 4 negative, each under 25 words.

Few-Shot

Provide an example → the AI learns your format:

Example:

| ID | Scenario | Expected |
|----|-----------|-----------|
| TC01 | Valid email & password | Redirect to dashboard |

Now create 10 more in the same format.


1.7 Refining AI Output

Use follow-up prompts to evolve results:

  • “Expand this to include boundary cases.”
  • “Focus on security testing aspects.”
  • “Summarise for management audience.”
  • “Add a column for test-data requirements.”

1.8 Inspired Testing QA Prompt Framework

ROLE: [QA lead | Test analyst | Automation engineer]

CONTEXT: [Feature, API, requirement]

TASK: [Generate test cases | Write defect | Summarise report]

FORMAT: [Table | Gherkin | JSON | Summary]

CONSTRAINTS: [# of items, tone, length, focus]

Example

ROLE: Senior QA consultant at Inspired Testing

CONTEXT: Password-reset workflow for a banking app

TASK: Create 12 functional and negative test scenarios

FORMAT: Markdown table (ID, Scenario, Expected Result, Notes)

CONSTRAINTS: One scenario per step; note dependencies
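The five framework fields can also be assembled programmatically, which keeps prompts consistent across a team. A minimal Python sketch (the function name and defaults are ours, not part of the framework itself):

```python
# Minimal sketch: assemble a prompt from the Inspired Testing QA Prompt
# Framework fields. Field names follow the framework above; the example
# values are illustrative.
def build_prompt(role: str, context: str, task: str,
                 output_format: str, constraints: str) -> str:
    """Combine the five framework fields into a single prompt string."""
    return "\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"FORMAT: {output_format}",
        f"CONSTRAINTS: {constraints}",
    ])

prompt = build_prompt(
    role="Senior QA consultant at Inspired Testing",
    context="Password-reset workflow for a banking app",
    task="Create 12 functional and negative test scenarios",
    output_format="Markdown table (ID, Scenario, Expected Result, Notes)",
    constraints="One scenario per step; note dependencies",
)
print(prompt)
```

Storing such a helper alongside your project notes makes prompt reuse and review straightforward.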


1.9 Common Prompt Structures

| Goal | Prompt |
|---|---|
| Clarify requirements | “List ambiguous phrases and suggest clarifying questions.” |
| Generate test ideas | “List 15 exploratory ideas for a file-upload feature grouped by heuristic.” |
| Help with automation | “Convert this manual test into Playwright code with assertions.” |
| Defect writing | “Rewrite this issue clearly for the client’s defect tracker.” |
| Report summary | “Summarise this QA report for executives focusing on risk and trends.” |

1.10 Do’s and Don’ts

| Do | Don't |
|---|---|
| Use AI for brainstorming and drafting. | Paste client data or confidential artefacts. |
| Verify every output yourself. | Assume AI answers are factually accurate. |
| Use structured prompts for consistency. | Request opinions on clients or people. |
| Combine AI and human analysis. | Depend solely on AI judgement. |

1.11 Example Workflow

Prompt 1: List functional and boundary test cases for registration form.

Prompt 2: Add negative cases for missing mandatory fields.

Prompt 3: Convert to Gherkin format.

Prompt 4: Highlight top 3 high-risk scenarios.


1.12 Key Takeaways

  • Clear intent + structure + iteration = useful results.
  • Protect client data and confidentiality.
  • AI enhances your testing intelligence — it doesn’t replace it.
  • Review and refine everything you generate.

Module 2 – Requirements Analysis & Test Planning

Purpose

This module helps testers use AI tools to interpret requirements, expose ambiguities and create test-planning artefacts early in the lifecycle.

At Inspired Testing, QA consultants engage early with business analysts and developers — AI can act as a “junior analyst” that assists, not replaces, the human reviewer.


2.1 Analysing Requirements

Goal

Identify unclear or incomplete parts of a requirement and propose questions or acceptance criteria.

Example Requirement

“When users add products to the basket, the system should automatically calculate discounts and display updated totals in real time.”

Prompt Examples

You are a senior QA consultant reviewing an e-commerce user story.

Story: "When users add products to the basket, the system should automatically calculate discounts and display updated totals in real time."

Identify potential ambiguities or missing acceptance criteria.

Output as 'Observation → Clarification Needed' pairs.

Break the requirement into atomic, testable statements.

Classify each as Functional, Non-Functional, or UI-related.

List five clarification questions a QA would raise about discount logic, triggers, or thresholds.

Expected Output

A structured list of ambiguous phrases, missing rules (e.g., discount types, rounding behaviour) and a set of clarifying questions to raise in refinement.


2.2 Identifying Risks & Dependencies

Example Context

“The system must support both web and mobile checkouts using a shared API.”

Prompt Examples

You are preparing a test strategy.

Requirement: "The system must support both web and mobile checkouts using a shared API."

List functional, integration and performance risks that may impact testing.

Output a 2-column table: Risk | Mitigation / Test Consideration.

Identify dependent components and services that require validation

(front-end frameworks, API versions, database schema, external payment gateway).

Expected Output

A concise risk matrix highlighting concurrency, data consistency, API versioning and environment-specific performance issues.


2.3 Generating Acceptance Criteria

Example Requirement

“A user can reset their password via email if they forget it.”

Prompt Examples

Act as a QA consultant helping to refine acceptance criteria.

Convert this requirement into Gherkin-style Given/When/Then scenarios.

Include success and failure paths.

List acceptance criteria for password-reset:

  • Valid email flow
  • Invalid or expired link
  • Rate-limit or throttling case

Expected Output

Scenario: Successful reset

Given a registered user requests a reset

When they click a valid link within 15 minutes

Then the system prompts for a new password and confirms change

Scenario: Expired link

Given a link older than 15 minutes

When the user opens it

Then show “Link expired” and offer resend


2.4 Coverage & Traceability

AI can help build simple traceability matrices ensuring all acceptance criteria are test-covered.

Prompt Examples

Create a traceability matrix mapping requirements → criteria → test ideas.

Columns: Req ID | Acceptance Criteria | Test Ideas | Notes.

Perform a gap analysis: compare these criteria with the parent requirement and highlight missing conditions.


2.5 Estimating Scope & Effort

Estimate QA effort for the password-reset feature.

Classify effort as Low / Medium / High and explain reasoning in three points.

Group test cases into Manual / Automated / Exploratory.

Suggest which to automate first based on stability and ROI.


2.6 Scenario Generation

Example Requirement

“The loan calculator should return results instantly and support values from £1 000 to £50 000.”

Prompt Examples

Design functional, boundary and negative tests for the loan calculator.

Include valid ranges, input errors and performance aspects.

Return a table: ID | Scenario | Input | Expected Result | Type.

Extend the list to include security and usability considerations.

Expected Output

| ID | Scenario | Input | Expected Result | Type |
|---|---|---|---|---|
| TC01 | Valid loan amount | £10 000 | Result returns instantly | Functional |
| TC02 | Below minimum | £999 | Error “Min £1 000” | Negative |
| TC03 | Upper boundary | £50 000 | Result ≤ 2 s | Boundary |
| TC04 | Invalid chars | “abc” | Validation error | Negative |

2.7 Story Refinement & Sprint Preparation

Summarise each backlog story in one sentence and note key QA focus.

Output a 3-column table: Story | Summary | QA Focus.

Review this backlog item and list unclear criteria or technical dependencies to raise during refinement.


2.8 Drafting Mini Test Strategies

Create a short test strategy for an insurance-quote system.

Sections: Objectives, Scope (in/out), Key Risks, Environments.

Format as Markdown headers.

List main QA deliverables and entry/exit criteria from the given requirements.


2.9 Key Takeaways

  • Engage AI early – before sprint start.
  • Treat AI as a structured reviewer.
  • Verify all acceptance criteria with stakeholders.
  • Maintain visible traceability between requirements and test artefacts.

Module 3 – Test Design & Test Data Generation

Purpose

This module teaches testers how to use AI tools to transform analysed requirements into robust, risk-based test cases and realistic test data.

At Inspired Testing, structured test design is at the heart of quality assurance — AI helps generate breadth quickly while leaving depth and prioritisation to human expertise.


3.1 From Requirement to Test Case

Goal: Convert clarified requirements into complete, traceable test cases.

Prompt Examples

You are a QA analyst designing tests for a loan-application workflow.

Create positive, negative and boundary scenarios covering input validation, calculations and error messages.

Return as a Markdown table with: ID | Scenario | Pre-conditions | Steps | Expected Result.

List at least 10 edge cases often missed in financial-form validation and mark which are High, Medium, or Low risk.

Translate the following acceptance criteria into full test cases using Given/When/Then format.


3.2 Functional Test Design

Generate test cases for an online checkout process including:

  • Cart updates
  • Payment gateway failures
  • Voucher / discount application
  • Timeout / session expiry

Output as a concise table grouped by test type (Functional | Integration | Error Handling).

You are testing an insurance-quote API.

Produce functional and error scenarios for the endpoint /quote.

Include boundary values, invalid data and authentication checks.


3.3 Exploratory and Heuristic Testing

AI can assist in creating test charters or exploratory checklists.

Prompt Examples

You are planning a 60-minute exploratory session on a new “File Upload” feature.

Generate a list of test ideas grouped by heuristic (Data, Stress, Usability, Error Handling, Security).

Suggest 10 exploratory ideas focusing on accessibility, localisation and browser-compatibility risks.


3.4 Negative and Boundary Testing

Create boundary-value test cases for an age-input field that accepts 18–65 inclusive.

Include just-inside, just-outside and invalid values.

Generate equivalence-partition test sets for a password field with the following rules:

  • 8–16 characters
  • At least 1 uppercase, 1 number, 1 symbol
  • No spaces
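Both of these techniques can be expressed as simple, verifiable oracles before any AI is involved. A Python sketch (the helper names are ours, for illustration):

```python
import re

def boundary_values(minimum: int, maximum: int) -> dict:
    """Classic boundary-value analysis: on-boundary, just-inside and
    just-outside values for an inclusive numeric range."""
    return {
        "valid": [minimum, minimum + 1, maximum - 1, maximum],
        "invalid": [minimum - 1, maximum + 1],
    }

# Age field accepting 18-65 inclusive
assert boundary_values(18, 65) == {"valid": [18, 19, 64, 65],
                                   "invalid": [17, 66]}

def is_valid_password(pw: str) -> bool:
    """Oracle for the password rules above: 8-16 characters, at least
    one uppercase letter, one digit, one symbol, and no spaces."""
    return (8 <= len(pw) <= 16
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9\s]", pw) is not None
            and " " not in pw)

assert is_valid_password("Passw0rd!")           # all rules satisfied
assert not is_valid_password("Short1!")         # only 7 characters
assert not is_valid_password("Pass word 123!")  # contains a space
```

Comparing AI-generated partitions against an oracle like this is a quick way to catch hallucinated cases.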

3.5 Data-Driven Testing Prompts

AI can automate data-set generation for both functional and non-functional testing.

Prompt Examples

Generate 25 rows of synthetic test data for an insurance-policy form:

Columns: PolicyID, CustomerName, Age, PremiumAmount, StartDate, EndDate.

Include edge cases like future start dates or negative premiums.

Produce JSON payloads for the “CreateUser” API including:

  • Valid user
  • Duplicate email
  • Missing mandatory field
  • SQL-injection attempt

Provide CSV-ready data for load testing a login endpoint (username | password | expected status).
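For the “CreateUser” prompt above, it helps to know what shape of output to expect so you can sanity-check it. A sketch of such payload variants (field names and values are illustrative, not a real API contract):

```python
import json

# Illustrative "CreateUser" payload variants covering the cases above.
# Field names are assumptions for the example, not a real API contract.
valid_user = {"email": "jane.doe@example.com", "name": "Jane Doe", "age": 34}

payloads = [
    {"case": "valid user", "body": valid_user},
    {"case": "duplicate email", "body": {**valid_user, "name": "J. Doe"}},
    {"case": "missing mandatory field",
     "body": {k: v for k, v in valid_user.items() if k != "email"}},
    {"case": "SQL-injection attempt",
     "body": {**valid_user, "name": "Robert'); DROP TABLE users;--"}},
]

print(json.dumps(payloads, indent=2))
```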


3.6 Domain-Specific Examples

Banking / Finance

Design test scenarios for fund transfer between accounts covering limits, currency differences and authorisation failures.

Retail / E-Commerce

Generate 15 test cases for coupon redemption combining product categories, cart values and expiry rules.

Healthcare

Create data combinations for patient registration with required fields, optional insurance details and validation of medical-record IDs.


3.7 Performance and Resilience Data

Generate 100 unique payloads to simulate simultaneous loan-applications.

Each payload must vary income, loan amount and duration.

Return as JSON array suitable for JMeter.

Suggest 10 performance-testing scenarios focusing on concurrency and degradation points for a REST API.
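The 100-payload request above can equally be met with a short script, which is often preferable for reproducibility. A minimal sketch (field names and value ranges are illustrative):

```python
import json
import random

random.seed(42)  # reproducible data set

# 100 unique loan-application payloads varying income, amount and duration.
payloads = [
    {
        "applicationId": i,
        "income": random.randrange(20_000, 120_000, 500),
        "loanAmount": random.randrange(1_000, 50_000, 250),
        "durationMonths": random.choice([12, 24, 36, 48, 60]),
    }
    for i in range(1, 101)
]

# JSON array suitable for feeding into a JMeter data source or HTTP body
jmeter_json = json.dumps(payloads, indent=2)
print(len(payloads), "payloads generated")
```

Seeding the random generator means a failing run can be replayed with identical data.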


3.8 Test Case Optimisation and Prioritisation

Given a list of 40 test cases, ask the AI to group by risk and recommend top 10 for smoke testing.

Classify tests into must-run vs nice-to-have based on customer impact and defect probability.

Output as table: ID | Scenario | Priority | Rationale.


3.9 Output Formatting and Traceability

Encourage structured outputs that can plug into management tools.

Format your test cases for export to Azure DevOps or TestRail.

Columns: Title | Preconditions | Steps | Expected Result | Priority.

Generate a Requirement-to-Test mapping document showing IDs and relationships.


3.10 Key Takeaways

  • AI accelerates coverage creation but does not replace design judgement.
  • Focus prompts on inputs → outputs → risk.
  • Always validate generated data before use.
  • Maintain traceability between requirements, test cases and test data.

Module 4 – Test Automation & AI-Assisted Scripting

Purpose

This module demonstrates how testers can use AI tools to accelerate automation, review scripts and optimise frameworks while maintaining security and code quality.

At Inspired Testing, automation is not a goal in itself — it is a multiplier of human insight. AI can help generate scaffolding and refactor code, but testers remain accountable for design, maintainability and robustness.


4.1 Generating Automation Code Safely

AI can rapidly create code snippets for common automation frameworks such as Cypress, Playwright, Selenium, REST Assured, or Postman.

However, testers must always review generated code for accuracy, data safety and compliance with client standards.

Prompt Examples

You are a QA automation engineer using Playwright with TypeScript.

Generate a test script for the login page that:

  • Opens the login URL
  • Enters credentials
  • Verifies redirect to the dashboard

Include clear comments.

Write a Selenium-Java test for verifying error messages on invalid email input.

Add WebDriver setup and teardown.

Convert this manual test into a Cypress script.

Manual steps:

  1. Navigate to the cart
  2. Apply a voucher code
  3. Verify discount is applied

Expected Output

A complete, commented test skeleton ready for review and refactoring.


4.2 Improving and Refactoring Automation Code

Review the following Selenium test code and suggest:

  • Simpler locators
  • Better wait strategies
  • Improved assertions

Optimise this Cypress test for readability and reduce duplication using page objects.

Explain how to parameterise this test to support multiple browsers.

Use AI as a code reviewer to improve style consistency, readability and maintainability.


4.3 Framework Design Assistance

AI can generate scaffolding for framework layers, reducing repetitive setup work.

Prompt Examples

Design a basic page-object model structure for a React web app using Playwright.

List recommended folder names and file organisation.

Suggest a naming convention for test classes and methods in a Java-TestNG framework.

Generate a template for logging and reporting integration using Allure.


4.4 API Automation

You are testing a bank transfer API.

Generate Postman or REST Assured requests for the following scenarios:

  • Valid transfer
  • Invalid account number
  • Insufficient balance
  • Unauthorised token

Produce a JSON schema validation snippet for the /transfer endpoint response.

Write a Python requests-based API test that verifies HTTP status codes and response fields.
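A hedged sketch of that requests-based pattern. The endpoint and field names are placeholders; the validation helper itself works on any parsed JSON response, so it runs without network access:

```python
# Sketch of a requests-style API check. The endpoint URL and field names
# are placeholders; validate_response() needs no network access.
def validate_response(status_code: int, body: dict,
                      expected_status: int, required_fields: list) -> list:
    """Return a list of human-readable failures (empty list == pass)."""
    failures = []
    if status_code != expected_status:
        failures.append(f"expected HTTP {expected_status}, got {status_code}")
    for field in required_fields:
        if field not in body:
            failures.append(f"missing field: {field}")
    return failures

# Example with a stubbed response. In a real test these two values would
# come from something like:
#   resp = requests.get("https://api.example.com/transfer/123")
#   stub_status, stub_body = resp.status_code, resp.json()
stub_status, stub_body = 200, {"transferId": "T-1", "status": "COMPLETED"}
failures = validate_response(stub_status, stub_body,
                             expected_status=200,
                             required_fields=["transferId", "status"])
assert failures == [], failures
```

Collecting failures rather than asserting one at a time gives a fuller defect report per response.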


4.5 Data-Driven and Parameterised Tests

Demonstrate how to parameterise Cypress tests using external CSV data.

Include one positive and one negative example.

Generate JUnit test data parameterisation for login combinations.

Fields: username, password, expectedStatus.

Create a data-driven test for a search API that reads queries from a CSV file and verifies response time < 2 s.
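The CSV-driven pattern in plain Python, using only the standard library. The data set and the stand-in login function are illustrative so the example is self-contained:

```python
import csv
import io

# Illustrative login data set; in practice this would be an external .csv file.
CSV_DATA = """username,password,expectedStatus
alice,CorrectHorse1!,200
alice,wrong-password,401
,CorrectHorse1!,400
"""

def fake_login(username: str, password: str) -> int:
    """Stand-in for the system under test, so the example is runnable."""
    if not username:
        return 400
    return 200 if password == "CorrectHorse1!" else 401

results = []
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    actual = fake_login(row["username"], row["password"])
    results.append(actual == int(row["expectedStatus"]))

assert all(results), "data-driven login checks failed"
print(f"{len(results)} data-driven cases passed")
```

The same loop maps directly onto pytest parameterisation or JUnit data providers.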


4.6 Continuous Integration and Pipeline Support

You are integrating automated tests into an Azure DevOps pipeline.

Generate YAML steps to:

  • Install dependencies
  • Run Cypress tests
  • Publish Allure results

Create a GitHub Actions workflow for running Python API tests on every pull request.

Include test summary reporting step.

Summarise how to use AI to analyse failed pipeline logs and identify flaky tests.
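A GitHub Actions prompt like the one above will typically yield something in this shape. Treat it as a sketch: the action versions, paths and the third-party reporting action are assumptions to verify against current documentation before use.

```yaml
# Sketch of a GitHub Actions workflow for Python API tests on pull requests.
# Action versions, file paths and the reporter action are assumptions.
name: api-tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run API tests
        run: pytest tests/api --junitxml=results.xml
      - name: Publish test summary
        if: always()
        uses: dorny/test-reporter@v1
        with:
          name: API test results
          path: results.xml
          reporter: java-junit
```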


4.7 Maintenance and Self-Healing Assistance

Review this failed test log and suggest three possible causes for the timeout.

Recommend a more resilient wait condition.

Suggest how to implement self-healing locators using AI tools or dynamic selectors.

Analyse this suite’s execution times and recommend tests to run in parallel to reduce build duration by 30 %.


4.8 Cross-Browser and Device Testing

Generate Playwright configuration for running tests across Chrome, Edge and Firefox.

Create mobile-emulation scenarios for Chrome DevTools covering iPhone 14 and Galaxy S22 viewports.

List five test cases that must run on both mobile and desktop platforms to validate responsiveness.


4.9 Security and Compliance Within Automation

Highlight five security considerations to check before committing AI-generated automation code to a client repository.

Explain how to mask test data or use secure secrets storage within automated pipelines.


4.10 Key Takeaways

  • Use AI to accelerate automation, not to bypass engineering rigour.
  • Always review generated code for accuracy, reusability and security.
  • Leverage AI for refactoring, data-driven design and CI/CD optimisation.
  • Keep human QA oversight central to Inspired Testing’s automation ethos.

Module 5 – Reporting, Defect Management & Communication

Purpose

This module shows how testers can use AI to improve communication, produce concise reports and enhance defect documentation.

At Inspired Testing, clarity and consistency in QA communication are essential — AI should help structure and refine, but never distort, the facts.


5.1 Generating Test Summaries

Prompt Examples

You are a QA consultant preparing a daily execution summary.

Summarise these test results:

  • 125 tests executed
  • 9 failed
  • 4 blocked due to environment issues

Output: concise summary + top 3 risks.

Generate a test summary email for a project manager, highlighting:

  • Tests executed and pending
  • Major blockers
  • Next steps for tomorrow

Tone: concise, professional and non-technical.
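Before prompting, it is worth computing the headline figures yourself so the AI only has to word them, not calculate them. A small sketch (the wording is illustrative):

```python
def summarise(executed: int, failed: int, blocked: int) -> str:
    """Compute headline figures for a daily execution summary."""
    passed = executed - failed - blocked
    pass_rate = 100 * passed / executed
    return (f"{executed} tests executed: {passed} passed "
            f"({pass_rate:.1f}%), {failed} failed, {blocked} blocked.")

# Figures from the example above: 125 executed, 9 failed, 4 blocked
print(summarise(executed=125, failed=9, blocked=4))
```

Feeding verified numbers into the prompt removes one common source of AI arithmetic errors.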


5.2 Executive Reporting

AI can adapt technical details into business-friendly summaries.

Prompt Examples

Summarise the following QA report for an executive audience.

Focus on risk, release readiness and trends rather than technical detail.

You are a QA lead preparing a weekly dashboard narrative.

Summarise pass/fail trends and defect closure rate in five bullet points.

Use AI to translate QA data into actionable, business-level insight.


5.3 Defect Documentation

Prompt Examples

You are a QA tester writing a defect report.

Draft a clear title, description and reproduction steps for:

“App crashes when saving profile photo larger than 5 MB.”

Include expected vs actual result and environment info.

Rephrase this defect summary in client-friendly language:

'NullPointerException at UserServiceImpl.java line 142'

Provide reproduction steps and add a short business impact summary for escalation.


5.4 Root-Cause & Trend Analysis

Prompt Examples

Analyse this defect list and group by root cause: Requirements, Code, Data, Environment, Other.

Summarise which category contributes most to total defects.

Given 10 recurring issues from the past 3 sprints, list 3 preventive measures for each.


5.5 Release Readiness Assessment

Prompt Examples

You are preparing a QA sign-off summary.

Generate 5 bullet points covering:

  • Test completion %
  • Outstanding defects
  • Known risks
  • Mitigation actions
  • Go/no-go recommendation

Summarise release readiness for executives:

Provide 1-paragraph summary + traffic-light risk rating (Green/Amber/Red).


5.6 Communication with Stakeholders

Prompt Examples

Rewrite this QA update for a non-technical client stakeholder.

Keep tone neutral, focus on business impact and mitigation.

Draft a short Teams message updating the team on test progress and blockers.

Generate 3 ways to phrase sensitive feedback about recurring environment instability.


5.7 Visual Reporting Assistance

Prompt Examples

Convert this test execution table into a written summary.

Highlight top 3 failure trends.

Suggest 5 chart types that best visualise test progress and defect ageing for management.


5.8 Retrospective and Lessons Learned

Prompt Examples

Create a short “lessons learned” summary from the following QA project notes.

Group by Process, Tools, Communication and Quality.

Generate a retrospective summary slide text for QA contribution to sprint review.


5.9 Key Takeaways

  • Use AI to enhance QA communication, not to embellish results.
  • Keep tone neutral, factual and business-oriented.
  • Always validate AI-generated summaries against actual metrics.
  • Clear communication builds trust with clients and stakeholders.

Module 6 – Advanced Testing Domains (Performance, Security, Accessibility)

Purpose

This module explores how testers can use AI to prepare for specialised testing areas — Performance, Security and Accessibility.

At Inspired Testing, these disciplines require domain expertise, tools and analytical depth. AI enhances preparation and documentation, helping testers think broadly, identify risk and create structured test ideas.


6.1 Performance Testing

Prompt Examples

Generate 10 performance-testing scenarios for an e-commerce checkout.

Group by Load, Stress, Spike and Endurance testing.

You are a QA performance specialist preparing a JMeter test plan.

List key metrics (Throughput, Response Time, Error %, CPU Utilisation).

Provide expected thresholds for a medium-traffic retail site.

Suggest data sets to simulate 100 concurrent users performing mixed operations (browse, add to cart, checkout).

Expected Output

A table grouping performance test ideas by type, including metrics and expected outcomes:

| Test Type | Scenario | Metric | Target |
|---|---|---|---|
| Load | 100 concurrent users | Response time | < 2 s |
| Stress | Gradual ramp-up to 1 000 users | Error rate | < 1 % |
| Spike | Sudden burst of 500 users | System recovery time | < 10 s |
| Endurance | 12-hour steady load | Memory usage growth | < 5 % |

6.2 Analysing Performance Results

Prompt Examples

Given response time metrics, summarise bottlenecks and likely causes.

Group by Server, Database, or Front-End.

Suggest 3 optimisation recommendations for endpoints exceeding SLA.

Expected Output

A structured summary categorising bottlenecks by layer and prioritising optimisations, e.g., “API payload compression” or “indexing of slow queries.”

AI helps testers move from raw data to actionable recommendations.


6.3 Security Testing

List common vulnerabilities to test in a login API according to OWASP Top 10.

Include test ideas for each vulnerability.

You are testing a financial application.

Generate 10 test ideas for verifying secure data handling, focusing on:

  • Authentication
  • Authorisation
  • Session management
  • Data encryption

Write a checklist for API security testing covering headers, tokens and payload validation.

Expected Output

| Vulnerability | Test Idea | Expected Outcome |
|---|---|---|
| Injection | Try ' OR 1=1-- in login field | Rejected safely |
| XSS | Submit <script>alert(1)</script> | Escaped output |
| Broken Authentication | Replay expired token | Request denied |
| Sensitive Data Exposure | Inspect network requests | Data encrypted |

6.4 Security Risk Review

Prompt Examples

Analyse this system architecture description and list possible security weaknesses.

Output as table: Component | Potential Vulnerability | Recommended Check.

Review this code snippet for potential injection vulnerabilities.

Provide reasoning and suggested fix.

Expected Output

| Component | Potential Vulnerability | Recommended Check |
|---|---|---|
| Login API | Weak password policy | Enforce complexity and expiry |
| Database | SQL injection | Use parameterised queries |
| Session Token | Predictable pattern | Use UUIDs + short expiry |
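The “use parameterised queries” recommendation can be demonstrated with the standard-library sqlite3 module. The table and data here are illustrative:

```python
import sqlite3

# Demonstrates why parameterised queries matter for the Database row above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR 1=1--"

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe_sql = ("SELECT * FROM users WHERE name = '" + malicious
              + "' AND password = 'x'")
assert len(conn.execute(unsafe_sql).fetchall()) == 1  # injection succeeded

# Safe: placeholders treat the payload as data, not SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    (malicious, "x"),
).fetchall()
assert safe_rows == []  # injection rejected
```

The same placeholder pattern applies in JDBC, psycopg and most other database drivers.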

6.5 Accessibility Testing

Prompt Examples

Generate a checklist based on WCAG 2.2 Level AA for web accessibility.

Include items for colour contrast, keyboard navigation and ARIA roles.

Suggest exploratory test ideas for accessibility validation using NVDA or VoiceOver.

You are reviewing a web form for compliance.

List 10 accessibility issues a tester should check manually.

Expected Output

| Category | Example Check | Tool / Method |
|---|---|---|
| Colour Contrast | Text contrast ratio ≥ 4.5:1 | Chrome DevTools / Axe |
| Keyboard Navigation | Tab order logical | Manual test |
| Screen Readers | Labels announced properly | NVDA / VoiceOver |
| Forms | Error messages accessible | Inspect DOM attributes |
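The 4.5:1 contrast check in the table can be computed directly from the WCAG 2.x relative-luminance formula, which is useful for spot-checking AI-suggested colour pairs:

```python
# WCAG 2.x relative-luminance and contrast-ratio formulas.
def _channel(c: int) -> float:
    """Linearise one 0-255 sRGB channel."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2) -> float:
    """Contrast ratio between two sRGB colours, from 1:1 up to 21:1."""
    def luminance(rgb):
        r, g, b = (_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    l1, l2 = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# WCAG 2.2 AA requires >= 4.5:1 for normal-size text.
assert contrast_ratio((0, 0, 0), (255, 255, 255)) >= 4.5
```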

6.6 Usability and Localisation

Prompt Examples

List usability heuristics (Nielsen’s 10) and generate 2 test ideas for each.

Generate test ideas for localisation testing of a multilingual app covering language fallback, date/time formats and encoding.

Expected Output

AI provides a matrix mapping usability heuristics to examples, such as “Visibility of system status → Display progress indicators,” and localisation cases like “Currency symbols adapt by region.”


6.7 Key Takeaways

  • Use AI to plan and document specialised testing approaches.
  • AI assists in checklists and thought organisation — it does not replace certified tools or security expertise.
  • Maintain compliance with OWASP, WCAG and client security policies.
  • For every AI-generated test idea, ensure a qualified tester validates its accuracy and feasibility.

Module 7 – Ethical Use, Confidentiality & ISO-Aligned AI Practice

Purpose

This module explains how testers at Inspired Testing must handle AI responsibly.

AI can enhance efficiency, but its misuse can expose sensitive data or misrepresent outputs.

By following ethical principles and aligning with ISO 27001 controls, testers ensure client trust and compliance.


7.1 Data Confidentiality

  • Never paste client code, credentials, or production data into public AI tools.
  • Use synthetic or masked data when demonstrating functionality.
  • Verify whether the AI model stores prompts or responses; avoid tools that retain them.
  • Prefer enterprise or private deployments for sensitive projects.
  • Retain auditability: record prompt history in secured repositories when appropriate.

7.2 Ethical Prompting

Prompt Examples

List ethical considerations testers must follow when using AI for client work.

Group by: Data Privacy, Intellectual Property, Transparency.

Write a short policy statement describing acceptable use of AI in QA documentation.

Expected Output

A concise table mapping categories to practices:

| Category | Example Guideline |
|---|---|
| Data Privacy | No real user data or credentials in prompts |
| Intellectual Property | Respect client IP and licensing when referencing code |
| Transparency | Declare when AI contributed to deliverables |

7.3 Bias and Misuse Prevention

Prompt Examples

Explain how AI output bias could affect defect prioritisation or risk assessment.

Suggest safeguards.

You are a QA lead defining review steps for AI-generated content.

Create a checklist to verify accuracy, neutrality and confidentiality compliance.

Checklist Example

| Review Aspect | Check / Action |
|---|---|
| Accuracy | Cross-verify facts against requirements |
| Neutrality | Remove subjective wording or bias |
| Confidentiality | Ensure data masking and anonymisation |

7.4 ISO 27001 Alignment

| ISO Domain | QA Context | Best Practice |
|---|---|---|
| Access Control | Limiting AI tool usage by role | MFA, logging, authorisation |
| Asset Management | Tracking prompt/response artefacts | Store securely in SharePoint |
| Data Protection | Handling of test data | Always anonymise or mask |
| Supplier Management | Using third-party AI vendors | Validate SLAs, data-handling terms |

Think of every prompt as data leaving your laptop — treat it with the same care as production information.


7.5 Communication and Transparency

  • Always disclose when AI was used to draft, summarise, or analyse content.
  • Maintain a human approval step before any AI-generated material reaches a client.
  • Encourage open discussion of AI usage within teams — transparency drives accountability.

7.6 Continuous Ethics Review

Draft a quarterly checklist for reviewing AI usage across projects.

Include categories: Compliance, Data Handling, Training, Client Disclosure.

Create a one-paragraph ethics statement suitable for inclusion in a project QA plan.


7.7 Key Takeaways

  • Protect client and company data above all else.
  • Verify, review and anonymise before sharing.
  • Align every AI action with ISO 27001 controls.
  • Disclose AI assistance in deliverables.
  • Uphold Inspired Testing’s reputation for trust, integrity and quality.

Module 8 – Prompt Library & Cheat Sheet (Quick Reference)

Purpose

This final module provides a quick-access prompt library for testers using general AI tools in their daily QA work.

It includes ready-to-use examples grouped by activity type — requirements, design, automation, reporting and compliance — for consistent, professional prompting across Inspired Testing teams.


8.1 Requirements Analysis

Objective | Example Prompt
Clarify ambiguity | “Identify unclear or incomplete parts of this user story and list five clarifying questions.”
Derive acceptance criteria | “Convert this story into Given/When/Then acceptance tests.”
Identify risks | “List top five functional and integration risks for this feature.”
Analyse dependencies | “Identify external systems or APIs that could affect test planning.”

8.2 Test Design

Objective | Example Prompt
Generate boundary tests | “List boundary and equivalence-partition cases for this input field.”
Create exploratory ideas | “Suggest ten exploratory test ideas for this module grouped by heuristic.”
Negative test coverage | “Create five negative test cases focusing on input validation and data format.”
Prioritisation | “Group test cases by risk and business impact, marking each as High, Medium, or Low.”

8.3 Test Data Generation

Objective | Example Prompt
Generate synthetic data | “Create 20 rows of anonymised test data for an insurance policy form.”
Create JSON payloads | “Produce sample API requests for CreateUser endpoint including valid and invalid inputs.”
CSV for automation | “Generate CSV-ready data for login endpoint (username, password, expected result).”
Performance testing data | “Simulate 100 concurrent user sessions with varied input data for load testing.”
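Synthetic data like the CSV example above can also be generated locally, which keeps anything sensitive off third-party tools entirely. A minimal sketch with entirely fabricated values; the field names and validity mix are assumptions:

```python
import csv
import io
import random

def login_test_rows(n: int, seed: int = 7) -> str:
    """Generate CSV-ready synthetic login data (username, password, expected_result).

    All values are fabricated; nothing here resembles production data.
    """
    rng = random.Random(seed)           # fixed seed for repeatable data sets
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["username", "password", "expected_result"])
    for i in range(n):
        valid = rng.random() > 0.3      # mix of positive and negative cases
        user = f"test_user_{i:03d}"
        pwd = f"Valid#Pass{i}" if valid else ""   # empty password = negative case
        writer.writerow([user, pwd, "success" if valid else "failure"])
    return buf.getvalue()

print(login_test_rows(5))
```

Whether you generate data yourself or ask an AI to, the same rule applies: nothing in the set may be traceable to a real user.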

8.4 Automation

Objective | Example Prompt
Script generation | “Generate a Playwright test for login that asserts a successful redirect to dashboard.”
Refactoring | “Optimise this Selenium test for readability and maintainability using the Page Object Model.”
CI/CD assistance | “Write a GitHub Actions YAML snippet to run Cypress tests and publish reports.”
Code review | “Review this Cypress script and suggest improvements in waits, assertions and reusability.”
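The Page Object Model mentioned in the refactoring prompt can be sketched framework-agnostically: the page class owns selectors and actions, and the test expresses only intent. The selectors and the `fill`/`click` driver interface below are illustrative stand-ins, not a real Selenium or Playwright API:

```python
class LoginPage:
    """Page object: selectors and actions live here, not in the test."""
    USERNAME = "#username"             # illustrative selectors
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver           # any object exposing fill/click

    def login(self, user: str, password: str) -> None:
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

class FakeDriver:
    """Stand-in driver that records calls, so the sketch runs without a browser."""
    def __init__(self):
        self.calls = []
    def fill(self, selector, value):
        self.calls.append(("fill", selector, value))
    def click(self, selector):
        self.calls.append(("click", selector))

driver = FakeDriver()
LoginPage(driver).login("qa_user", "secret")
print(driver.calls[-1])   # ('click', 'button[type=submit]')
```

When you ask an AI to refactor a script toward this pattern, check that selectors really did move out of the test body and into the page class.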

8.5 Reporting and Communication

Objective | Example Prompt
Daily summary | “Summarise today’s test execution (passed, failed, blocked) in five bullet points.”
Executive summary | “Summarise this QA report for executives focusing on risk, readiness and quality trends.”
Defect clarity | “Rewrite this defect for a client stakeholder, adding business impact and reproduction steps.”
Lessons learned | “Draft a QA retrospective grouped by Process, Tools and Communication.”

8.6 Performance, Security, Accessibility

Area | Example Prompt
Performance | “Generate load and stress test scenarios for checkout workflow with metrics and thresholds.”
Security | “List API test ideas for OWASP Top 10 vulnerabilities.”
Accessibility | “Write a checklist based on WCAG 2.2 AA covering colour contrast, keyboard use and ARIA labels.”

8.7 Ethics and ISO Practice

Area | Example Prompt
Confidentiality | “List five rules for handling client data securely when using AI tools.”
Review process | “Create a QA checklist for verifying accuracy and neutrality of AI-generated content.”
Transparency | “Write a statement acknowledging responsible AI assistance in deliverables.”
Audit readiness | “Generate a brief ISO 27001 compliance summary for AI-assisted QA workflows.”

8.8 AI Enhancement & Self-Learning

Area | Example Prompt
Continuous learning | “Suggest five ways AI can help testers upskill in exploratory and automation testing.”
Personal improvement | “Generate a weekly self-assessment checklist for measuring AI-assisted QA productivity.”
Knowledge sharing | “Write three LinkedIn-style insights about how AI improves QA efficiency.”

8.9 Key Takeaways

  • This prompt library is meant to accelerate daily testing work.
  • Use it as a starting point — adapt each prompt to your specific project or domain.
  • Always review AI output for technical and contextual accuracy.
  • Maintain Inspired Testing’s high bar for professionalism, clarity and ethical standards.

Module 9 – Prompt Expansion Appendix

Deepening Context, Iteration and Precision


Purpose

This appendix provides advanced prompt-building patterns for testers who want to go beyond the quick examples in the Prompt Library.

It demonstrates how to create layered, context-rich prompts that follow Inspired Testing’s standards of completeness, traceability and professional communication.


9.1 The Structured Prompt Framework

Use this five-part template to build reliable, repeatable prompts.

Element | Description | Example
Role | Define the AI’s testing perspective | “You are a senior QA consultant specialising in web accessibility.”
Context | Describe the system, feature, or data | “Testing the shopping-cart API for concurrency and pricing accuracy.”
Task | Specify the exact activity | “Generate 12 test cases including boundary and negative scenarios.”
Format | Tell the AI how to structure results | “Output as Markdown table: ID | Scenario | Input | Expected Result | Notes.”
Constraints | Impose limits or quality targets | “Keep descriptions under 20 words; highlight security-critical cases.”

Full Example

ROLE: QA consultant for financial-loan portal

CONTEXT: Validating interest-rate calculator and error messages

TASK: Generate 15 functional and boundary test cases

FORMAT: Markdown table with 5 columns (ID | Scenario | Input | Expected | Risk)

CONSTRAINTS: One scenario per row; prioritise regulatory compliance conditions
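Teams that reuse this template often can wrap it in a tiny helper so every prompt comes out in the same five-part shape. A minimal sketch assembling the full example above; the function name is illustrative:

```python
def build_prompt(role: str, context: str, task: str,
                 fmt: str, constraints: str) -> str:
    """Assemble the five-part structured prompt template into one string."""
    return "\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"TASK: {task}",
        f"FORMAT: {fmt}",
        f"CONSTRAINTS: {constraints}",
    ])

prompt = build_prompt(
    role="QA consultant for financial-loan portal",
    context="Validating interest-rate calculator and error messages",
    task="Generate 15 functional and boundary test cases",
    fmt="Markdown table with 5 columns (ID | Scenario | Input | Expected | Risk)",
    constraints="One scenario per row; prioritise regulatory compliance conditions",
)
print(prompt)
```

A shared helper like this is less about saving keystrokes and more about making sure no prompt leaves out context or constraints.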


9.2 Layered Prompting (Progressive Refinement)

Iterate rather than expecting a perfect result in one go; progressive refinement is the method.

Step | Goal | Example Prompt
1 – Draft | Generate the baseline | “List functional and negative test cases for password reset.”
2 – Refine | Add missing coverage | “Expand to include boundary and error-handling scenarios.”
3 – Structure | Standardise the format | “Convert the list to a table with columns ID | Scenario | Input | Expected Result.”
4 – Validate | Ask AI to self-check | “Review the table for duplication and missing edge cases.”
5 – Summarise | Produce stakeholder view | “Summarise key risks and next steps in plain language.”
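The five-step refinement loop above can be sketched as a single conversation thread. The `ask` stub below stands in for a real chat call (replies are supplied locally so the sketch runs offline); the point it illustrates is that all five turns share one growing history:

```python
def ask(history: list, message: str, reply: str) -> str:
    """Stub for one chat turn: append the user message and the model reply.

    In a real tool the reply would come from the AI; here it is supplied
    so the sketch runs without any API. The shared history is the key idea.
    """
    history.append({"role": "user", "content": message})
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
steps = [
    "List functional and negative test cases for password reset.",
    "Expand to include boundary and error-handling scenarios.",
    "Convert the list to a table with columns ID | Scenario | Input | Expected Result.",
    "Review the table for duplication and missing edge cases.",
    "Summarise key risks and next steps in plain language.",
]
for step in steps:
    ask(history, step, reply=f"(model output for: {step})")

print(len(history))  # 10 messages: five refinement turns in one thread
```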

Tip: Use the same chat thread to preserve context and let the AI build on its previous output logically.


9.3 Context Expansion Prompts

AI quality scales with the amount of context you provide.

Use these starter prompts to add richness:

Before generating tests, ask me any clarifying questions about:

  • System purpose
  • Input ranges
  • Business rules
  • Data dependencies

Here are the system details: [paste summary]

Using that, list assumptions you need clarified before generating test cases.

Create a list of environmental or integration factors that could affect test design.


9.4 Output-Control Techniques

Tell the AI exactly what format you expect.

Tabular Outputs

Provide results as a Markdown table with columns:

ID | Scenario | Input | Expected Result | Type (Positive/Negative)

Narrative Outputs

Write a 3-paragraph executive summary of the above results:

  • Paragraph 1: Scope and objective
  • Paragraph 2: Key findings
  • Paragraph 3: Risks and recommendations

JSON / Code Outputs

Return the test cases as a JSON array with keys:

id, title, input, expected, priority
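When you request JSON output, it is worth validating the structure before feeding it into downstream tooling, since models sometimes drop or rename keys. A minimal sketch, assuming the five keys requested above:

```python
import json

REQUIRED_KEYS = {"id", "title", "input", "expected", "priority"}

def validate_cases(raw: str) -> list:
    """Parse AI-returned JSON and reject test cases missing required keys."""
    cases = json.loads(raw)
    for i, case in enumerate(cases):
        missing = REQUIRED_KEYS - case.keys()
        if missing:
            raise ValueError(f"Case {i} missing keys: {sorted(missing)}")
    return cases

# Illustrative sample of what a well-formed response should look like
sample = ('[{"id": "TC-01", "title": "Empty username", "input": "", '
          '"expected": "Validation error", "priority": "High"}]')
cases = validate_cases(sample)
print(len(cases))  # 1
```

A check like this turns a silent formatting drift into an immediate, explainable failure.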


9.5 Meta-Prompts (Prompts About Prompts)

Use meta-prompts to improve the quality of your own prompting.

Review my prompt and rate its clarity on a scale of 1–10.

Suggest two improvements for context and one for format.

Rewrite this prompt using the Inspired Testing structured format (Role, Context, Task, Format, Constraints).

Evaluate whether this prompt risks exposing client data or proprietary information.

If yes, rewrite safely using anonymised placeholders.


9.6 Combining Prompts Across Modules

Example compound workflow:

  1. Start with Module 2 (Requirements) → Generate acceptance criteria.
  2. Feed output into Module 3 (Test Design) → Expand into functional and negative cases.
  3. Use Module 4 → Generate automation code skeleton.
  4. Apply Module 5 → Create defect or summary narrative.

Prompt Example

Use the acceptance criteria below to generate functional and negative test cases.

Then convert them into Playwright scripts with comments.

Finally, produce a 5-bullet QA summary for management.

Chaining outputs this way keeps each step reviewable while moving efficiently from requirements through code to reporting.


9.7 Prompt Quality Checklist

Category | Self-Check Question
Context | Have I explained what system or feature is under test?
Intent | Does the AI know what outcome I want (test cases, script, summary)?
Structure | Did I specify how I want results formatted?
Constraints | Did I limit quantity, tone, or detail appropriately?
Ethics | Did I remove any client or personal data before prompting?
Validation | Did I ask the AI to review or summarise its own output?
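Parts of the checklist, notably the Ethics row, can be partially automated with a quick pre-flight scan before a prompt is sent. The patterns below are illustrative starting points, not a complete data-leak detector:

```python
import re

# Illustrative red-flag patterns; extend per your project's data rules
RED_FLAGS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "long number (account/card?)": r"\b\d{8,}\b",
    "password literal": r"(?i)password\s*[:=]\s*\S+",
}

def prompt_warnings(prompt: str) -> list:
    """Return the red-flag categories a prompt appears to trigger."""
    return [name for name, pattern in RED_FLAGS.items()
            if re.search(pattern, prompt)]

print(prompt_warnings("Test with user bob@corp.com, password: hunter2"))
# ['email address', 'password literal']
```

A scan like this catches only the obvious cases; the human review in the checklist remains mandatory.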

9.8 Example: Complete End-to-End Prompt

ROLE: QA automation lead for an airline booking API

CONTEXT: Testing seat-selection and payment endpoints (REST, JSON)

TASK: Generate 20 functional, boundary and error test cases

FORMAT: Markdown table with ID, Endpoint, Input, Expected Result, Type

CONSTRAINTS: Include at least 3 authentication errors and 2 performance-related tests

Follow-Up Refinement

Now rewrite those tests as Gherkin scenarios with clear Given/When/Then syntax.

Then create a 5-point summary of high-risk areas.


9.9 Key Takeaways

  • Structure first, detail second. A good prompt starts with context and ends with formatting.
  • Iterate logically. Each refinement adds precision.
  • Review ethically. Every prompt must respect data confidentiality.
  • Adopt as habit. Consistent prompting discipline equals consistent quality.

Inspired Testing Standard: Treat AI like a junior tester — guide it clearly, review its work and help it improve.

Leon Lodewyks

Chief Technology Officer, Inspired Testing

Leon started as a Test Consultant at Inspired Testing and has been serving in an Executive role since 2019, including that of Chief Delivery Officer and now Chief Technology Officer.

Leon spent 10 years as a Director at a software testing firm in London, UK, before returning to South Africa. He has a keen eye for detail, with quality always front of mind, and has filled various test management and test lead roles over his decades-long career, particularly within the Retail Technology industry.

Leon is a certified Scrum Master and holds an Advanced Business certification from the Stellenbosch Business School, as well as an Advanced Level ISTQB certification.

