The 5 Gaps That Could Fail Audits

1. No Vendor Contract Protection

What we see: Teams using Azure OpenAI or ChatGPT Enterprise. When I ask:
"Show me the contract clause that says they don't train on your data" → Blank stares.
What inspectors look for: 8 mandatory clauses in every AI vendor contract:
- Zero training (vendor can't use your data)
- Data residency (EU/US, not "global cloud")
- Model lock (30-day notice before updates)
- Audit trail (who asked what, when)
- Right to audit (you can review their SOC 2 / ISO 27001)
- Change notification
- Business continuity (disaster recovery plan)
- Liability (who pays if AI hallucinates?)
Real Audit Scenario:
Sponsor's contract said: "Microsoft is not liable for AI output errors."
Inspector: "So if AI creates a drug interaction error in your IB, who's liable?"
Sponsor: "We are?"
Inspector: "Did you document this risk?"
Sponsor: "No..."
Outcome: Observation raised → corrective action required (update risk assessment).

2. No AI System Inventory

What we see: QA doesn't know what AI tools are in use. During one audit, I found 8 unauthorized tools (ChatGPT, Claude, Gemini, Grammarly AI, Otter.ai...).
What inspectors want to see: A Master AI System Inventory (like your CSV inventory, but for AI):
- System name & version
- Vendor / model
- Intended use (e.g., "TMF classification" not "general productivity")
- Risk level (Low/Medium/High)
- Validation status
- Next review date
- Contingency plan (what if the vendor shuts down?)
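The fields above can be sketched as a simple record type. This is an illustrative structure only (the field names and example values are assumptions, not a standard), but it shows how an inventory entry becomes something QA can query, e.g. to flag overdue reviews:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of one Master AI System Inventory record;
# field names mirror the bullet list above but are illustrative.
@dataclass
class AISystemRecord:
    name: str
    version: str
    vendor: str
    model: str
    intended_use: str        # specific, e.g. "TMF classification", not "general productivity"
    risk_level: str          # "Low" | "Medium" | "High"
    validation_status: str   # e.g. "Validated", "In progress", "Not validated"
    next_review: date
    contingency_plan: str    # what happens if the vendor shuts down

    def is_overdue(self, today: date) -> bool:
        """Flag records whose periodic review date has passed."""
        return today > self.next_review

# Example entry (all values hypothetical). Unauthorized tools found during
# an audit would simply be absent from this inventory - the absence itself
# is the finding.
record = AISystemRecord(
    name="TMF Classifier", version="2.1", vendor="ExampleVendor",
    model="example-llm-v4", intended_use="TMF classification",
    risk_level="High", validation_status="Validated",
    next_review=date(2026, 6, 1),
    contingency_plan="Fall back to manual classification per SOP",
)
print(record.is_overdue(date(2026, 7, 1)))  # True: review is overdue
```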
Real Audit Scenario:
Inspector: "Is this tool on your inventory?"
QA Manager: "We don't have an AI inventory."
Inspector: "Then how do you know what's validated?"
Risk: This could result in a major observation for lack of control over computerised systems (ICH E6(R3), Section 4).
3. No Validation

What we see: "We tested it on 3 examples and it looked good."
What inspectors want: For Medium/High-risk AI (e.g., TMF classification, CAPA drafting, AE coding):
- Gold Standard test set (100 documents/questions with known correct answers)
- Acceptance criteria (e.g., ≥95% accuracy)
- IQ/OQ/PQ documentation
- Traceability Matrix (requirements → tests → results)
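The Gold Standard check above reduces to a simple calculation: score the AI's answers against the known correct ones and compare the result to the predefined acceptance criterion. A minimal sketch (the zone labels and the five-item test set are invented for illustration; a real PQ run would use ~100 of your own documents):

```python
# Acceptance criterion from the validation protocol (assumed: >= 95% accuracy).
ACCEPTANCE_THRESHOLD = 0.95

def accuracy(predictions, gold_standard):
    """Fraction of AI outputs that match the known correct answer."""
    assert len(predictions) == len(gold_standard)
    correct = sum(p == g for p, g in zip(predictions, gold_standard))
    return correct / len(gold_standard)

# Hypothetical PQ run: AI zone assignments vs. expert-adjudicated answers.
gold = ["2.1", "1.2", "2.3", "2.1", "5.3"]
ai   = ["2.1", "1.2", "2.1", "2.1", "5.3"]  # one disagreement

score = accuracy(ai, gold)
print(f"Accuracy: {score:.0%} - {'PASS' if score >= ACCEPTANCE_THRESHOLD else 'FAIL'}")
# Accuracy: 80% - FAIL
```

The point is that "the vendor said it's pre-validated" cannot produce this number: only a test against your documents can.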
Real Audit Scenario:
Inspector: "Show me your validation for this TMF classifier."
TMF Manager: "The vendor said it's pre-validated."
Inspector: "Vendor validation ≠ your validation. Did you test it with YOUR documents?"
TMF Manager: "No..."
Risk: This could result in a critical observation: use of an unvalidated system for GCP records.
4. No Human Oversight

What we see: Teams "trusting" AI outputs without review.
During one audit, we asked the TMF Specialist:
"Show me a case where you disagreed with the AI."
Response: "We've never overridden it. It's 95% accurate."
Red flag: Rubber-stamping AI = no human-in-the-loop (HITL).
What inspectors want to see: Evidence of human oversight:
- Override log: "AI suggested Zone 2.1, I corrected to Zone 2.3 because [reason]."
- QC sampling: Weekly spot-check of 10% of AI outputs
- Training records: Users trained on "AI limitations & when to escalate"
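The weekly 10% spot-check above can be sketched as a reproducible random draw. Using a seeded RNG means the selection itself is documentable for the audit trail (the document IDs, the 10% rate, and the seed are assumptions for illustration):

```python
import random

def qc_sample(doc_ids, rate=0.10, seed=None):
    """Return a reproducible random sample of AI outputs for human review."""
    n = max(1, round(len(doc_ids) * rate))  # always review at least one
    rng = random.Random(seed)               # seeded -> same draw can be re-derived
    return sorted(rng.sample(doc_ids, n))

# Hypothetical week: 200 AI-classified documents, 10% go to a human reviewer.
week_outputs = [f"DOC-{i:04d}" for i in range(1, 201)]
to_review = qc_sample(week_outputs, rate=0.10, seed=42)
print(len(to_review))  # 20
```

Each reviewed document then either confirms the AI or generates an override-log entry, which is exactly the evidence the inspector asks for next.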
Real Audit Scenario:
Inspector: "If you've never overridden the AI, how do I know you're actually reviewing it?"
Risk: This pattern could lead to a major observation for lack of documented human oversight.
5. No Governance

What we see: When AI conflicts with SME judgment, no one knows who decides.
Real scenario: AI suggests classifying a document as Zone 2.1 (Protocol).
TMF Specialist thinks it's Zone 1.2 (Protocol Amendment).
Who's right? Who approves?
What inspectors want: An AI Governance Committee (AIGC) with clear decision authority:
- Chair: Head of QA (veto power over GCP violations)
- Members: Legal, IT Security, Business SME
- Decision Matrix:
- Low-risk use case → Process Owner approves
- Medium/High-risk → QA + Process Owner (joint)
- Vendor refuses contract terms → Legal + QA escalate
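The decision matrix above is small enough to express as a routing function, which makes the approval path unambiguous. The risk categories and approver roles come from the bullets; the exact strings are illustrative:

```python
def approvers(risk_level: str, vendor_accepts_terms: bool = True) -> list[str]:
    """Who must approve an AI use case before go-live, per the AIGC decision matrix."""
    if not vendor_accepts_terms:
        return ["Legal", "QA"]            # escalation path for refused contract terms
    if risk_level == "Low":
        return ["Process Owner"]
    if risk_level in ("Medium", "High"):
        return ["QA", "Process Owner"]    # joint approval
    raise ValueError(f"Unknown risk level: {risk_level}")

print(approvers("High"))                                 # ['QA', 'Process Owner']
print(approvers("Low", vendor_accepts_terms=False))      # ['Legal', 'QA']
```

Writing it down, in any form, is the point: "IT bought it" is not an approval path.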
Real Audit Scenario:
Inspector: "Who approved this AI for GCP use?"
Team: "IT bought it."
Inspector: "Did QA review it?"
Team: "No, we didn't know about it..."
Risk: This could result in a major observation for lack of sponsor oversight (ICH E6(R3), Section 3.9).
The Regulatory Reality (2025-2026)

This is not theoretical anymore. The regulations are here:
- ICH E6(R3) (Final, January 2025): Section 4 on computerised systems
- FDA/EMA "10 Guiding Principles" for AI in Drug Development (January 2026)
- EU AI Act (enforced 2026): Clinical trials = HIGH RISK category
- GAMP 5 (2nd Edition) + GAMP AI Guide (ISPE, 2024)

Key message: If you are using AI without documented governance, you're one inspection away from a major observation.
The Solution: A Practical Framework

We have spent the last few months building a QA Usage Guide that addresses all 5 gaps.
It is based on:
- 10 FDA/EMA Guiding Principles (Human-in-the-Loop, Data Governance, Context of Use, Audit Trail...)
- GAMP 5 validation approach (Category 4 vs 5, IQ/OQ/PQ)
- Real audit scenarios (what inspectors actually ask)
What's inside:

Section 1: The 10 Guiding Principles (FDA/EMA, January 2026)
Section 2: Risk Matrix (Low/Medium/High) — Do you need validation?
Section 3: The "Walled Garden" Rule (what data you CAN'T put in AI)
Section 4: 8 Mandatory Vendor Contract Clauses (+ how to verify)
Section 5: Vendor Qualification Questionnaire (7 questions, pass/fail criteria)
Section 6: GAMP 5 Validation (Category 4 vs 5, IQ/OQ/PQ templates)
Section 7: Inspection Readiness Checklist (7 questions auditors will ask)
Section 8: Change Control for AI (what to do when vendor updates model)
Section 9: Performance Monitoring (KPIs with alert thresholds)
Section 10: Use Case Controls (Clinical Writing, Data Management, Translation, PV)
Appendix A: AI Impact Assessment (AIIA) Template (1-page form, ready to use)
Appendix B: FAQ (8 common questions)
Total: 15 pages. Time to read: 20 minutes.

What You Get (Free)

- 15-page Guide (PDF): Inspection-ready framework
- AIIA Template (included): 1-page form, fill-in-the-blank
- Vendor Questionnaire (included): 7 questions + scoring
- AI System Inventory Template (included): 12 fields
No paywall.

The Bottom Line

AI is not going away. By 2027, 80%+ of clinical trials will use AI (EMA estimate).
The question is not "Should we use AI?"
It is "How do we use AI responsibly?"
You have 3 options:
- Option 1: Keep using consumer AI (ChatGPT/Claude) → Hope you don't get inspected
- Option 2: Ban AI completely → Fall behind competitors
- Option 3: Implement governance + validation → Use AI compliantly

Option 3 is the only sustainable path. And it's not that hard, if you have a framework.
Request "AI in GxP and Clinical Trials: A QA Usage Guide" for free.