Quality by Design in Clinical Trials under ICH E6(R3): Emphasizing Proactive Quality

Written by Maxim Bunimovich, independent GCP auditor and clinical research expert at The QARP.

Quality by Design (QbD) in clinical trials means proactively building quality into the study’s design and processes, rather than relying only on inspections or after-the-fact fixes. It’s a common-sense approach where stakeholders identify critical-to-quality (CtQ) factors – the aspects of a trial essential for reliable data and participant safety – and plan the trial to protect those elements. The recently revised ICH E6(R3) Good Clinical Practice (GCP) guideline puts QbD front and center, aligning with the principles introduced in ICH E8(R1). Below, we explore what has changed in ICH E6(R3) regarding QbD (especially in relation to ICH E8(R1)), address common challenges and misconceptions in applying QbD, share real-world examples, and provide a practical checklist for implementing QbD in trial planning and execution.

ICH E6(R3) and the Evolution of QbD (Relation to ICH E8(R1))

ICH E6(R3) explicitly reinforces QbD principles that were first laid out in ICH E8(R1) “General Considerations for Clinical Studies.” ICH E8(R1) defined the QbD approach as ensuring “the quality of a study is driven proactively by designing quality into the study protocol and processes”. In practice, this means quality isn’t an afterthought – it is deliberately built into trial design from the start, focusing on what really matters for the study’s success. ICH E6(R3) takes these concepts from E8(R1) and translates them into GCP expectations for sponsors and investigators.
One of the clearest changes is the emphasis on proactive quality planning and risk management. The revised guideline “provides greater clarity on proactively designing quality into clinical trials, identifying critical-to-quality (CtQ) issues and adopting risk-proportionate approaches”. In contrast to earlier GCP editions, which some felt were “one-size-fits-all,” E6(R3) recognizes that trials vary and quality efforts should focus on the most important risks. It builds on key ideas from E8(R1) by:
  • Fostering a culture of quality: Quality isn’t just about compliance checklists; it’s about a mindset where the trial team continuously prioritizes participant safety and data integrity from the design stage.
  • Integrating quality into early planning: Quality considerations should be woven into the protocol and development plans from the outset, not bolted on later. This early integration is a cornerstone of QbD.
  • Identifying Critical-to-Quality factors: E6(R3) aligns with E8(R1) in urging teams to pinpoint what aspects of the trial are truly critical to quality. These CtQ factors might include, for example, the primary endpoint measurement, informed consent process, or key safety assessments – anything that, if compromised, would undermine the study. By explicitly using the terminology of CtQ, E6(R3) ensures a common focus on these high-impact elements.
  • Engaging stakeholders: The new guideline highlights the importance of involving all relevant parties (e.g. investigators, coordinators, patients, even regulators) in designing quality into the trial. Broad engagement helps uncover risks and practical issues early.
  • Using a proportionate, risk-based approach: E6(R3) introduces the principle of “risk proportionality.” Quality efforts (and trial oversight) should be scaled to the trial’s risks – significant risks get robust controls, while low-risk aspects aren’t overburdened. Principle 7 of E6(R3) explicitly states that risk controls should be proportionate in order to minimize unnecessary burden on participants and investigators. This is very much in line with QbD’s aim to “focus on the errors that matter most.”

In practical terms, ICH E6(R3) now incorporates structural changes and clearer guidance to operationalize QbD. The guideline is organized into 11 overarching principles (supported by Annexes) that are intended to be flexible and future-proof. Notably, Principle 6 of E6(R3) states that “quality should be embedded in the scientific and operational design and conduct of clinical trials,” calling for applying QbD to focus on CtQ factors, identify risks to those factors, and safeguard data reliability. This is a direct endorsement of the QbD approach within GCP. Additionally, Annex 1 of E6(R3) consolidates guidance on quality management and risk. For example, Annex 1 (section 3.10.1.3 on Risk Control) formally introduces the use of Quality Tolerance Limits (QTLs) – predefined acceptable ranges for critical metrics – as a tool to help control risks to CtQ factors. This acknowledges that setting quantitative thresholds for important data (like query rates, protocol deviations, etc.) can signal when quality is deviating and trigger pre-planned actions. Overall, E6(R3) takes the high-level principles from E8(R1) and “embeds the proactive risk-proportional approach advocated throughout E8(R1) into trial design,” reaffirming expectations for QbD, critical thinking, and fit-for-purpose quality efforts.
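To make the QTL concept concrete, here is a minimal sketch in Python of how a study team might encode predefined tolerance limits and check accumulating trial metrics against them. The metric names, thresholds, and pre-planned actions are illustrative assumptions for a hypothetical trial, not values drawn from the guideline.

```python
# Minimal sketch of a Quality Tolerance Limit (QTL) check.
# All metric names, thresholds, and actions below are illustrative
# assumptions for a hypothetical trial, not values from ICH E6(R3).

from dataclasses import dataclass

@dataclass
class QTL:
    metric: str          # critical metric tied to a CtQ factor
    limit_pct: float     # predefined acceptable upper bound (%)
    planned_action: str  # pre-planned response if the limit is breached

# Hypothetical QTLs agreed during trial planning
qtls = [
    QTL("premature_discontinuation_rate", 15.0, "Root-cause analysis; review retention plan"),
    QTL("primary_endpoint_missing_rate", 5.0, "Retrain sites; escalate to quality team"),
    QTL("important_protocol_deviation_rate", 8.0, "Assess impact; consider protocol clarification"),
]

# Observed values from ongoing central review (illustrative numbers)
observed = {
    "premature_discontinuation_rate": 17.2,
    "primary_endpoint_missing_rate": 2.1,
    "important_protocol_deviation_rate": 7.9,
}

for qtl in qtls:
    value = observed[qtl.metric]
    if value > qtl.limit_pct:
        print(f"QTL BREACH: {qtl.metric} = {value:.1f}% "
              f"(limit {qtl.limit_pct:.1f}%) -> {qtl.planned_action}")
    else:
        print(f"OK: {qtl.metric} = {value:.1f}% (limit {qtl.limit_pct:.1f}%)")
```

In practice, the thresholds would come out of the trial’s risk assessment and be documented in the quality management plan, often with secondary (earlier-warning) limits set below the QTL so the team can act before a formal breach.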

For QA professionals and trial managers, these changes mean that regulators now expect sponsors to implement QbD in every trial’s planning and execution, not just as an ideal concept. A “quality by design” mindset is woven into GCP: ensuring that trial processes are designed to “get it right the first time” by focusing on what’s critical and anticipating issues. The outcome should be trials that are both more efficient and more reliable, because effort is spent where it counts most, in line with the new GCP guidance.


Common Challenges and Misconceptions in Applying QbD

Implementing Quality by Design sounds straightforward, but in practice organizations often face challenges and hold misconceptions that hinder its effectiveness. Here we address some of the most common issues and myths that QA professionals and trial teams encounter when applying QbD principles:

  • Misconception 1: “Compliance equals Quality.” One prevalent myth is that if a trial is following all regulations and SOPs, then quality is assured. In truth, compliance only establishes the baseline for conducting trials ethically and safely – it does not guarantee a trial is optimally designed or free of important errors. Quality by Design goes beyond tick-box compliance. It requires a proactive approach to preventing errors that matter, not just complying with minimum standards. A trial can be fully GCP-compliant yet still collect unreliable data if, for instance, the protocol is overly complex or critical assessments are not well thought out. QbD pushes teams to ask: are we doing this in the best way to ensure quality outcomes, or just doing it the “accepted” way? Recognizing that compliance alone is not sufficient is the first step toward a true quality focus.
  • Misconception 2: “Quality is the QA department’s job (only).” Traditionally, organizations silo quality responsibilities within a dedicated quality assurance unit. But QbD teaches that quality is a shared responsibility across all functions and team members. Everyone involved – from sponsor project managers, to clinicians, statisticians, monitors, site staff, and vendors – has a role in building and maintaining quality. For example, “every stakeholder in a clinical trial is able to contribute to maintaining high standards in data integrity and patient safety,” from investigators ensuring protocol adherence, to coordinators maintaining accurate records, to data managers safeguarding data integrity. If team members think “QA will catch it later,” then the mindset is reactive. QbD flips this: each contributor should be thinking ahead about how to do their part right from the start. A supportive quality culture where operational teams and QA collaborate (rather than work in isolation) is essential but can be hard to develop. Organizations sometimes struggle to break down these silos, but doing so is critical – quality must be built in, not inspected in.
  • Misconception 3: “More monitoring and checks will ensure quality.” It’s intuitive to think that the more you double-check and scrutinize, the higher the quality. However, simply layering on extra audits or 100% source data verification is usually an inefficient way to improve quality. ICH E6(R3) and E8(R1) encourage a risk-based approach instead. Not every data point in a study carries equal importance, and not every protocol deviation has the same impact. A common QbD pitfall is trying to “monitor everything” or address every conceivable risk – which can overwhelm teams and divert attention from the truly critical issues. In fact, focusing on too many trivial checks can obscure the serious problems. As CTTI notes, concentrating on critical aspects of a trial “significantly reduce[s] the burden on sponsors by alleviating the perceived need to address every potential risk”. In practice, this means identifying a handful of high-impact risks or data points to monitor closely (for example, primary endpoint data accuracy, or informed consent process fidelity), rather than spreading resources thin by checking all data with equal intensity. More is not always better – better is better. The goal is to collect the right data reliably, not simply to collect more data.
  • Misconception 4: “Quality by Design is a one-time exercise.” Some teams conduct a risk assessment or QbD workshop during protocol development, but then shelve those outputs once the trial starts. This reflects a misunderstanding; QbD is meant to be a continuous process. Risks and “errors that matter” should be reassessed throughout the trial lifecycle, because new challenges can emerge or initial assumptions may change. As one industry article put it, “no study will ever be perfect from the outset, making continuous improvement an inevitability that must be anticipated and embraced.” Regulators such as FDA explicitly “encourage sponsors to adopt a quality-by-design approach, which involves ongoing assessment and improvement of quality throughout the trial lifecycle”. Similarly, quality risk management guidance calls for periodic risk review and adjustment as the study progresses. In practical terms, this might mean the trial’s quality management team meets at defined intervals (e.g. after each interim analysis or monitoring visit cycle) to review if any new risks have arisen or if controls are effective, and then refine the quality plan. A QbD approach remains adaptive – it’s never “too late” to improve quality during a trial. Cultivating this continuous improvement mindset can be challenging, especially in organizations used to static monitoring plans, but it is a key to QbD’s success.
  • Challenge: Identifying the right Critical-to-Quality factors. A very practical challenge is determining what the true CtQ factors are for a given trial. Teams may start with a long laundry list of potential risks and data points, but QbD requires distilling these to the factors most critical to patient safety and decision-making. This takes critical thinking and often cross-functional input. Misunderstandings can arise – for example, an overemphasis on things that are easy to measure rather than what’s truly important, or disagreements between stakeholders on priorities. One case study noted that even after an initial QbD assessment, differing opinions (e.g. between investigators and regulators) might surface on whether a particular risk should be considered “critical”. The solution is to facilitate open dialogue and use data when possible to predict what could threaten trial outcomes. It’s also important to be willing to limit the list: QbD is about focus. As an illustration, the Duke Clinical Research Institute (DCRI) team on one trial categorized study elements into A, B, C, D (“Critical,” “Important,” “Nice to have,” and “Worthless”) and allocated their effort accordingly – only a small fraction of aspects were truly A-level critical and those got the bulk of resources, whereas a majority were “nice-to-have” features that received minimal effort (a small sketch of this categorization follows this list). This kind of disciplined prioritization can be difficult at first, but it’s vital to prevent trying to do everything (and thereby doing nothing thoroughly).
  • Challenge: Organizational buy-in and understanding of QbD. Sometimes the very term “Quality by Design” can intimidate or confuse teams, who may perceive it as a buzzword or an additional bureaucratic requirement. It’s not uncommon to encounter resistance like “we already have too many meetings/paperwork.” In reality, QbD is less about formal documentation and more about critical thinking. Case examples have shown that framing QbD in plain language helps – e.g. focus on trial risks and how to handle them sensibly, rather than using jargon. Training and internal advocacy are often needed to shift the mindset. Having leadership support for a quality culture, and showcasing quick wins (e.g. how QbD prevented a costly protocol amendment or boosted recruitment) can help overcome reluctance. Importantly, QbD is not just for big pharma or expensive trials – even small studies benefit from avoiding errors that could invalidate results. As one commentary notes, being “patient-centric” and seeking input from sites and patients in protocol design is a form of QbD that any sponsor can do, and it often leads to more realistic, feasible trials. The challenge is ensuring all team members clearly see that quality by design is an investment, not an overhead – it pays off by reducing downstream problems.
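To make the A/B/C/D categorization above concrete, here is a small Python sketch that assigns hypothetical study elements to priority tiers and reports how quality effort might be allocated. The elements, tier assignments, and effort shares are invented for illustration and are not taken from the DCRI case.

```python
# Illustrative sketch of A/B/C/D prioritization of study elements.
# The elements, tiers, and effort shares below are hypothetical,
# loosely modeled on the DCRI categorization described above.

from collections import Counter

study_elements = {
    "Primary endpoint adjudication":  "A",  # Critical
    "Informed consent process":       "A",  # Critical
    "Key safety event capture":       "A",  # Critical
    "Bleeding event tracking":        "B",  # Important
    "Non-serious AE detail forms":    "C",  # Nice to have
    "Exploratory imaging substudy":   "C",  # Nice to have
    "Redundant demographic re-entry": "D",  # Worthless -> cut
}

# Hypothetical share of quality effort allocated per tier
effort_share = {"A": 0.70, "B": 0.20, "C": 0.10, "D": 0.00}

counts = Counter(study_elements.values())
for tier, label in zip("ABCD", ["Critical", "Important", "Nice to have", "Worthless"]):
    print(f"Tier {tier} ({label}): {counts.get(tier, 0)} element(s), "
          f"~{effort_share[tier]:.0%} of quality effort")
```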

By recognizing these misconceptions and challenges, clinical trial teams can address them head-on. In summary, Quality by Design is about doing the important things right, from the start, with everyone involved. It means narrowing our focus to what really matters, continuously watching and adjusting, and fostering a culture where quality is owned by all. Next, we’ll look at some real-world examples of QbD in action, and then provide a checklist that trial teams can use to implement these principles.


Real-World Examples of QbD in Action

To better understand how Quality by Design principles translate into practice, here are a few real-world case examples highlighting QbD approaches:
  • Streamlining a Complex Trial (Global Phase III Example): Company X applied QbD early during the design of a global Phase III trial that had to meet a very tight timeline. By engaging their cross-functional team in identifying the trial’s CtQ factors and eliminating unnecessary procedures, they were able to build a streamlined, simpler protocol without compromising scientific rigor. The result was a study design that was easier to execute and remained on schedule. This illustrates how upfront QbD planning can save time later – the focus on “just what’s needed” prevented avoidable delays (such as those caused by overly complicated eligibility criteria or extraneous data collection). In essence, the team asked at every design decision: “Does this contribute to our critical objectives, or is it extraneous?” and cut out anything not adding value. The success of this trial (completed on time with quality data) demonstrates QbD’s payoff in efficiency.
  • Ensuring Endpoint Quality and Reducing Burden (1,000-Patient Multicenter Trial): An investigator at Company Y used QbD to thoughtfully design a 1,000-patient, multicenter trial evaluating a heart valve therapy. Key CtQ factors were identified at the outset. For example, the primary endpoint – thromboembolic events – was critical to patient safety and study conclusions. The team developed specific procedures to ensure no events were missed: they created a telephone script for site coordinators to consistently ask patients about symptoms and an algorithm to evaluate any potential thromboembolic events. These proactive measures built quality into how endpoint data were gathered, making it less likely that an important event would go unreported or be reported inconsistently. Conversely, the team examined other aspects of the trial that were not critical. They initially determined that collecting exhaustive data on every minor adverse event was not essential for this trial’s objectives, given the well-understood safety profile of the drugs involved. Instead of burdening sites with redundant reporting, they streamlined adverse event collection (e.g. focusing on certain key events and otherwise capturing non-serious AEs in a simplified manner). This decision was revisited after discussions with regulators – some argued that bleeding events should also be closely tracked. The compromise was to classify bleeding risk as “important but not critical,” ensuring it was monitored but with a lighter touch. By tailoring the data collection to what truly mattered, the trial avoided unnecessary complexity while still maintaining safety oversight. Notably, the DCRI team brought together all key stakeholders – the sponsor, regulators (FDA), clinicians, surgeons, investigators, and even the Data Safety Monitoring Board – early in the planning process to align on these critical factors. This inclusive QbD approach built consensus on trial priorities. The outcome was that they “eliminated multiple facets of the study that were adding unnecessary complexity, while still answering the primary question”. The trial was largely executed remotely and proved that focusing on CtQs (and not sweating the small stuff) can both maintain quality and improve practicality.
  • Enhancing Recruitment through Simplified Design (Long-term Outcomes Trial): Company Z (a smaller pharma) applied QbD principles in planning a 5-year outcomes trial in collaboration with two partner organizations. By using QbD to refine their trial design – for instance, simplifying visit schedules and data requirements to reduce patient burden – they created a study that was easier for sites and patients to participate in. This trial saw faster-than-expected recruitment as a result. In other words, quality by design had a positive side effect: by focusing on critical data and patient-centric scheduling, the study became more attractive to participants and investigators, accelerating enrollment. It’s a good reminder that a quality-focused design (one that is not overly onerous) often aligns with operational success as well.

Each of these examples underscores a few takeaways. First, identifying CtQ factors early guides where to invest your time and resources (whether it’s training site staff on an endpoint collection tool or cutting non-essential procedures). Second, QbD often simplifies trials – removing clutter that doesn’t serve a critical purpose – and this simplicity can improve compliance, enrollment, and overall trial performance. Third, stakeholder engagement and buy-in (from regulators to patients) are crucial to make sure the quality plan is robust and accepted by all. Real-world experiences show that when teams truly embrace Quality by Design, the result is not only a “higher-quality” trial in the abstract sense, but one that runs smoother and achieves its objectives more reliably.


Checklist: Implementing Quality by Design in Your Clinical Trial

For trial teams ready to put QbD into practice, here is a concise, actionable checklist. This list can serve as a roadmap during trial planning and execution to ensure that the principles of Quality by Design are effectively applied:
  • Establish a Quality Culture from the Start: Begin by making quality a core value in your team. Ensure leadership and all team members understand that “quality” means designing trials right, not just avoiding findings. Encourage open discussion of risks and empower team members to speak up about potential issues. (Remember, quality is not just the QA department’s responsibility – it’s everyone’s job to build it in.)
  • Form a Cross-Functional QbD Team: Convene a diverse team for trial design that includes clinical operations, medical, statistics, data management, monitoring, quality assurance, and if possible, site representatives or patient advisors. Engage these stakeholders early. This QbD working group will collaborate to identify risks and critical factors. Involving investigators and even patients in protocol planning can uncover feasibility or safety issues that internal teams might miss. Early stakeholder input leads to more robust, patient-centric protocols.
  • Identify Critical-to-Quality (CtQ) Factors: Clearly define what aspects of the trial are critical to achieving valid results and to safeguarding participant rights and well-being. Ask: “What are the key outcomes, data, and processes that, if flawed, would undermine the study?” Typical CtQs might include primary efficacy endpoint measurements, key safety endpoints (e.g. occurrence of serious adverse events), informed consent comprehension, participant retention, or investigational product handling. Keep the list focused – it’s better to single out a few truly critical factors than to name 20 items that can’t all be priorities.
  • Assess Risks to Quality (Risk Assessment): For each CtQ factor identified, systematically assess what could go wrong. Use tools like risk brainstorming or a failure mode and effects analysis (FMEA) with your cross-functional team. Consider the likelihood of each risk, its potential impact on data integrity or patient safety, and how detectable or preventable it is. This assessment should result in a prioritized list of “errors that matter” – for example, risk of dose calculation errors at sites, or risk of misclassification of primary endpoint events. Prioritize risks that have high impact on critical factors. It’s expected (and encouraged by regulators) to focus on the most important risks rather than trying to mitigate every minor issue. (A minimal risk-scoring sketch follows this checklist.)
  • Develop Mitigation Strategies for Key Risks: Plan measures to control or reduce the top risks. This is where quality is “designed into” the protocol and trial processes. Mitigations might include protocol modifications (e.g. simplifying a cumbersome aspect of the trial), adding specific training or tools, setting limits and triggers, or establishing special monitoring procedures. Ensure each high-priority risk has at least one corresponding risk control. For instance, if a CtQ factor is accurate measurement of a primary endpoint, a mitigation could be a standardized assessment tool or central adjudication committee to ensure consistency. If patient retention is critical, a mitigation could be a patient engagement plan or flexibility in visit scheduling. Document these strategies in a trial Quality Management Plan, linking each critical risk to how it’s managed.
  • Streamline Protocol and Processes (Focus on Essentials): Leverage the QbD analysis to simplify your trial design. Remove or reduce any procedure, data collection, or requirement that is not serving a critical purpose. Every data point or process should deliver value (scientific insight or safety assurance) that outweighs its burden. Critically examine all “nice-to-have” elements – could they be eliminated or done in a leaner way? Strive for a protocol that is as straightforward as possible while still addressing your primary objectives. Simpler trials not only reduce site and patient burden (which can improve compliance and enrollment), but they also have fewer opportunities for error. As a sanity check, review your protocol through the eyes of a site coordinator or participant to ensure it’s realistic.
  • Define Quality Tolerance Limits (QTLs) and Metrics: As part of risk control, establish QTLs or threshold values for critical metrics – in essence, the “alarm limits” that, if exceeded, indicate a potential quality issue requiring action (a minimal QTL check is sketched earlier in this article). For example, you might set a QTL for protocol deviations per site, query rate on critical data, or patient withdrawal rate. Additionally, define metrics for ongoing quality oversight (e.g. timeliness of data entry, consent process errors, etc.). Make sure these are actionable – the study team should know what to do if a threshold is breached (e.g. trigger a root cause investigation, retraining, protocol amendment consideration). Document these in the Quality Management Plan and monitor them regularly.
  • Incorporate Risk-Based Monitoring (RBM) and Oversight: Align your monitoring plan with the QbD priorities. Rather than visiting each site with the same frequency or verifying 100% of data, use a risk-based monitoring approach. Focus on sites or data points that the risk assessment flagged as higher risk. This can include centralized monitoring techniques – for instance, real-time data review to spot anomalies in critical data across sites. ICH E6(R3) explicitly allows and encourages proportionate monitoring strategies, including remote and centralized methods, as long as they ensure quality where it matters. Ensure that your CRO or monitoring team is on board with this approach and that it’s clearly described in the monitoring plan. Effective RBM will allocate more experienced oversight to, say, a complex efficacy endpoint at a few high-enrolling sites, instead of expending equal effort on every form at every site. This targeted oversight is a direct extension of QbD into trial conduct. (A site-level risk-scoring sketch also follows this checklist.)
  • Train and Communicate with Your Team and Sites: Once the protocol and quality plan are set, invest time in training the study team and investigative sites on the critical elements of the trial. Don’t just train on the “what” of the protocol, but the “why” – emphasize which factors are critical to quality and what procedures are in place to ensure those are done right. For example, if accurate dose administration is critical, ensure every site understands the dosing process deeply and knows the importance of following it exactly. If patient-reported outcomes are critical, train sites on how to properly instruct patients to complete questionnaires. This targeted training aligns everyone with the QbD priorities so that the study’s critical factors get the attention they deserve. It’s often helpful to share the rationale (e.g. “We’re focusing on X because it’s crucial for data integrity”) – this can motivate site staff to be vigilant on those points. Clear roles and responsibilities should be documented: everyone should know who is responsible for each quality control activity. When each team member knows their part in the quality plan, nothing falls through the cracks.
  • Conduct Continuous Quality Reviews: Throughout the trial, periodically review quality metrics and risk indicators. Set a schedule (e.g. monthly quality meetings or after key milestones) to evaluate if any trends are emerging that threaten CtQ factors. Use tools like quality dashboards or reports from your centralized monitoring to spot issues early. For instance, if you see a spike in protocol deviations at certain sites or a concerning pattern in safety data, your team should discuss and act promptly. Continuous assessment means being ready to implement corrective and preventive actions (CAPA) during the trial, not just at the end. If needed, adjust your risk controls – QbD is iterative: an unforeseen issue may become critical and warrant new resources, while an over-estimated risk may allow you to relax certain checks and focus elsewhere. Keep documentation of these reviews and any actions taken as part of the trial master file. Regulators will appreciate that you actively managed quality rather than passively following an initial plan.
  • Document Learnings for Future Trials: As the trial concludes (or at interim points), capture what worked well and what didn’t in your QbD approach. Were there critical risks that never materialized (perhaps thanks to mitigations) or ones that did despite controls? Gathering these insights helps refine the QbD process for subsequent studies – feeding a cycle of continuous improvement in your organization’s trial quality. Over time, teams build a sort of “QbD knowledge base” of effective strategies and common pitfalls to avoid. This organizational learning is how a quality culture strengthens.
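Two short sketches illustrate steps from this checklist. First, the risk-assessment step: each risk to a CtQ factor receives simple 1–5 ratings for likelihood, impact, and difficulty of detection, and the product of the three is used to rank priorities. The risks and ratings below are hypothetical, and this multiplicative scoring is one common convention rather than a method prescribed by ICH.

```python
# Minimal sketch of risk prioritization for CtQ factors.
# Risks, ratings (1 = low, 5 = high), and the scoring scheme are
# illustrative assumptions, not a prescribed ICH method.

risks = [
    # (risk description, likelihood, impact, difficulty of detection)
    ("Misclassification of primary endpoint events", 3, 5, 4),
    ("Dose calculation errors at sites",             2, 5, 3),
    ("Incomplete informed consent documentation",    2, 4, 2),
    ("Late data entry for non-critical forms",       4, 1, 1),
]

scored = sorted(
    ((likelihood * impact * detection_difficulty, desc)
     for desc, likelihood, impact, detection_difficulty in risks),
    reverse=True,
)

print("Risk priorities (highest first):")
for score, desc in scored:
    print(f"  score {score:>3}: {desc}")
```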
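Second, the risk-based monitoring step: the sketch below turns a few central-monitoring indicators into a composite site risk score and maps the score to a monitoring intensity. The indicator names, weights, and cut-offs are invented for illustration; a real RBM plan would derive them from the trial’s own risk assessment.

```python
# Illustrative site risk scoring for risk-based monitoring (RBM).
# Indicators, weights, and intensity cut-offs are hypothetical.

sites = {
    "Site 101": {"deviation_rate": 0.12, "query_rate": 0.08, "enrollment_vs_plan": 1.6},
    "Site 205": {"deviation_rate": 0.03, "query_rate": 0.02, "enrollment_vs_plan": 0.9},
    "Site 318": {"deviation_rate": 0.07, "query_rate": 0.10, "enrollment_vs_plan": 1.1},
}

def site_risk_score(ind: dict) -> float:
    # Weighted sum: deviations and queries raise risk; unusually fast
    # enrollment (relative to plan) also warrants closer oversight.
    return (5.0 * ind["deviation_rate"]
            + 3.0 * ind["query_rate"]
            + 0.5 * max(0.0, ind["enrollment_vs_plan"] - 1.0))

for name, indicators in sites.items():
    score = site_risk_score(indicators)
    if score > 0.8:
        intensity = "on-site visit"
    elif score > 0.4:
        intensity = "targeted remote review"
    else:
        intensity = "centralized monitoring only"
    print(f"{name}: risk score {score:.2f} -> {intensity}")
```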

By following this checklist, trial teams can systematically incorporate Quality by Design into their workflow. The steps ensure that from planning through execution, the trial remains focused on what truly matters for quality. The end result should be a study that not only meets regulatory requirements, but does so with fewer surprises and inefficiencies, because quality was engineered into its very blueprint.


Conclusion

Quality by Design under ICH E6(R3) represents a shift from seeing quality as a box-checking exercise to seeing it as an integral part of trial design and conduct. The updated GCP guidelines challenge us to be proactive: to plan our trials with careful thought to what is critical for success and to anticipate risks before they become problems. In practical terms, this means engaging stakeholders early, simplifying where possible, targeting our resources to high-risk areas, and maintaining vigilant, adaptive oversight throughout the study. While adopting QbD can require cultural change and initial effort, the payoff is significant – more robust trials, data you can trust, fewer protocol amendments, and often a smoother path to completion. For QA professionals and clinical trial managers, embracing QbD isn’t just about complying with ICH E6(R3); it’s about elevating the quality and efficiency of clinical research. By implementing the principles and checklist above, trial teams can fulfill the spirit of ICH E6(R3) and ICH E8(R1): designing quality into trials from day one and delivering reliable results that ultimately better protect patients and advance science.