Determining Criticality–Process Parameters and Quality Attributes Part III: Process Control Strategies—Criticality throughout the Lifecycle

Publication
Article
BioPharm International, March 2014
Volume 27
Issue 3

This series presents a practical roadmap in three parts that applies scientific knowledge, risk analysis, experimental data, and process monitoring throughout the three stages of the process validation lifecycle. In Parts I and II, risk analysis and process characterization studies were used to assign criticality risk levels to critical quality attributes and critical process parameters, and the concept of a continuum of criticality was established. In Part III, the author applies the continuum of criticality to develop the process control strategy and move through Stages 2 and 3 of the new process validation lifecycle.

With the most recent FDA (1) and International Conference on Harmonization (ICH) guidances (2-4) advocating a new paradigm of process validation based on process understanding and control of parameters and less on product testing, the means of determining criticality has come under greater scrutiny. The FDA guidance points to a lifecycle approach to process validation (see Figure 1).

Figure 1: Process validation lifecycle.

In Part I, the author used risk analysis and applied the continuum of criticality to quality attributes during the process design stage of process validation. After using process knowledge to relate the attributes to each process unit operation, the inputs and outputs of each unit operation were defined to determine process parameters and in-process controls. An initial risk assessment was then completed to determine a preliminary continuum of criticality for process parameters.

In Part II, the preliminary risk levels of process parameters provided the basis of characterization studies based on design of experiments. Data from these studies were used to confirm the continuum of criticality for process parameters.

At this point in the process development stage, the design space has been determined. It may not be rectangular (if there are higher-order terms in the models) and may not include the entire proven acceptable range (PAR) for each critical process parameter (CPP). In fact, the design space is not defined by the combination of the PARs for each CPP, given that the full PAR for one CPP ensures the quality of the critical quality attribute (CQA) only when all other CPPs do not vary. The design space represents all combinations of CPP set points for which the CQA meets acceptance criteria.

Overall, the design space developed from process characterization study models represents a level of process understanding. Like all models, however, the design space is only as good as the data that drives the analysis. The CQAs, on average, may meet acceptance criteria, but individual lots, and samples within lots, are at risk of failure when operating at the limits of the design space. For this reason, the operational limits for the CPPs are frequently tighter than the design space. This tighter space is the last part of the ICH Q8 paradigm (2) (see Figure 2) and is called the control space, which equates to normal operating range (NOR) limits for each CPP.

Figure 2: Knowledge, design, and control space.

Stage 1: From models to design space to control space
At the conclusion of the process characterization studies, the design space describes each CQA as a function of process parameters of various levels of risk, or continuum of criticality. Additionally, these models have been confirmed, by experiment or prior knowledge, to adequately represent the full-scale manufacturing process. This classical multivariate approach combines impact from each CPP to predict the response of the CQA as each CPP moves through its PAR. These mathematical expressions can be represented graphically as either contour or 3-D response surface plots.

Even this view of the design space is too simplistic. Ensuring that a process has a statistically high probability (e.g., >95%) of a CQA reliably meeting its acceptance criteria for a given combination of CPPs requires a more involved computational analysis. This analysis may lead to revising CPP set points and ranges.

Several computational statistical methods are available for analysis of process reliability. Each of these requires specialized statistical software.
These methods include:
• Monte Carlo simulation inputs the CPPs as probability distributions to the design-space models and iterates to produce the CQA as a probability distribution; capability analysis can then be applied against the CQA’s acceptance criteria. This method is limited by the estimates of the CPP distributions from process characterization studies, which will not necessarily represent the same level of inherent variation as the commercial process. Sensitivity analysis on these estimated distributions may strengthen the approach (a sketch follows this list).
• Predictive Bayesian reliability (5) incorporates the CPPs, uncontrolled variables such as raw material and environmental conditions, inherent common-cause variability, and variation due to unknown model parameters to determine a design space with a high reliability of meeting the CQAs.
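
As a hedged illustration of the Monte Carlo approach, the sketch below propagates assumed CPP distributions through a hypothetical quadratic design-space model and estimates the probability that the CQA meets its acceptance criterion. The coefficients, distributions, and acceptance limit are illustrative assumptions, not the models developed in this series.

```python
# Minimal Monte Carlo sketch: propagate assumed CPP distributions through a
# hypothetical design-space model and estimate the probability the CQA meets
# its acceptance criterion. Model coefficients and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Assumed CPP distributions (set point +/- variation estimated from characterization)
temp = rng.normal(loc=30.0, scale=0.5, size=n)   # temperature, degrees C
ph   = rng.normal(loc=7.0,  scale=0.1, size=n)   # pH

# Hypothetical design-space model for a CQA (e.g., % purity) fitted in Stage 1,
# with a residual noise term for unexplained common-cause variation
def cqa_model(temp, ph):
    return 98.5 - 0.05 * (temp - 30.0) ** 2 - 1.2 * (ph - 7.0) ** 2 + rng.normal(0, 0.2, size=n)

cqa = cqa_model(temp, ph)
reliability = np.mean(cqa >= 98.0)   # assumed acceptance criterion: purity >= 98.0%
print(f"Estimated probability of meeting the CQA criterion: {reliability:.3f}")
```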

Design space models often become a series of complex, multifactor equations that are not suitable for describing the required ranges for each CPP in a production batch record. Contained within the design space, the control space consists of a suitable NOR for each parameter.

Table I provides some example methods for developing the NORs and the control space, together with their advantages and disadvantages. Option 1 is included only because it represents a historical approach broadly employed to establish NORs. This approach is not consistent with the current quality-by-design (QbD) approach to process validation and will not be sufficient to defend a final NOR. Issues with Option 2 have been discussed previously. Of the first three options, the reliability approach (Option 3) is the most robust, but it requires sophisticated statistical skills and may be reserved for only very high-risk CPPs.

Table I: Example methods for determining a normal operating range (NOR) for a critical process parameter (CPP). CQA is critical quality attribute, PAR is proven acceptable range, QbD is quality by design.

| Option | Advantages | Disadvantages | Span of range |
|---|---|---|---|
| 1. NOR set equal to PAR for a single CPP | Simple | No other CPP effects or interactions considered. Not consistent with QbD methodology (i.e., poor assurance of quality). | Widest |
| 2. NOR same as design space (optional: rectilinear space) | Accounts for other CPP effects on the CQA | Based on process characterization models. Does not ensure lot-to-lot performance (model is “on average”). | Narrower than #1 |
| 3. NOR based on reliability methods | Accounts for other CPP effects on the CQA. Ensures high reliability of meeting CQAs. | Based on process characterization models. Requires sophisticated analysis. May be tighter than available control capability. | Generally narrower than #1 and #2; possibly narrower than all options |
| 4. NOR based on design space with “safety margin” | Accounts for other CPP effects. Partial allowance for lot-to-lot variation. | Based on process characterization models. Only partial allowance for lot-to-lot performance. | Narrower than #2 |
| 5. NOR set by control capability | Good for non-CPPs. Keeps the CPP in a tight range of control (lowers risk by lowering occurrence). | Range is narrow and may not allow for future unknown variability. Exceeding the range does not necessarily lead to CQA failure. | May be narrowest of all options, depending on level of control |

Option 4 is based on a “safety margin” that may be determined in a variety of ways. One choice is to measure how much an actual parameter varies around its set point. For example, if a temperature set point is 30.0 °C, it may be observed to vary from 29.5 °C to 30.5 °C (± 0.5 °C). The safety margin of 0.5 °C is applied to narrow the CPP limits relative to the design space. Therefore, if the design space is 25.0 °C to 35.0 °C, the NOR becomes 25.5 °C to 34.5 °C. Additional factors, such as calibration error, can be added to provide a wider safety margin.

Option 5 is the narrowest method for determining the NOR. Here, the ability to control the parameter determines its range. For example, a pH set point of 7.0 may have a design space of 6.5 to 7.5. However, if control of the pH is shown to be ± 0.2, then the NOR is 6.8 to 7.2. The primary disadvantage of such a narrow range is that even if the CPP’s NOR is exceeded, the CQA may not move outside its acceptance range, so an excursion triggers a deviation that does not necessarily reflect a true quality risk. Option 5 is well suited to setting the NOR of non-CPPs, since the CQAs are not affected. For example, a mixing set point of 200 rpm is a non-CPP; if the mixer’s control is qualified for ± 20 rpm, then the NOR is 180-220 rpm. Both calculations are sketched below.
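
The following minimal sketch reproduces the Option 4 and Option 5 arithmetic described above; the helper function names are illustrative assumptions, while the numeric values are those from the examples.

```python
# Minimal sketch of the NOR calculations described for Options 4 and 5.
# The numeric inputs come from the article's examples; the helper functions
# themselves are assumptions for illustration.

def nor_with_safety_margin(design_low, design_high, control_var, cal_error=0.0):
    """Option 4: narrow the design-space limits by the observed control variation
    plus any calibration error."""
    margin = control_var + cal_error
    return design_low + margin, design_high - margin

def nor_from_control_capability(set_point, control_var):
    """Option 5: NOR is simply set point +/- demonstrated control capability."""
    return set_point - control_var, set_point + control_var

# Option 4: temperature design space 25.0-35.0 C, observed control variation +/- 0.5 C
print(nor_with_safety_margin(25.0, 35.0, 0.5))        # -> (25.5, 34.5)
# Option 5: mixing speed set point 200 rpm, qualified control of +/- 20 rpm (non-CPP)
print(nor_from_control_capability(200, 20))           # -> (180, 220)
```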

The conclusion to process validation Stage 1 (process design) is documented by summarizing the control strategy per ICH Q10:
Control strategy: A planned set of controls, derived from current product and process understanding, that assures process performance and product quality. The controls can include parameters and attributes related to drug substance and drug product materials and components, facility and equipment operating conditions, in-process controls, finished product specifications, and the associated methods and frequency of monitoring and control (4).

The control strategy may be a single document or a package of several documents as described in the company’s process validation plan. This documentation includes or references the following:
• Continuum of criticality for process parameters
• Continuum of criticality for quality attributes
• The mechanistic or empirical relationships of CPPs to CQAs (design space)
• The set points and NOR for CPPs (control space)
• Acceptance criteria and sampling requirements for CQAs, in-process controls, and raw materials testing
• In-process hold (non-processing) time limits and storage conditions
• Key operating parameters (KOPs) and performance attributes (all non-critical) used to monitor process performance, together with their set points and ranges.

Stage 2: Criticality and process qualification
The FDA Process Validation Guidance requires, as part of Stage 2 (process qualification), qualification showing that “utilities and equipment are suitable for their intended use” (1). Suitability, or fitness for purpose, is assessed either through risk-based commissioning and qualification (e.g., ASTM E2500 [6]) or through the traditional installation, operation, and performance qualification approach.

The acceptance criteria for qualification of equipment and utilities must be consistent with the Stage 1 control strategy. Operational qualification studies must show that the utilities and equipment are capable of controlling each relevant CPP throughout its NOR. The risk level of the CPP determines the amount of testing: high-risk CPPs require more replication and rigorous data analysis, while low-risk CPPs require only simple verification. Equipment performance qualification can be coordinated with full-scale process characterization studies to test the manufacturing equipment under representative product conditions with the CPPs up to the limits of their NORs. These full-scale studies also provide the opportunity to confirm the suitability of the planned sampling plans and acceptance criteria for process performance qualification (PPQ).

The continuum of criticality also influences the study design for PPQ, including the number of batches, sampling plans, and study acceptance criteria. Under the new FDA Process Validation Guidance, the purpose of the PPQ is to demonstrate that the process design and control strategy are capable of meeting CQA acceptance criteria not just for a fixed number of PPQ lots, but for future commercial lots (1). For this reason, while releasing the defined number of PPQ lots under protocol is certainly a necessary criterion, it is not sufficient in and of itself to provide assurance that the process is under control and will continue to produce releasable lots. Trying to qualify an out-of-control process creates a possible scenario in which the individual PPQ lots pass their release criteria (i.e., they each meet specifications) but the process performance qualification acceptance criteria are not met.

Since the PPQ is a means of confirming process reproducibility under typical production conditions, CPPs and KOPs are expected to be set at their normal set points and remain within their NORs. Studies executed at the limits of the NORs are done during Stage 1 (process design) and must be completed before Stage 2 (process qualification).

The number of PPQ lots and the study acceptance criteria are linked by both statistical and risk-based analysis of process characterization (Stage 1) data and existing process knowledge. Both the Parenteral Drug Association (PDA) Technical Report 60 (7) and the International Society for Pharmaceutical Engineering (ISPE) Product Quality Lifecycle Initiative (PQLI) guide series (8-10) provide several example methods and are excellent resources on the topic. Some possible choices for determining the number of lots are:
• Structural: determined by process complexity, dosage form strengths, and number of equipment trains; this includes bracketing and matrix strategies and may involve separating groups of unit operations into separate PPQ protocols.
• Risk-based: uses a comprehensive analysis to assess how much process risk remains after applying existing process knowledge and process design data.
• Statistical: based on calculations targeting capability, tolerance intervals, or overall reliability of meeting CQA acceptance criteria.

Companies may combine these strategies with each other or with business requirements to produce sufficient batches for launch quantities. These strategies for selecting the number of PPQ lots and the specific steps used are described in or referenced by the process validation plan.

Where practical and meaningful, a statistical method of determining the number of batches is recommended, although there is no standard industry approach. Statistical methods inherently incorporate the risk component reflecting the level of understanding derived from Stage 1 of the process validation lifecycle. Misapplying this risk information may lead to unjustified confidence in PPQ batch results. The statement “all five PPQ lots pass CQA acceptance criteria” has no statistical meaning in determining how much risk the process control strategy carries in producing future successful lots. A statistical criterion such as “results from the five passing PPQ lots will show that 90% of future lots are expected to meet CQA acceptance criteria with 95% confidence” describes a well-controlled process that not only produces five successful lots, but is also highly likely to produce successful lots in the future.

It is generally difficult to prove tight statistical criteria (90-95% confidence and 95-99% conformance or coverage) with PPQ lots alone. One strategy is to apply wider criteria (50% confidence) to a smaller number of PPQ lots and then monitor a larger number of CPV (Stage 3) lots to meet the target statistical criteria (95% confidence). These CPV lots use the same enhanced sampling and monitoring as the PPQ lots, but are released under normal batch acceptance criteria. A statistical approach to determining the number of PPQ batches is often used in combination with risk-based or structural strategies.

Per the FDA Process Validation Guidance, sampling plans for PPQ must be “more extensive than is typical during routine production” and “be adequate to provide sufficient statistical confidence of quality both within a batch and between batches. The confidence level selected can be based on risk analysis as it relates to the particular attribute under examination.” Since the continuum of criticality has been applied to CQAs and in-process controls, the risk level may be used to determine acceptance criteria and sampling requirements.

The following example uses statistical tolerance intervals (11) where coverage is the proportion of the expected “future” data to be contained within the acceptance limits:
• High-risk CQA: 99% coverage at 95% confidence
• Medium-risk CQA: 95% coverage at 95% confidence
• Low-risk CQA: 90% coverage at 95% confidence.

The actual number of samples required varies based on the data type (discrete or continuous) and the expected distribution (normal or unknown). The samples are assessed by individual lot, or if no significant lot-to-lot variation is seen, by pooling the sample data across lots.
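
As an illustration, the sketch below computes a two-sided tolerance interval for a medium-risk CQA (95% coverage at 95% confidence) using Howe's approximation for the k-factor, assuming continuous, normally distributed data; the sample values are invented for illustration only.

```python
# Minimal sketch of a two-sided normal tolerance interval check for a CQA,
# using Howe's approximation for the k-factor (assumes continuous, normally
# distributed data). Coverage/confidence follow the risk-based example above.
import numpy as np
from scipy import stats

def tolerance_k_factor(n, coverage=0.95, confidence=0.95):
    """Approximate two-sided tolerance factor (Howe, 1969)."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)   # lower-tail chi-square quantile
    return np.sqrt((n - 1) * (1 + 1 / n) * z ** 2 / chi2)

# Illustrative PPQ sample data for a medium-risk CQA (95% coverage, 95% confidence)
data = np.array([99.1, 98.8, 99.3, 99.0, 98.9, 99.2, 99.1, 98.7, 99.0, 99.2])
k = tolerance_k_factor(len(data), coverage=0.95, confidence=0.95)
low = data.mean() - k * data.std(ddof=1)
high = data.mean() + k * data.std(ddof=1)
print(f"Tolerance interval: {low:.2f} to {high:.2f} (compare to acceptance limits)")
```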

Stage 3: Criticality and continued process verification
At the conclusion of a successful PPQ, process validation activities move into an ongoing monitoring and review phase called continued process verification (CPV). It should now be clear that each of the previous two process validation stages has limitations. Stage 1 (process design) depends upon risk analysis, prior knowledge, and scientific principles, because not all possible parameters and interactions can be evaluated through experimentation. Many design of experiments (DOE) studies are performed at smaller scale with limited material variation to conserve resources. Stage 2 (process qualification) provides only a limited number of commercial-scale runs to build confidence in the control strategy developed in Stage 1 and cannot fully explore all raw material variability. PPQ studies can therefore provide only a limited amount of statistical confidence that the CQAs will continue to meet their acceptance criteria in the future.

Continued process verification is the recognition that process validation is a lifecycle that does not end with PPQ and the start of commercial production. In exchange for CPV, FDA will allow changes to a process without revalidation when drift is observed in identified CPPs, so long as the changes do not violate the defined PAR and are based upon Stage 1 and Stage 2 process understanding.

Despite best efforts, the design and control spaces are models limited by the data and circumstances from which they were developed. Process parameters and material attributes determined to have little or no impact may undergo sudden or subtle shifts, which may drive CQAs beyond their acceptance limits. CPV is an ongoing program for monitoring and statistically analyzing these critical inputs and their expected relationships with CQAs as defined by the design space.

Table II is an example of the monitoring and review frequency for a process based on the established continuum of criticality for parameters and attributes from Stages 1 and 2. In this example, low-risk parameters (such as non-critical, non-key) are not monitored due to their low impact on quality attributes. They may, however, be initially monitored for several of the first commercial lots following PPQ to confirm their low impact. The review frequency shown in the table is built on an assumption of a frequently manufactured product such that each review includes at least several lots of new data.

Table II: Example of continued process verification (CPV) monitoring and review frequency. CPP is critical process parameter, CQA is critical quality attribute, CMA is critical material attribute, PM/Cal is preventive maintenance/calibration.

 

| Category | Item / risk level | Monitor per batch? | Statistical review frequency |
|---|---|---|---|
| CPP | High-risk | Yes | Month |
| CPP | Medium-risk | Yes | Quarter |
| CPP | Low-risk | No | None |
| Non-CPP | Key | Yes | Quarter and Annual |
| Non-CPP | Non-key | No | None |
| Raw materials | CMA | Yes | Quarter |
| Raw materials | Non-CMA | No | None |
| CQA | High-risk | Yes | Month |
| CQA | Medium-risk | Yes | Quarter |
| CQA | Low-risk | Yes | Annual |
| In-process controls |  | Yes | Month |
| Performance attribute | Key | Yes | Quarter and Annual |
| Performance attribute | Non-key | No | None |
| Other information | Change control | Per event | Included with relevant reviews |
| Other information | PM/Cal | Per schedule | Included with relevant reviews |
| Other information | Complaints | Per event | Included with relevant reviews |

The frequency of formal statistical analysis varies with the level of risk to product quality. High-risk parameters and attributes are reviewed more often so that out-of-control conditions that could affect product quality can be detected and addressed quickly. Since key process parameters and performance attributes are indicative of performance, not quality, the frequency of their review may be decided on a case-by-case basis.

Sources of information from other quality and manufacturing systems such as change control; scheduled preventive maintenance, calibration, or production interruptions; and customer complaints should be made available during reviews. These may help explain unexpected shifts or variation in the production process.

Intuition and visual assessment of tabular data are inefficient at separating inherent lot-to-lot “common cause” variation from “special cause” variation. A special cause is an event or trend in the data set that is statistically unlikely to occur if the process is maintaining a state of control. Several quality and statistical references (12) summarize the graphical statistical tools (e.g., control charts and capability charts) and how to interpret them (e.g., Western Electric or Nelson’s rules). Different statistical techniques may be employed commensurate with the risk of the parameter or attribute being analyzed and the frequency of its analysis. More sophisticated tools, such as cumulative sum (CUSUM) charts, can provide earlier warning of changes in the mean than standard control charts.
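
The sketch below illustrates two such trending tools on invented lot data: an individuals control chart with 3-sigma limits estimated from the average moving range, and a two-sided tabular CUSUM that can flag small, sustained mean shifts earlier than the 3-sigma limits alone. The data, target, and sigma values are illustrative assumptions, not results from any product.

```python
# Minimal sketch of two CPV trending tools: an individuals (I) control chart
# and a tabular CUSUM for early detection of mean shifts. Data are illustrative.
import numpy as np

def individuals_chart_limits(x):
    """3-sigma limits for an individuals chart, estimated from the average moving range."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))      # average moving range
    sigma_hat = mr_bar / 1.128                # d2 constant for subgroups of size 2
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def tabular_cusum(x, target, sigma, k=0.5, h=5.0):
    """Two-sided tabular CUSUM; flags a shift when either cumulative sum exceeds h*sigma."""
    c_plus = c_minus = 0.0
    signals = []
    for i, xi in enumerate(x):
        c_plus = max(0.0, xi - (target + k * sigma) + c_plus)
        c_minus = max(0.0, (target - k * sigma) - xi + c_minus)
        if c_plus > h * sigma or c_minus > h * sigma:
            signals.append(i)
    return signals

# Illustrative lot results for a CQA (e.g., % purity), with a small upward drift at the end
lots = [98.1, 98.3, 97.9, 98.2, 98.0, 98.4, 98.3, 98.6, 98.7, 98.9, 99.0]
lcl, center, ucl = individuals_chart_limits(lots)
print(f"I-chart limits: LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}")
print("CUSUM signals at lot index:", tabular_cusum(lots, target=98.1, sigma=0.25))
```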

All statistical methods carry a risk of false warnings and can suffer from overinterpretation. Trigger events such as single out-of-control points, oscillations, trends, and mean shifts do not necessarily indicate a risk to quality or require immediate corrective action. They may instead provide long-term opportunities for continuous improvement to reduce variation. CPV is also a useful means of assessing the effect of process change control.

The design space relationships between CPPs and CQAs should be refined as the CPV program collects data on more and more lots. This is not as straightforward as a designed experiment, because production data are confounded: several CPPs move within their control space at once and may interact in their effects on a CQA. Nevertheless, CPV provides an additional body of data and process knowledge that includes more real-life variation in equipment, personnel, and materials than any planned study. Consequently, periodic reassessment of the continuum of criticality should be made.
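
As an illustration of how routine CPV data might be used to check the design-space relationships, the sketch below refits a simple linear model of a CQA against two CPPs from invented per-lot records. Because routine data span only the NORs, such a refit serves as a consistency check on the Stage 1 model rather than a replacement for designed experiments.

```python
# Minimal sketch (an assumption, not the author's method) of periodically re-fitting
# the CQA-vs-CPP relationship from routine production data collected during CPV.
import numpy as np

# Illustrative per-lot records: temperature (C), pH, and the resulting CQA value
temp = np.array([29.8, 30.1, 30.0, 29.9, 30.2, 30.0, 29.7, 30.1])
ph   = np.array([7.01, 6.98, 7.02, 7.00, 6.99, 7.03, 7.00, 6.97])
cqa  = np.array([98.4, 98.5, 98.3, 98.4, 98.6, 98.2, 98.5, 98.4])

# Fit CQA ~ b0 + b1*temp + b2*pH by ordinary least squares
X = np.column_stack([np.ones_like(temp), temp, ph])
coef, *_ = np.linalg.lstsq(X, cqa, rcond=None)
print("Refit coefficients (intercept, temp, pH):", np.round(coef, 3))
# Compare the refit slopes with the Stage 1 model before revising the design space.
```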

The knowledge derived from Stage 3 (continued process verification) can be used to drive continuous improvement initiatives. High-risk CPPs have the greatest impact on CQAs and therefore represent the best opportunity to improve quality by reducing variation. Control strategies such as process analytical technology (PAT) may allow for better control, a reduced NOR, and, therefore, reduced variation relative to the PAR. A lower likelihood (occurrence) of exceeding the PAR may be sufficient to reduce the risk level of the process parameter. The fewer high-risk CPPs and critical material attributes (CMAs) a process has, the more robust it becomes at producing quality product.

Conclusion: Continuum of criticality throughout the process validation lifecycle
The continuum of criticality as applied to parameters and attributes is a framework for assessing risk at each stage of the process validation lifecycle:
• In Stage 1 (process design):
a. Risk levels of CQAs are assigned based on severity to patients
b. Process parameters are related to CQAs by unit operation
c. Prior knowledge and scientific knowledge are applied to assign initial risk levels to process parameters
d. Risk levels are used to apply staged DOEs to process characterization studies
e. Models developed from DOEs quantify process parameter criticality and form a design space to ensure quality of CQAs
f. Control strategy defines the NOR of CPPs.
• In Stage 2 (process qualification):
a. Control strategy provides acceptance criteria for equipment qualification
b. Risk-based and statistical methods use the continuum of criticality to determine the number of PPQ lots required
c. Risk levels for CQAs determine statistical acceptance criteria and sampling plans for PPQ.
• In Stage 3 (continued process verification):
a. Risk levels determine monitoring and review frequency of parameters and attributes
b. CPV statistical tools are commensurate with the risk level of the parameter/attribute being analyzed
c. Ongoing verification supports and/or refines the design space
d. Low-risk or non-CPPs may be shown to have higher impact
e. High-risk CPPs offer opportunities for continuous improvement and potential to reduce risk-level of the parameter.

By applying a continuum rather than a binary method to criticality throughout the lifecycle, we can set priorities to focus time and resources in areas of greatest impact to quality including experimental design, design space development, acceptance criteria, data monitoring, and continuous improvement.

References
1. FDA, Guidance for Industry, Process Validation: General Principles and Practices, Revision 1 (Rockville, MD, January 2011).
2. ICH, Q8(R2) Harmonized Tripartite Guideline, Pharmaceutical Development, Step 4 version (August 2009).
3. ICH, Q9 Harmonized Tripartite Guideline, Quality Risk Management (June 2006).
4. ICH, Q10, Harmonized Tripartite Guideline, Pharmaceutical Quality System (April 2009).
5. J. J. Peterson, J Biopharm. Stat. 18 (5) 959-975 (2008).
6. ASTM, E2500-07, Standard Guide for Specification, Design, and Verification of Pharmaceutical and Biopharmaceutical Manufacturing Systems and Equipment (West Conshohocken, PA, 2012).
7. PDA, Technical Report 60, Process Validation: A Lifecycle Approach (Bethesda, MD, 2013).
8. ISPE, Product Quality Lifecycle Initiative (PQLI) Good Practice Guide, Overview of Product Design, Development, and Realization: A Science- and Risk-Based Approach to Implementation (Tampa, FL, Oct 2010).
9. ISPE, Product Quality Lifecycle Initiative (PQLI) Good Practice Guide, Part 1 - Product Realization using QbD, Concepts and Principles (Tampa, FL, Nov 2010).
10. ISPE, Product Quality Lifecycle Initiative (PQLI®) Discussion Paper, Topic 1 - Stage 2 Process Validation: Determining and Justifying the Number of Process Performance Qualification Batches (www.ispe.org, accessed Aug. 20, 2012).
11. ISO, ISO 16269-6:2014, Statistical Interpretation of Data, Part 6 - Determination of Statistical Tolerance Intervals (Geneva, Switzerland, Jan. 23, 2014).
12. PDA, Technical Report 59: Utilization of Statistical Methods for Production Monitoring (Bethesda, MD, 2012).

About the Author
Mark Mitchell is principal engineer at Pharmatech Associates.

