How statistical methods and novel indices can be used to monitor and benchmark variability and to guide continuous improvement programs.
Published in 2004, FDA's "Pharmaceutical cGMPs for the 21st Century: A Risk-Based Approach" (1) provided the initial momentum needed to promote collaborative efforts within the pharmaceutical industry. Designed to modernize pharmaceutical manufacturing and make it more efficient, the report highlights the importance of continuous improvement: optimizing processes, reducing variability, and eliminating wasted effort.
Reducing variability benefits both patients and manufacturers; therefore, regulators have voiced strong support for continuous improvement activities. For example, FDA and the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) support quality-by-design (QbD)-based product development. QbD, advanced in ICH Q8 (2), focuses on the use of multivariate analysis in combination with knowledge-management tools to understand the impact of critical material attributes and critical process parameters on drug-product quality attributes. The heightened product and process understanding that results from using this framework provides a foundation for continuous improvement.
With the implementation of a continued/ongoing process verification program at Stages 3A and 3B of the product lifecycle, manufacturers now have continual assurance that unplanned departures will be detected and can be adjusted for. As outlined in process validation guidance from regulatory agencies including FDA, the European Medicines Agency (EMA), the Pharmaceutical Inspection Co-operation Scheme (PIC/S), and the World Health Organization (WHO), process verification identifies potential risks and initiates continuous improvement activities when needed, helping to prevent product failures.
However, for process verification to succeed, adequate statistical assessment tools are essential. These tools must detect undesired process variability while guarding against overreaction.
The pharmaceutical industry uses multiple quality metrics to drive continuous improvement efforts, and FDA is developing guidance (3) to help simplify and standardize the reporting of metrics. The agency's draft guidance recognizes the utility of metrics and describes how FDA intends to use them to develop compliance and inspection policies and practices.
Regulators at FDA also see these metrics as key to developing practices, such as risk-based inspection scheduling, to improve the agency’s ability to predict future shortages and to encourage manufacturers to adopt better technologies with low process variability. The agency fully supports and endorses continuous improvement and innovation in the pharmaceutical industry.
A best estimate of process and method variability identifies the need for continuous improvement, enhances product and process understanding, and allows manufacturers to develop a better control strategy. Sources of variability can usually be attributed to the "six Ms": man, machine, material, measurement, method, and mother nature. By developing measures of the major sources of variability, best estimates of the manufacturing process (method) variability and the analytical (measurement) method variability can be deduced.
The first step in isolating the variability due to the manufacturing process from the variability due to the analytical method is to define the response variable, or critical quality attribute, and a source of data in which the other sources of variability are minimized.
The Stage 3A batches (post-Stage 2 process performance qualification batches) are processed on the same model of qualified equipment (minimizing variability due to machine) by the same pool of trained operators working to standard operating procedures (reducing variability created by man). Raw material used in the process must meet testing specifications and come from a common supplier (minimizing variability due to material), while environmental and facility controls and monitoring limit environmental variability (reducing the potential effects of mother nature). By minimizing these four sources of variability, the manufacturing process and analytical method variability can be isolated.
Overall variability can be broken down into its main sources, as shown in Equation 1:

[Eq. 1] $s^2_{\mathrm{Overall}} = s^2_{\mathrm{Process}} + s^2_{\mathrm{Analytical}} + s^2_{\mathrm{Other}}$
Given the controls on other sources of variability in Stage 3A batches, $s^2_{\mathrm{Other}}$ can be assumed to be negligible. Any remaining variability can then be subsumed under the process and analytical sources to yield the partition of interest [Equation 2]:

[Eq. 2] $s^2_{\mathrm{Overall}} = s^2_{\mathrm{Process}} + s^2_{\mathrm{Analytical}}$
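As a hypothetical illustration of this partition: if the overall variance of a quality attribute is $s^2_{\mathrm{Overall}} = 4.0$ and the analytical method contributes $s^2_{\mathrm{Analytical}} = 1.5$, then $s^2_{\mathrm{Process}} = 4.0 - 1.5 = 2.5$, so the manufacturing process accounts for $2.5/4.0 = 62.5\%$ of the total variance.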
An estimate of the inherent process variability (IPV) and of the variability due to the analytical method can then be obtained by variance component analysis.
Variance component analysis is a statistical tool that partitions overall variability into individual components. The statistical model underlying this tool is the random-effects analysis of variance (ANOVA) model, which can be written as [Equation 3]:
[Eq. 3] $y_{ij} = \mu + a_i + e_{ij}$, where $i = 1, \ldots, r$ and $j = 1, \ldots, n$

where $y_{ij}$ is the jth measurement in the ith batch, $\mu$ is the overall mean (an unknown constant), $a_i$ is the effect attributable to the ith batch, and $e_{ij}$ is the residual error.
It should be noted that in this model, as opposed to a fixed-effects ANOVA model, $a_i$ is considered to be a random variable, where random conditions include different chemists, equipment, batches, and days (4). The random variables $a_i$ and $e_{ij}$ are assumed to be independent with mean zero and variances $\sigma^2_a$ and $\sigma^2_e$, respectively (5-7).
Inherent process variability (IPV) is a measure of batch-to-batch variability, while analytical (method) variability is a measure of the variability of material within the same batch (8). As such, estimates of $\sigma^2_a$ measure inherent process variability, while estimates of $\sigma^2_e$ measure analytical method variability.
Table I. The random-effects analysis of variance (ANOVA) table.

Source of variation      Degrees of freedom    Mean square    Expected mean square
Between batches          $r - 1$               $MS_B$         $\sigma^2_e + n\sigma^2_a$
Within batches (error)   $r(n - 1)$            $MS_E$         $\sigma^2_e$
Other measures of interest can be obtained from the above model. For instance, the ratio of the two variance components, $\sigma^2_a/\sigma^2_e$, provides a standardized measure of the variance of the population group means, while the intra-class correlation, $\rho = \sigma^2_a/(\sigma^2_a + \sigma^2_e)$, is a measure of the proportion of the total variance due to the process.
Estimates for these values can be obtained from the mean squares in the ANOVA table (Table I) as follows [Equation 4]:

[Eq. 4] $\hat{\sigma}^2_e = MS_E \quad \text{and} \quad \hat{\sigma}^2_a = (MS_B - MS_E)/n$
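To make Equation 4 concrete, the following Python sketch (an illustrative implementation, not taken from the cited references; the function name and simulated values are hypothetical) computes the two variance-component estimates for a balanced data set of r batches with n measurements each:

```python
import numpy as np

def anova_variance_components(data):
    """ANOVA (method-of-moments) estimates for the one-way random-effects model.

    data: array of shape (r, n) -- r batches, n measurements per batch.
    Returns (sigma2_a, sigma2_e): the batch-to-batch (process) and
    residual (analytical) variance component estimates.
    """
    r, n = data.shape
    batch_means = data.mean(axis=1)
    grand_mean = data.mean()

    # Mean squares from the random-effects ANOVA table (Table I)
    ms_between = n * np.sum((batch_means - grand_mean) ** 2) / (r - 1)
    ms_within = np.sum((data - batch_means[:, None]) ** 2) / (r * (n - 1))

    # Equation 4; a negative sigma2_a estimate is truncated at zero
    sigma2_e = ms_within
    sigma2_a = max((ms_between - ms_within) / n, 0.0)
    return sigma2_a, sigma2_e

# Simulated example: 20 batches x 10 dosage-uniformity measurements,
# with true sigma_a = 1.5 (process) and sigma_e = 1.0 (analytical)
rng = np.random.default_rng(0)
batch_effects = rng.normal(0.0, 1.5, size=(20, 1))
data = 100.0 + batch_effects + rng.normal(0.0, 1.0, size=(20, 10))

sigma2_a, sigma2_e = anova_variance_components(data)
rho = sigma2_a / (sigma2_a + sigma2_e)  # intra-class correlation
print(sigma2_a, sigma2_e, rho)
```

The truncation at zero reflects a known quirk of the method-of-moments estimator: when between-batch variation is small, $(MS_B - MS_E)/n$ can be negative.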
Other estimators are available, particularly for unbalanced data, where different numbers of measurements are taken per batch. The restricted maximum likelihood (REML) estimator is a viable alternative available in most statistical software packages.
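A minimal sketch of the REML route, assuming the statsmodels package and reusing the data array from the previous example; the random-intercept specification below is one standard way to fit this model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Reshape the (20 x 10) array from the previous example into long format
df = pd.DataFrame({
    "assay": data.ravel(),
    "batch": np.repeat(np.arange(20), 10),
})

# Random-intercept model: batch is the random effect; REML is the default
result = smf.mixedlm("assay ~ 1", df, groups=df["batch"]).fit(reml=True)

sigma2_a_reml = float(result.cov_re.iloc[0, 0])  # batch (process) variance
sigma2_e_reml = float(result.scale)              # residual (analytical) variance
```

Unlike the moment estimator, REML handles unbalanced designs directly and never returns a negative variance component.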
This model fits situations in which the batch effect is considered random and each batch yields n samples. For example, 20 batches might be considered a random sample from a larger pool of batches for a specific product. From each of these 20 batches, a random sample of 10 units would be taken to measure finished-product dosage uniformity.
The variability in mean finished-product dosage uniformity between the batches, $\hat{\sigma}^2_a$, would yield an estimate of the inherent process variability, while $\hat{\rho}$ would provide an estimate of the proportion of variability due to the process. Confidence intervals can be constructed for these estimates and are available in common statistical software packages.
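As one concrete option, the sketch below uses the classical exact interval for the intra-class correlation in a balanced one-way random-effects model; this is a standard textbook construction, not necessarily the authors' procedure:

```python
import numpy as np
from scipy import stats

def icc_confidence_interval(data, alpha=0.05):
    """Exact 100(1 - alpha)% confidence interval for the intra-class
    correlation rho in a balanced one-way random-effects model."""
    r, n = data.shape
    batch_means = data.mean(axis=1)
    ms_between = n * np.sum((batch_means - data.mean()) ** 2) / (r - 1)
    ms_within = np.sum((data - batch_means[:, None]) ** 2) / (r * (n - 1))

    # Observed F ratio and its degrees of freedom
    f_obs = ms_between / ms_within
    df1, df2 = r - 1, r * (n - 1)
    f_low = f_obs / stats.f.ppf(1 - alpha / 2, df1, df2)
    f_upp = f_obs * stats.f.ppf(1 - alpha / 2, df2, df1)

    lower = (f_low - 1) / (f_low + n - 1)
    upper = (f_upp - 1) / (f_upp + n - 1)
    return max(lower, 0.0), min(upper, 1.0)
```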
The estimated IPV, as well as the proportion of total variance due to the process, can be used during Stage 3 batch monitoring to focus efforts on process improvement. As more information is gathered for a product, a rise in the IPV itself, or a rise in the proportion of total variance due to the process ($\rho$), could indicate the need to investigate possible process improvements.
On the other hand, a decrease in IPV or a decline in $\rho$ would indicate that the process is improving. To obtain a picture of how well the process is performing overall for a specific product, a comparison with other products employing the same process can be made by generating a benchmark.
The novel PaCS index (named for the first letters in the authors’ last names: Pazhayattil, Collin, and Sayyed-Desta) provides an indication of a current product’s process performance in comparison to other similar products.
To derive the PaCS index, a representative set of other products generated with the same process would be chosen. For each of these chosen products, the IPV would then be calculated as above. The PaCS index could then be calculated using the following [Equation 5]:
[Eq. 5] $\mathrm{PaCS} = IPV_P / IPV_B$
where $IPV_B$ is the benchmark inherent process variability and $IPV_P$ is the inherent process variability for the product under consideration. $IPV_B$ is the median IPV of the selected products with processes similar to the current product.
A PaCS index greater than 1 indicates that process variability is high relative to the benchmark, while a PaCS less than 1 indicates that it is low; a value below 1 is therefore preferred.
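A minimal sketch of Equation 5 in Python, with hypothetical IPV values standing in for a real benchmark set:

```python
import numpy as np

def pacs_index(ipv_product, benchmark_ipvs):
    """Equation 5: PaCS = IPV_P / IPV_B, with IPV_B the median IPV
    of similar products made with the same process."""
    return ipv_product / float(np.median(benchmark_ipvs))

# Hypothetical IPV estimates for five similar products (benchmark set)
benchmark = [1.8, 2.4, 2.1, 3.0, 1.6]
print(pacs_index(2.9, benchmark))  # > 1: more variable than the benchmark
```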
Because the distribution of the PaCS index is not analytically derivable, confidence intervals can be estimated using Monte Carlo simulation. The PaCS index together with IPV values and the other derived statistics provide a platform upon which further decision making can take place. For instance, high PaCS values would indicate that the process for a specific product is not performing as expected.
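One way such a simulation could be set up (a sketch assuming normally distributed batch and residual effects, reusing anova_variance_components from the earlier example; the design is illustrative, not the authors' published procedure):

```python
import numpy as np

def pacs_monte_carlo_ci(fitted, n_sims=5000, alpha=0.05, seed=0):
    """Percentile confidence interval for PaCS by parametric Monte Carlo.

    fitted: list of (sigma2_a, sigma2_e, r, n) tuples, one per product;
            the first entry is the product of interest, the remainder
            form the benchmark set.
    """
    rng = np.random.default_rng(seed)
    sims = np.empty(n_sims)
    for s in range(n_sims):
        ipvs = []
        for sigma2_a, sigma2_e, r, n in fitted:
            # Simulate a balanced data set from the fitted components
            batch = rng.normal(0.0, np.sqrt(sigma2_a), size=(r, 1))
            sample = batch + rng.normal(0.0, np.sqrt(sigma2_e), size=(r, n))
            ipv, _ = anova_variance_components(sample)
            ipvs.append(ipv)
        sims[s] = ipvs[0] / np.median(ipvs[1:])
    return tuple(np.quantile(sims, [alpha / 2, 1 - alpha / 2]))
```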
Estimation of inherent process variability ($IPV_P$) allows a PaCS index to be determined for the product and helps in understanding the contributions to variability from the manufacturing process and the analytical method. In addition, PaCS is a metric developed in relation to the manufacturing process at a particular production site.
The index can be used to prioritize continuous improvement projects at the site or to support site-transfer initiatives. PaCS provides a tangible, quantitative measure of robustness for supply-chain decision making. The index can also be a component of the periodic process performance review by senior management recommended by ICH Q10 (9).
In addition, IPV and the PaCS index may be used to decide, for example, who should be primarily responsible for a specific continuous improvement project (i.e., the process group, the analytical group, or a combination), which is often a point of contention. They can also be used to determine which site has the best PaCS index for a product, a factor to consider when deciding for or against increases in a site's product volume.
In summary, PaCS is a single index that can provide valuable insight to decision makers and help drive continuous quality improvement programs in biopharmaceutical and pharmaceutical development as well as manufacturing.
1. FDA, Pharmaceutical cGMPs for the 21st Century: A Risk-Based Approach, Final Report (FDA, September 2004).
2. ICH, Q8(R2) Pharmaceutical Development (ICH, August 2009).
3. FDA, Submission of Quality Metrics Data, Draft Guidance for Industry (FDA, November 2016).
4. K. Barnett et al., "Analytical Target Profile: Structure and Application Throughout the Analytical Lifecycle," USP Stimuli to the Revision Process 42 (2016), pp. 1-15.
5. R.K. Burdick and F.A. Graybill, Confidence Intervals for Variance Components (Marcel Dekker, New York, 2006).
6. S.R. Searle et al., Variance Components (John Wiley, New York, 2006).
7. H. Sahai and M. Ojeda, Analysis of Variance for Random Models (Birkhäuser, Boston, 2006).
8. B. Nunnally, Journal of Validation Technology, Summer 2009, pp. 78-88.
9. ICH, Q10 Pharmaceutical Quality System, Harmonized Tripartite Guideline (ICH, June 2008).
BioPharm International, Vol. 30, No. 6, June 2017, pp. 32–35.
When referring to this article, please cite it as J. Collins et al., "A Novel Metric for Continuous Improvement During Stage Three," BioPharm International 30 (6) 2017.