To speed up the downstream process, you must get the right data, in the right amount, at the right time. Here's how.
ABSTRACT
Over the years, much has been learned about the downstream purification of monoclonal antibodies (MAbs). Standard processes are well established and there is extensive literature that describes methods and approaches.1 At the same time, there is a need to improve the process. In recent years, the US Food and Drug Administration has been influential in steering process improvement efforts through its Pharmaceutical cGMPs for the 21st Century, Quality by Design (QbD), and process analytical technology (PAT) initiatives. This has led to industry reports such as "A-MAb: A Case Study in Bioprocess Development," and novel methods for defining the design space.2
One driver for improvement is the desire to get products to market faster, and time to market is a direct result of the speed of knowledge development. The speed of obtaining knowledge (especially quantitative knowledge) affects timing, cost, quality, and the ability to meet regulatory requirements (Figure 1). Costs go down as knowledge accumulates, approaching an asymptotic steady-state cost. Speeding up knowledge acquisition means costs are lowered more quickly, so the time value of information is greatest in the early stages of a project.
Figure 1
Strategies that emphasize the highest payout in quantitative knowledge per development time segment provide a significant increase in the time available to review findings. The result is better decisions and better product quality.
Stephen Covey teaches, "Begin with the end in mind."3 Doing so will increase the likelihood that you end up where you want to be as quickly as possible. The building blocks of QbD (Figure 2) show us what needs to be done to get to the end result. QbD shows the sequence and linkage of steps in knowledge development, allowing the right data to be collected when you need them.
Figure 2
Along the way, four critical aspects of the downstream development process must be addressed:
1. development of process and product understanding (risk assessment, experimentation method development, defining the design space)
2. development of the process control (process control strategy, change control strategy)
3. regulatory filing and approval
4. continuous improvement.
The development of appropriate measurement systems is central to the QbD building blocks. Measurement can be thought of as part of the process modeling work shown in Figure 2, which cannot be done effectively without good inputs. QbD clearly is more than just creating the design space.
A commonly asked question is, "How much data?" It can't be said at the outset whether QbD will require more or fewer data; the answer is situational and depends on the approach currently being used. In the early stages of QbD implementation, you will likely collect more data than needed. Also, regulatory agencies are still reviewing QbD approaches and will become more comfortable with data collected in ways different from those in past filings. Over time, the amount of data required will decrease as we learn how to target the most critical data effectively. Both the FDA and good science stress that to produce quality products on a sustainable basis, you need to understand your process. Process understanding exists when you can accurately predict the performance of your process. This leads to the conclusion that there are sufficient data when performance can be accurately predicted. The required accuracy (quality) and precision of the prediction vary over the life of the development process; both are higher at the end of development than at the beginning.
Of course, it is critical to balance speed and risk. An effective mantra is to be bold but not reckless. At each step in the development process, be aware of what data will be needed at various times during process development and manufacturing operations. The most important data to collect at any time are those needed to satisfy the short- and long-term objectives of the program. Over time, all of the important data will be collected. Unimportant, or low-value, data and information may be too expensive to collect for the payout they provide. The goal is to minimize risk at each point in time.
Experience has shown that effective experimental strategies are a function of the experimental environment at hand. The environment defines the appropriate designs to be used to collect the data.6 A major goal is process understanding, which is a function of knowing the variables that have a major effect on the process. There are three environments that are commonly encountered.
1. Little is Known About the Critical Process Variables
At the beginning of many studies, the critical variables are not known. In such cases, it is prudent to start with a screening study, followed by characterization and optimization experiments that develop more detailed data on the variables with the large effects identified in the screening experiment.6
The end product of a prioritization matrix approach (i.e., a cause-and-effect matrix) is a ranked list of the variables and the actions to be taken, including performing studies using designed experiments, refining the measurement system, and assessing risks using a failure modes and effects analysis (FMEA).7 These are all building blocks of QbD.
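As an illustration only, the short Python sketch below shows how the weighted scoring behind a prioritization (cause-and-effect) matrix can be computed; the variable names, attribute weights, and link scores are hypothetical placeholders rather than values from any actual program.

```python
# Minimal sketch of a cause-and-effect (prioritization) matrix.
# Process variables are scored against weighted quality attributes;
# the weighted totals give the ranked list that drives follow-up
# actions (DOE studies, measurement work, FMEA).
# All names, weights, and scores below are hypothetical placeholders.

quality_attributes = {"aggregation": 9, "charge_variants": 7, "yield": 5}  # importance weights

# Link score (1 = weak, 9 = strong) of each process variable against each
# quality attribute, as judged by subject matter experts.
scores = {
    "pH":            {"aggregation": 9, "charge_variants": 7, "yield": 3},
    "temperature":   {"aggregation": 7, "charge_variants": 3, "yield": 3},
    "hold_time":     {"aggregation": 5, "charge_variants": 3, "yield": 1},
    "concentration": {"aggregation": 3, "charge_variants": 1, "yield": 5},
}

def weighted_total(variable_scores):
    """Sum of (attribute weight x link score) for one process variable."""
    return sum(quality_attributes[qa] * s for qa, s in variable_scores.items())

ranking = sorted(scores, key=lambda v: weighted_total(scores[v]), reverse=True)

for rank, variable in enumerate(ranking, start=1):
    print(f"{rank}. {variable:<15} total = {weighted_total(scores[variable])}")
```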
2. A Major Core Relationship is Known
In many cases, the subject matter experts already know the core relationships that have effects so large that they swamp the effects of the other factors. In such cases, the first step is to run a small set of experiments to understand the nature of this core response relationship.
3. Generalizing the Core Relationship to Other Products
As we move up the development cycle, we may want to determine if the core relationship can be generalized across similar product types. The question here is whether we can spot key parameters that allow us to be more efficient in quickly identifying the experimental design space of interest, and understanding the nature of the underlying relationship. The objective is to be even more efficient with new pipeline products.
An example that illustrates many of the points made above is determining the operating range for low pH viral inactivation, a critical step in downstream purification. The purpose of this process step is to achieve maximum viral inactivation at an acidic pH condition that still maintains product quality. Based on prior knowledge, product quality can be significantly affected by several process parameters in addition to pH, including temperature, concentration, and hold time. The objective is to find the design space of these process parameters for normal operation and for worst-case conditions.
A typical pH response curve is shown in Figure 3. A commonly used strategy is to run several pH points at a fixed set of process parameters to identify the knee of the curve, which is the critical pH range for viral inactivation. After the knee of the curve has been identified, the process parameters can be varied one factor at a time until the desired set of conditions is identified. This process requires a large number of tests and takes considerable time. There is a better strategy.
Figure 3
First, we must quantitatively understand the core relationship illustrated in Figure 3. This is similar to how the commonly used strategy begins. The first step is to run experiments for a small number of points along the range of the core response relationship, in this case pH and the quality attribute (QA) in the expected target setting of the other process parameters. Figure 4 shows this relationship. Knowing the nature of the core relationship, we can create the experimental design space in the region of interest.
Figure 4
In this case, the region of interest is where the derivative is changing, i.e., the "knee," where the curve is bending over. When the knee of the curve has been found, a designed experiment is set up in the region of interest to explore the effects and interactions of the factors of interest, as illustrated in Figure 4. The workhorse central composite design is centered at the knee. From a relatively small number of experiments we can gain a good quantitative calibration of the region of interest and can establish a reasonable design space (Figure 5).
Figure 5
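The sketch below illustrates, under stated assumptions, how this step might be automated: a coarse pH scan (hypothetical data) is used to estimate the knee as the scanned point of greatest curvature, and a face-centered central composite design for pH and hold time is then built around it. The factor ranges and hold-time values are illustrative assumptions, not actual process settings.

```python
import numpy as np
from itertools import product

# Hypothetical coarse pH scan of the quality attribute (QA); values are placeholders.
ph = np.array([3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2])
qa = np.array([10., 14., 30., 62., 85., 94., 97.])

# Locate the "knee" as the scanned point of maximum curvature
# (largest absolute second derivative of QA with respect to pH).
curvature = np.abs(np.gradient(np.gradient(qa, ph), ph))
knee_ph = ph[np.argmax(curvature)]

# Build a two-factor face-centered central composite design (pH x hold time)
# centered at the knee. Center values and half-ranges are illustrative assumptions.
center = {"pH": knee_ph, "hold_time_min": 60.0}
half_range = {"pH": 0.2, "hold_time_min": 30.0}

coded = (
    list(product([-1, 1], repeat=2))          # factorial points
    + [(-1, 0), (1, 0), (0, -1), (0, 1)]      # axial (face-centered) points
    + [(0, 0)] * 3                            # replicated center points
)

design = [
    {
        "pH": center["pH"] + c_ph * half_range["pH"],
        "hold_time_min": center["hold_time_min"] + c_t * half_range["hold_time_min"],
    }
    for c_ph, c_t in coded
]

print(f"estimated knee at pH {knee_ph:.1f}")
for run in design:
    print(run)
```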
We are now ready to establish the acceptable region of process operation. It has been found that a logit function models the relationship between pH and the QA very well. Knowing the non-linear model form, we can determine the optimal design points and the most precise estimate of the acceptance region using the following equation:

QA = A / (1 + e^(-d(pH - b)))

in which A is the asymptotic maximum, b is the point of inflection where the quality attribute (QA) is equal to A/2, and d is the steepness of the curve close to the inflection point.
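As a minimal sketch, the following Python code fits this three-parameter logit model to a small set of pH points using scipy.optimize.curve_fit; the data shown are hypothetical and serve only to illustrate how A, b, and d would be estimated.

```python
import numpy as np
from scipy.optimize import curve_fit

def logit_model(ph, A, b, d):
    """Three-parameter logit curve: A = asymptotic maximum,
    b = pH at the inflection point (QA = A/2), d = steepness."""
    return A / (1.0 + np.exp(-d * (ph - b)))

# Hypothetical small study: a few pH points and the measured quality attribute.
ph = np.array([3.0, 3.3, 3.5, 3.7, 3.9, 4.2])
qa = np.array([8.0, 22.0, 48.0, 75.0, 90.0, 97.0])

# Starting values: rough guesses taken from the raw data.
p0 = [qa.max(), ph[np.argmin(np.abs(qa - qa.max() / 2))], 5.0]
(A_hat, b_hat, d_hat), cov = curve_fit(logit_model, ph, qa, p0=p0)

print(f"A = {A_hat:.1f}, b (inflection pH) = {b_hat:.2f}, d (steepness) = {d_hat:.1f}")
```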
The optimal design points for fitting a logit model are known, based on the point of inflection and the range. These are shown in Figure 6 for two different situations.
Different products have different points of inflection. Working from these points, a sensible design is found that takes into account the features of the nonlinear model and provides minimum error of prediction in the region of interest.
Figure 6
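One way such locally optimal points can be computed is sketched below: assuming prior values for A, b, and d, the code evaluates the Fisher information for candidate pH supports and selects the four-point design with the largest determinant (local D-optimality). The prior parameter values and the candidate grid are assumptions for illustration, not the actual design procedure.

```python
import numpy as np
from itertools import combinations

def gradient(ph, A, b, d):
    """Gradient of QA = A / (1 + exp(-d*(pH - b))) with respect to (A, b, d)."""
    s = 1.0 / (1.0 + np.exp(-d * (ph - b)))
    return np.array([s, -A * d * s * (1 - s), A * (ph - b) * s * (1 - s)])

def d_criterion(points, A, b, d):
    """Determinant of the (unnormalized) information matrix for a candidate design."""
    F = np.column_stack([gradient(p, A, b, d) for p in points])  # 3 x n
    return np.linalg.det(F @ F.T)

# Assumed (prior) parameter values from earlier studies -- placeholders.
A0, b0, d0 = 100.0, 3.5, 5.0

# Candidate pH grid and exhaustive search over all four-point supports.
grid = np.round(np.arange(3.0, 4.21, 0.05), 2)
best = max(combinations(grid, 4), key=lambda pts: d_criterion(pts, A0, b0, d0))

print("locally D-optimal pH points:", best)
```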
Knowledge of the general relationship or model can be used to learn more efficiently with new products of the same type. Using historical data and data from some small studies, we determined that many products share the same rate of decline (parameter d in the logit model) but differ in the point of inflection (the half point b, where QA = A/2). Identifying this location for different products is equivalent to sliding the curve along the pH axis.
Knowing that the general model holds for most new products allows the quick identification of an optimal experimental design scheme for a new product after running only a few preliminary experimental points. Thus, existing knowledge allows us to obtain new knowledge more quickly, reducing development time.
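A minimal sketch of this idea follows: assuming the asymptote A and steepness d carry over from earlier products, only the inflection point b is re-estimated from a few preliminary points on the new product. The shared parameter values and the new-product data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Platform knowledge carried over from previous products (assumed values).
A_SHARED, D_SHARED = 100.0, 5.0

def shifted_logit(ph, b):
    """Same asymptote and steepness as earlier products; only the
    inflection point b (the pH at which QA = A/2) is re-estimated."""
    return A_SHARED / (1.0 + np.exp(-D_SHARED * (ph - b)))

# A few preliminary points for a hypothetical new pipeline product.
ph_new = np.array([3.2, 3.6, 4.0])
qa_new = np.array([15.0, 65.0, 94.0])

(b_new,), _ = curve_fit(shifted_logit, ph_new, qa_new, p0=[3.5])
print(f"estimated inflection pH for the new product: {b_new:.2f}")
```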
After the design procedure has been worked out, development can be accelerated further by automating and validating the design and analysis software. An intranet-based tool is being developed to produce an optimal design series. The tool performs analysis with standard polynomial and non-linear model fits. This feature will allow those not skilled in the details of the method to produce effective designs and analysis output.
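The actual tool is not described here in detail; the sketch below simply illustrates the kind of automated comparison it might perform, fitting a standard polynomial and the non-linear logit model to the same data and reporting the residual sum of squares for each. Function names and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logit_model(ph, A, b, d):
    return A / (1.0 + np.exp(-d * (ph - b)))

def fit_and_compare(ph, qa, poly_degree=2):
    """Fit a standard polynomial and the non-linear logit model to the same
    data and report the residual sum of squares (RSS) for each."""
    # Polynomial fit.
    coeffs = np.polyfit(ph, qa, poly_degree)
    rss_poly = float(np.sum((qa - np.polyval(coeffs, ph)) ** 2))

    # Non-linear logit fit.
    p0 = [qa.max(), float(np.median(ph)), 5.0]
    popt, _ = curve_fit(logit_model, ph, qa, p0=p0, maxfev=5000)
    rss_logit = float(np.sum((qa - logit_model(ph, *popt)) ** 2))

    return {"poly_rss": rss_poly, "logit_rss": rss_logit, "logit_params": popt}

# Hypothetical data set passed to the tool.
ph = np.array([3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2])
qa = np.array([9.0, 18.0, 40.0, 68.0, 87.0, 95.0, 98.0])
print(fit_and_compare(ph, qa))
```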
A focused and risk-based control scheme also is being developed using the knowledge gained and will be used to develop the design space.
The knowledge gained now provides an opportunity to work toward a general mechanistic model. Working with subject matter experts, we recast the empirical model into a mechanistic form.8
Recognizing that organizations and individuals are in various states of maturity regarding QbD greatly facilitates the transition to the new development process.
For organizations that already have experience with QbD, improved use will result from periodic review (quarterly, semi-annual, or annual, depending on the organization) of how QbD is being used and what changes need to be made. This review should include the QbD strategy, the methods used, and the results obtained. The assessment should also evaluate how QbD has been used at the various stages of development, ranging from initiating development of a new drug, to regulatory submission, to product launch, to manufacturing scale-up and long-term production.
For organizations just starting QbD, one approach is the strategy of "starting small, thinking big"; i.e., start with a modest plan, but have clear goals, milestones, and a long-term vision. Behavior change will be needed, so pay attention to the principles of organizational change.
In any new initiative, it is helpful to have a set of principles to guide the effort. We have found the principles summarized in Table 1 to be useful. We must begin with a sense of urgency that the new approach is essential to our success.9 Senior management must establish the sense of urgency, see that a systematic approach is developed and used, and keep emphasizing the importance of developing process understanding and its relation to process variation.
Table 1
Top management also has to keep the organization focused on using the new approach in high impact areas and should request supporting data and measures of all results. Management also must lead efforts to ensure that management systems are in place to sustain the approach and results over time, and celebrate successes to recognize results and reinforce the desired behaviors.
Table 2
The lessons learned highlight what is necessary to increase the time value of data: getting the right data at the right time in the right amount. Table 2 presents some principles that can be used to guide the applications of this approach. These principles generally teach a holistic, end-to-end view. You must keep an eye on what the end result looks like, follow a systematic, structured approach, and remember the measurement system. This will speed up downstream development by getting the right data, in the right amount, at the right time.
Anthony Lonardo is the associate vice president of statistics and quantitative sciences, and Bo Qi is the director of process development, both at ImClone Systems, 908.541.8240, anthony.lonardo@imclone.com. Ronald D. Snee, PhD, is the founder and president of Snee Associates, LLC.
1. Shukla AA, Hubbard B, Tressel T, Sam GS, Low D. Downstream processing of monoclonal antibodies—application of platform approaches. J Chromatogr B. 2007 Mar 15;848(1):28–39.
2. Peterson JJ. A Bayesian approach to the ICH Q8 definition of design space. J Biopharm Stat. 2008;18(5):959–75.
3. Covey SR. The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change. New York, NY: Simon and Schuster; 1989.
4. Snee RD. Quality by Design—Four years and three myths later. Pharma Proc. 2009 Feb;14–16.
5. Snee RD. Building a framework for Quality by Design. Pharm Tech Online. 2009 Oct. Available from: http://pharmtech.findpharma.com/Special+Section%3a+Quality+by+Design/Building-a-Framework-for-Quality-by-Design/ArticleStandard/Article/detail/632988
6. Snee RD. Raising your batting average—remember the importance of strategy in experimentation. Qual Progr. 2009 Dec;64–8.
7. Hulbert MH, Feely LC, Inman EL, Johnson AD, Kearney AS, Michaels J, Mitchell M, Zour E. Risk management in pharmaceutical product development—white paper prepared by the PhRMA drug product technology group. J Pharm Innov. 2008;3:227–48.
8. Box GE, Draper NR. Empirical Model-Building and Response Surfaces. New York, NY: John Wiley and Sons; 1987.
9. Kotter J. A Sense of Urgency. Boston, MA: Harvard Business Press; 2008.