Can bioprocessing runs be consistently replicated in an inherently variable production environment?
Sean J. Morrison, the author of an editorial that analyzes scientific reproducibility, wrote, “Studies with revolutionary ideas commonly lead to many follow-on studies that build on the original message without ever rigorously testing the central ideas” (1). In addition to being a senior editor at the open-access journal eLife and an expert in stem cell biology, Morrison is the director of the Children’s Medical Center Research Institute at UT Southwestern, the Mary McDermott Cook Chair in Pediatric Genetics, and an investigator at the Howard Hughes Medical Institute.
If Morrison’s statement is true, the value of follow-on studies is greatly diminished, leading to an eventual erosion of scientific understanding over time. Revisiting and testing the protocols employed in groundbreaking studies is therefore a crucial element of damage control in the life-science industry. This idea applies equally to studies on bioprocessing techniques, especially as biologic drugs become more complex and new technology to manufacture them emerges.
The inability to reproduce results from a successful bioprocessing run is not only frustrating for manufacturers, but can also increase the costs of drug development and contribute to overall inefficiencies. In 2015, Leonard Freedman, president of The Global Biological Standards Institute (GBSI) and a researcher of reproducibility and cell lines, and his coauthors estimated that the annual economic cost of irreproducible research in the life sciences is approximately $28 billion (2). For each run that does not produce the anticipated result, several thousand dollars are likely lost to reagent and reference-material costs, says Freedman. While study design, laboratory protocols, data analysis, and reporting can also contribute to irreproducibility (2), researchers lose precious time when reagents don’t perform as expected; of these variables, reagent performance is likely the easiest to control.
Although reproducibility exercises can be expensive, refraining from critically evaluating previous research and conclusions from past experiments may be costlier in the long term than investing in reproducibility studies. If a reagent’s purity and homogeneity are not scrutinized early on, studies testing its uniformity may have to be conducted at a later date anyway, contributing to the overall cost of manufacturing operations. Thus, improving the validation and characterization of biologic materials reduces operating costs.
In 2011, researchers from Bayer found they could validate published data for only approximately 25% of studies on therapeutic drug targets (3).
In 2012, researchers from Amgen were able to validate only 11% (6 of 53) of the “landmark” papers that were used to justify the launch of various drug-discovery programs (4). And a 2016 survey published in Nature revealed that 70% of researchers have tried to replicate another author’s research findings, but nearly half of these researchers were unable to verify original conclusions (5).
Efforts to monitor reagent origin and purity have not been the focus of widespread evaluation in academic papers on bioprocessing. The specific topic of reproducibility in bioprocessing has not been examined in much depth, either, although biomanufacturers do try to understand why high or low yields occur, says Veronique Chotteau of the KTH Royal Institute of Technology, given that yield has such a high impact on the economics of a process and on drug safety. Historically, variation between runs in the same facility can be as high as 50%, Chotteau says.
Although process development and characterization improve reproducibility, biomanufacturers still regularly encounter differences between runs, notes Chotteau, who adds that cell growth is rarely exactly the same from batch to batch. Efforts to maintain reproducibility have prompted the biopharmaceutical industry to opt for serum-free media for the manufacture of cellular therapies (6); other studies stress that batch-to-batch reproducibility of glycosylation is essential to get a consistent end product (7).
Measurement of critical quality attributes (CQAs) for each run provides the most important information on run reproducibility, says Richard D. Braatz, the Edwin R. Gilliland professor of chemical engineering at Massachusetts Institute of Technology, who works on the mathematical modeling of bioprocessing plants. “In addition to direct measurement of CQAs for each run, reproducibility from run to run can be improved by operating a plant-wide mathematical model in parallel with manufacturing. Feeding process inputs from the real manufacturing process into the mathematical model would allow the model to make predictions that can be compared with the measurements.” Braatz adds, “Differences between the model predictions and measurements can be used to track changes in operations from run to run, and used to test hypotheses on potential causes,” a technique that he says has been applied in the oil refining, chemicals, and petrochemicals industries for decades, but has so far seen limited application in biologic pharmaceutical manufacturing.
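Braatz’s parallel-model approach amounts to tracking residuals, the differences between model predictions and plant measurements, and flagging runs where those residuals drift beyond historical limits. The sketch below illustrates the idea in Python; the titer model, variable names, and control limits are hypothetical placeholders for illustration, not a description of any specific plant model.

```python
import numpy as np

def track_run_residuals(model_predict, process_inputs, measured_cqas,
                        historical_sigma, sigma_limit=3.0):
    """Compare model-predicted CQAs with plant measurements for one run.

    model_predict: callable mapping a record of process inputs to a predicted CQA
    historical_sigma: residual spread estimated from past, in-control runs
    Returns the residuals and a boolean array flagging excursions.
    """
    predictions = np.array([model_predict(x) for x in process_inputs])
    residuals = np.asarray(measured_cqas) - predictions
    flags = np.abs(residuals) > sigma_limit * historical_sigma
    return residuals, flags

# Hypothetical first-order titer model fed the same inputs as the real process
model = lambda x: 0.10 * x["feed_rate_g_per_h"] * x["run_time_h"] / 2.0
inputs = [{"feed_rate_g_per_h": 2.0, "run_time_h": 96},
          {"feed_rate_g_per_h": 2.1, "run_time_h": 96}]
measured_titers = [9.4, 11.8]  # g/L
residuals, flags = track_run_residuals(model, inputs, measured_titers,
                                       historical_sigma=0.3)
print(residuals, flags)  # a flagged residual marks a run-to-run change to investigate
```

In line with Braatz’s description, a flagged run is a hypothesis to test, not an automatic rejection of the batch.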
Although some advances in design-for-manufacturing principles (8), which factor the cost of large-scale manufacture into design plans, have helped the industry plan for gaps in manufacturing reproducibility, and quality-by-design efforts have helped engineers gain better control over resulting outputs, there is little available literature testing the assertions of biomanufacturing best-practice models.
In addition, while there are many experiments in academic literature that use scale-down models to inform large-scale manufacturing, there are few examples of outside labs or researchers testing the so-called “optimal” set-ups touted by the original researchers in the academic papers. Nor are there studies that test what are thought to be the “optimal reference materials” for bioprocessing runs. Industry stakeholders say that a product is defined by a process, but who tests a process once it has been defined?
When a platform process does not deliver acceptable results, says Bruce Hamilton, head of characterization at antibody and life-sciences tool provider Abcam, suppliers refer to literature reports to examine relevant parameters of a product in development. Hamilton adds that process engineers “might also run a small-scale trial scouting lots of parameters [of the monoclonal antibody (mAb)] as a first test, but this is not primarily guided by the literature.”
The most efficient processes are those that can predictably be reproduced run after run. Given the concern circulating in the industry about reproducibility in drug-development experiments and clinical trials, there should also be industry concern that pivotal bioprocessing experiments cannot be replicated, both in scale-down models and at the commercial level. Production runs sometimes differ from one another after a process has been optimized for large-scale operation, or after a process has been transferred to a contract manufacturing organization. Are these run differences a result of subtle changes in protocol and methodology, or are materials the main culprit? When does the ability to reproduce a run correlate with good process understanding, and when is a good run simply a fluke?
“Reproducing a run does not mean a process is understood, only that the conditions perceived to be necessary could be performed again to obtain a similar result,” notes Tim Errington, who is metascience manager at the Center for Open Science and co-leader of the Reproducibility Project: Cancer Biology, a project which is attempting to replicate results from pivotal studies in cancer drug development (9). “Both runs could be the result of some unknown variable not thought to be a necessary part of the process-some aspect of the starting materials or a specific step.” Adds Errington, “The reverse is also the case, that an inability to obtain the same result does not correlate with poor process understanding. It could be an undocumented mistake confounding the result. But, importantly, as a result is replicated, it increases the reliability of the process and as more variables not thought to impact the result are varied (different lots/sources of reagents, different personnel, different manufacturing location, etc.) and the same result is achieved, it further increases the reliability and utility of the process.”
Of the five initial studies that are part of the Reproducibility Project: Cancer Biology, Errington and his colleagues were able to validate the findings of only two (10). The cancer project is similar to the Reproducibility Project, which was conducted to test the reproducibility of key psychology experiments (11).
Errington and colleagues concluded that while errors can be caused by the improper execution of an experimental technique, they can also be caused by problems with samples and materials. The researchers “undertook authentication of key biological materials (such as [short tandem repeat] STR profiling of cell lines)” to reduce the likelihood that a validation failure was due to error. To improve the probability of replication, the authors of the original studies were asked to “share any original reagents, protocols, and data in order to maximize the quality and fidelity of the replication designs.”
C. Glenn Begley wrote in a Nature paper that validation of reagents is a key factor influencing whether a study’s findings can be replicated-and he found that many investigators relied on findings from earlier papers for information on reference material validation, even though many of those preliminary papers did not include specific validation data either (12). Begley added, “There are also examples of investigators using an antibody even when the manufacturer declares it unfit for that particular purpose.” Antibody supplier Cell Signaling Technology (CST) shares that even a validated reagent, if used incorrectly, “can induce changes in the specificity and sensitivity of an antibody.”
Anita Bandrowski, founder and CEO at SciCrunch, the company that runs the Antibody Registry, tells this publication that a reagent from a reputable company should include information about the protocols in which the reagents will likely work best. And, as CST notes, an antibody should be validated using the application in which it actually will be used in practice. The company says that validating with a Western blot, for example, is of little value to inform a researcher about an antibody’s ability to work by immunofluorescence or immunohistochemistry. Even if the correct validation method is chosen, Karin Lachmi, PhD, co-founder, chief scientific officer, and president of Bioz, a public website that chronicles reagent mentions in methodology sections of scientific studies, notes that achieving the exact binding or enzyme level in an assay is still difficult: “The cause of this inconsistency can be attributed to quality variability in antibody/enzyme batches, different target systems, and different techniques” used in labs.
Exact reagents are sometimes difficult to pin down because an original researcher has left an institution, taking their knowledge with them. A 2013 paper that reviewed several hundred journal articles found that 54% of resources were not uniquely identifiable in publications (13).
When reagents are not validated, researchers may draw conclusions from their experiments that are simply inaccurate. They run the risk of using an antibody that recognizes the wrong protein, or may base conclusions on a cell line that is not the one they intended to study. To make the situation even more problematic, antibodies sometimes cross-react and recognize the wrong targets (14), which is why some researchers use Western blots that rely on the recognition of at least two different epitopes. Gathering appropriate reagents, running controls, and optimizing conditions is a major time sink, asserts Bandrowski, who is an expert on citing biological reagents. “If the original protocol is not adequately described, including a poor text description of the reagents, this part of the experiment can take weeks or months instead of days to weeks,” she says.
Although suppliers should make their best attempt to alert purchasers when changes in reagents occur, this can be challenging, says Bandrowski. “In academic labs there is a purchasing department and [supplier] companies often can’t track down who is actually making the purchase and using the product, so they are stuck with adding this information to their website and hoping that their customers come back before publishing.” Lachmi argues that revenue generation is a supplier’s top goal, and that currently, “changes that occur to reagents are insufficiently characterized and documented.” Because vendors have no incentive to alert customers about reagent changes, she reasons, a “vendor-neutral” clearing house such as Bioz is necessary to guide buying decisions.
NIH suggestions
Freedman says that to overcome the problem of reagent identification, standards and verification strategies must be developed for all reference materials (15). The National Institutes of Health’s (NIH’s) Rigor and Transparency Guidelines, which went into effect in 2016, require that any researcher applying for a grant incorporate a validation/authentication plan for every study (16). According to a section of the guidelines called “Authentication of Key Biological Resources in Grant Applications”, antibodies should be “validated by Western blot, ELISA, immunoprecipitation, immunofluorescence, or flow cytometry using knockdown cells and positive and negative controls, depending on the assay proposed” (17).
A big part of the scientific rigor that NIH supports, through the thorough description of methodology and experimental design, relies on the ability of outside scientists to replicate a study’s findings in some incarnation. Whether the benchmark is statistical similarity or a similar effect size, some of these parameters must be met. As the American Statistical Association suggests, “Reproducibility shouldn’t be thought of as a binary state-either reproducible or not reproducible-but as a continuum from hard-to-reproduce to easy-to-reproduce” (18).
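One concrete, if simplified, way to place a replication on that continuum is to ask whether the replication’s effect size falls inside the original study’s confidence interval, one of several criteria used in large replication projects. The sketch below is illustrative only; the numbers are invented, and passing or failing a single check does not by itself make a result “reproducible” or “irreproducible.”

```python
def effect_within_ci(original_effect, original_se, replication_effect, z=1.96):
    """Does the replication effect size fall inside the original's 95% CI?

    One graded criterion among several (statistical significance, subjective
    assessment, meta-analytic combination); not a binary verdict on its own.
    """
    lower = original_effect - z * original_se
    upper = original_effect + z * original_se
    return lower <= replication_effect <= upper

# Invented example values
print(effect_within_ci(original_effect=0.60, original_se=0.15,
                       replication_effect=0.42))  # True: 0.42 lies inside [0.31, 0.89]
```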
In 2014, the NIH developed a Data Discovery Index where researchers could analyze raw data from experiments, and a commenting system in which researchers could contribute their concerns about research protocols and methodologies (19, 20). NIH’s Resource Identification Initiative (https://scicrunch.org/resources) was launched to help study authors authenticate (as well as properly cite) organisms and cell lines (21). Errington argues that if all data and methodology items in a paper were automated and shared, this would further drive the discoverability and reuse of information. He says the burden to communicate research findings should not solely be on the researcher, but should be built into the “research ecosystem.”
Because there are so many ways to get to a final product, and the types of products that are now in development are becoming increasingly complex (e.g., incorporating fusion proteins and bispecific antibodies with multiple active domains), researchers need to be as specific as possible when it comes to describing the starting raw materials used in studies and the characterization methods that were used to validate starting reagents. According to Freedman, there are some standards developed for antibody validation and cell-line authentication, but there is currently no standard for reporting serum characteristics (15).
Proper citation of an antibody is necessary for study reproduction, insists Tove Alm of the School of Biotechnology at the KTH Royal Institute of Technology. She says The Antibody Registry (http://antibodyregistry.org) assigns stable, unique identifiers (called Research Resource Identifiers, or RRIDs) to antibodies, which protects their identification should the companies that provide antibodies undergo mergers, acquisitions, or brand-name changes. Even antibodies under development (i.e., non-commercial antibodies) are assigned RRIDs in the Antibody Registry, says Alm. Assigning an RRID assures a product is “machine readable,” says Errington, which is important to track a product’s use and utility in an experiment. According to those who run the Antibody Registry, in 2017 more than 1% of papers in biomedicine (approximately 1700 papers) published RRIDs for reagents, up from fewer than 0.02% of papers in 2014 (22).
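One practical benefit of that machine readability is that RRIDs can be pulled out of methods text automatically. The short sketch below shows the idea with a regular expression; the text excerpt is invented, though the RRID syntax (an “RRID:” tag plus a registry prefix such as AB_ for antibodies or CVCL_ for cell lines) follows the Resource Identification Initiative convention.

```python
import re

# Invented methods-section excerpt using real RRID syntax
methods_text = (
    "Cells were stained with an anti-beta-actin antibody "
    "(RRID:AB_476744), and cultured HEK293 cells (RRID:CVCL_0045) "
    "were lysed prior to blotting."
)

# Registry prefixes: AB_ (Antibody Registry), CVCL_ (Cellosaurus cell lines),
# SCR_ (software tools)
rrid_pattern = re.compile(r"RRID:\s*((?:AB|CVCL|SCR)_[A-Za-z0-9]+)")
print(rrid_pattern.findall(methods_text))  # ['AB_476744', 'CVCL_0045']
```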
The publication Nature requires that antibodies included in experiments be identified by genus and species and that authors list the citation, catalog number, and clone number for antibodies used in investigations (23). It also suggests adding information about the validation of an antibody, which can be obtained through portals such as Antibodypedia, 1DegreeBio, and Biocompare. Other sites, such as the Addgene Vector Database, allow users to search for expression vectors in published studies. Cell Press implemented STAR Methods, a set of reporting guidelines, in 2016. Cell references Nature’s requirements, as well as the ARRIVE guidelines from the National Centre for the Replacement, Refinement, and Reduction of Animals in Research, which were first published in PLoS Biology (24). Another reproducibility initiative is the Center for Open Science’s Transparency and Openness Promotion (TOP) guidelines, which seek to encourage “shared standards for open practices across journals” (25).
Antibodypedia is a database that allows antibody manufacturers to submit validation data on antibodies, inviting them to divulge information on antibody performance in different applications using the “pillars” of antibody validation first described by Uhlen et al., which help researchers and editors determine whether an antibody is properly validated and appropriate for a specific experiment. The thought is that by using at least one of these parameters measuring antibody specificity, researchers can more accurately predict antibody performance and the reproducibility of antibody function (26). One of the pillars described by Uhlen et al. is an orthogonal strategy, in which an antibody-based method is compared with an antibody-independent method: to verify antibody specificity, the expression level of the antibody’s target protein detected by a specific application (e.g., Western blot) is compared with the level of the target protein identified using mass spectrometry.
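As a rough illustration of the orthogonal pillar, one can check whether the antibody-derived signal tracks an antibody-independent measurement of the same target across samples. The sketch below uses invented numbers; a strong correlation supports, but does not by itself prove, antibody specificity.

```python
import numpy as np

# Invented paired measurements of one target protein across five cell lines
wb_signal = np.array([0.9, 2.1, 4.0, 7.8, 15.5])     # quantified Western blot, a.u.
ms_abundance = np.array([1.0, 2.0, 4.2, 8.1, 16.0])  # targeted MS, fmol/ug lysate

# Correlate on a log scale so fold-changes, not absolute units, drive the comparison
r = np.corrcoef(np.log(wb_signal), np.log(ms_abundance))[0, 1]
print(f"log-log Pearson r = {r:.3f}")  # values near 1 support specificity
```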
Antibody supplier CST says it defines the models to be used to validate an antibody in tandem with the selection of new targets for antibody development. This information includes the most critical applications for which the antibody should be used, which cell lines and tissues express high or low levels of the protein in question, the feasibility and availability of knockdown or knockout reagents, and what orthogonal methods would help validate a new antibody. CST adds that if an appropriate validation system is not known while an antibody is under development, the company retroactively validates the product when a method becomes available. CST says that besides being one of the few companies that validates its products, it is also one of the few that prefers to sell reagents by dilution instead of concentration, because “changes in assay performance are often specific to a single assay instead of antibody performance overall.”
CST admits that as its portfolio of polyclonal antibodies has grown, so has the burden of generating consistent product lot to lot, a main factor that drove the company’s decision to move toward rabbit mAbs, which it says are less variable. However, the company still sells polyclonal antibodies because they are inexpensive, easier to generate, and still preferred by many scientists. The company says this preference is likely due to the fact that polyclonal products are already “highly cited in the literature.” Lachmi concurs that researchers rely on literature sources (typically through PubMed, she says) for reagent identification and selection. “Market studies have demonstrated that product use in journal publications is the number one attribute that drives the purchase of life-science reagents by researchers,” she remarks.
Some of the latest techniques to validate antibodies include using mass spectrometry, peptide arrays, and knockout cell lines, says Hamilton. Using knockout cell lines produced with clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 and Horizon Discovery’s library of haploid human cell lines, Abcam has validated more than 1000 antibodies. Abcam’s push to authenticate antibodies, dubbed the Knockout Validation Initiative, began at the end of 2015 and is still ongoing.
The resale of antibodies has become a particular concern of late, especially when a researcher encounters an antibody that does not work as intended, or is looking for two antibodies that bind to different parts of a target, notes Alm. It would be most helpful if antibody producers shared target sequences or provided details about epitope-mapping studies, says Alm, but this information is typically not provided. She says the compare function in Antibodypedia can usually help researchers tease out when an antibody is the same as one they have already tried but is being resold by another company. She adds that when identical images are presented for antibodies with different names in the portal, a case of antibody reselling is likely at play. CST comments that product clones are sometimes obvious because of the inclusion of a company’s clone name or identifier, but that some suppliers intentionally leave information about clonality obscure.
According to Bandrowski, antibody reselling is a dangerous game, primarily because problems with an original product being resold may not be detected if there is no clear chain of provenance from which the reagent came. “Think of it as the discovery of E. coli on a batch of lettuce from a farm-that lettuce may have entered many products and a recall needs to be complete otherwise people will get sick,” she says. “In the scientific literature, there is no mechanism to create a recall and this cripples all efforts to improve reproducibility in science.”
Even with a properly referenced RRID, supplier changes still confound the source of some products, says Bandrowski, mostly because people hold onto their products in deep freezers. She recalls an instance with a Chemicon product that was listed with its RRID in a paper, but was cited in different ways by other researchers referencing the paper. “By this time, Chemicon had been part of Millipore for over eight years and Millipore was transitioning between EMD and Sigma. Another author using the same product cited the company name as Chemicon/Millipore and a third [cited it as] EMD Millipore.” She adds, “This is actually one of the easiest cases to deal with because even though we had some confusion about which product came from which company, the smaller companies were bought in whole and their catalog numbers did not change.”
The misidentification of cell lines has likely affected the conclusions of hundreds of thousands of papers, according to researchers (27). An editorial in Nature Cell Biology found that during a five-month period in 2013, only 19% of papers reported cell-line verification studies (28). And, according to numerous studies, approximately one-third of cell lines contain a mixture of species types, are contaminated, or are misidentified (29, 30). Another paper found that only 43% of cell lines were uniquely identifiable (13). As Bandrowski points out, some argue that separate product codes (outside of an RRID) should be generated when the same cell line is sold by two or more companies, “because the quality control steps in various companies can vary drastically-or at least the products at the end of the process are different.”
Agencies such as NIH are asking that cell lines be authenticated by chromosomal analysis or STR profiling before the agency makes any decisions on fund allocation for grants. Many journals are now asking that submissions identify the cell lines used in experiments. Nature guidelines require researchers to clarify whether their cell line is among those that have been commonly misidentified according to the International Cell Line Authentication Committee (ICLAC) database, and to provide justification for using the cell line they have selected. Of the ICLAC database’s 488 listed cell lines, 451 have no known authentic origin (26, 31).
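A journal-mandated check against the ICLAC register can be as simple as a lookup before submission. The sketch below hard-codes three well-documented examples of HeLa-contaminated lines for illustration; a real check would query the full register.

```python
# Three well-documented entries from the ICLAC register of misidentified
# cell lines (the real register lists roughly 450 such lines)
ICLAC_MISIDENTIFIED = {
    "HEP-2": "HeLa (cervical adenocarcinoma)",
    "INT 407": "HeLa (cervical adenocarcinoma)",
    "KB": "HeLa (cervical adenocarcinoma)",
}

def check_cell_line(name: str) -> str:
    actual = ICLAC_MISIDENTIFIED.get(name.upper())
    if actual:
        return f"WARNING: {name} is a known misidentified line; likely identity: {actual}"
    return f"{name}: not on this register (authentication by STR still required)"

print(check_cell_line("HEp-2"))
print(check_cell_line("CHO-K1"))
```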
Surprisingly, some stakeholders from the pharmaceutical industry think that regulatory agencies should focus more on the quality of a finished drug product than on the cell lines used to produce biologics. In a piece that appeared in the journal Biologicals in 2016, representatives from Eli Lilly, Amgen, Biogen, Genentech, Janssen, Bristol-Myers Squibb, and Pfizer wrote that regulatory agencies should not focus as heavily on the “clonality assurance” of cell lines. They wrote that the “genomic plasticity” of immortalized cell lines precludes “absolute genetic homogeneity,” so cell lines should not be referred to as clones; rather, cell lines are populations of cells that are “clonally derived” (32).
The representatives from pharma explained that when cells are grown in culture, genetic and phenotypic changes occur as a result of drift, regardless of where the precursor cells came from. The drift can occur even in the absence of any contamination. The ability of these cells to undergo genetic change is what allows them to accept transgenes via genetic engineering and adapt to changing process conditions, the authors argued. They noted that cell lines cannot be adequately controlled, and that genetic heterogeneity in cell lines has not been measured by most cell-line vendors. Furthermore, they pointed out that many vaccines originate from non-clonal cell lines; despite this, vaccines have a good track record of safe and effective use. While the authors acknowledged that a clonally-derived cell line can reduce the resulting heterogeneity of a cell population, they emphasized that health authorities should put more of a focus on the purity and stability of the final proteins produced from the cell lines, rather than on the cell lines themselves.
However, because genetic changes to cell lines can occur simply as a result of repeated culturing (15), Freedman counters that cell-line authentication studies should be performed periodically over the course of an experiment. Errington notes that it is also valuable to measure the flexibility of raw materials, that is, their ability to undergo small changes while supporting the same experimental outcome.
GBSI’s Freedman, an outspoken advocate of reagent validation, is currently working with stakeholders (including members of the industry, academic leaders, funders, and journal editors) to develop antibody validation guidelines. He says, “The producers engaged with our initiative are committed to assuring that quality antibodies are made available and developing new standards for quality assurance/quality control [QA/QC],” and shares that some biopharma players have already developed in-house QA/QC systems for the selection of cell lines.
The National Institute of Standards and Technology (NIST) launched a project in February 2017 to characterize the mouse cell lines used in the biomanufacturing of recombinant proteins (33). The pattern of short tandem repeat (STR) markers in a sample is specific to an individual mouse cell line within a species, and NIST’s new assay is able to measure it. NIST was granted a patent for this assay, which it says is the first of its kind for mouse cell lines (34).
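STR-based authentication, whether for human lines under ANSI/ATCC ASN-0002 or for mouse lines with NIST’s new markers, ultimately reduces to comparing allele calls between a query profile and a reference profile. A minimal sketch of the widely used Tanabe percent-match calculation follows; the marker names and allele values are hypothetical.

```python
def tanabe_match(profile_a: dict, profile_b: dict) -> float:
    """Percent match between two STR profiles (Tanabe algorithm).

    Each profile maps an STR marker name to the set of allele calls
    observed at that marker. A match of >= 80% at shared markers is a
    common authentication threshold for human lines (ANSI/ATCC ASN-0002).
    """
    shared_markers = profile_a.keys() & profile_b.keys()
    shared = sum(len(profile_a[m] & profile_b[m]) for m in shared_markers)
    total = sum(len(profile_a[m]) + len(profile_b[m]) for m in shared_markers)
    return 100.0 * 2 * shared / total if total else 0.0

# Hypothetical allele calls at three markers
query = {"M1": {16, 17}, "M2": {20}, "M3": {12, 15}}
reference = {"M1": {16, 17}, "M2": {20, 21}, "M3": {12, 15}}
print(f"{tanabe_match(query, reference):.1f}% match")  # 90.9% -> likely the same line
```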
The Medicines and Healthcare products Regulatory Agency (MHRA) in the United Kingdom is taking validation for stem cells into its own hands, announcing in February 2017 the launch of “regulator ready” stem cells for use in clinical development. The cell lines, released by the not-for-profit UK Stem Cell Bank (UKSCB) at the National Institute for Biological Standards and Control, will come with a certificate of analysis. UKSCB is also working on including a “starting materials dossier” for each of its cell lines to further inform researchers (35).
It’s clear that standardization, coupled with public funding for research into manufacturing best practices (such as the initiatives launched by NIST), can help shed light on the performance of a process by unearthing details about starting materials. Reputable suppliers should also support this approach, says CST: “Validating and optimizing a reagent prior to use should be a requirement for academic training, publication, and funding.”
Although there are ongoing efforts to provide open access to certain experimental data published in scientific journals (the Gates Foundation; the Wellcome Trust; and various preprint servers, such as OSF Preprints, arXiv, bioRxiv, and PeerJ), companies are also beginning to emerge that seek to bolster protocol transparency. Ultimately, access to protocols in high-profile journals may help drive reagent selection and/or purchasing decisions by researchers in the industry, but because of paywalls, much of this information remains under lock and key. As Alm tells this publication, the success of an antibody in particular depends on its application and the context in which the antibody will be used.
To search for how reagents are used in a specific context, for a precise biomedical application, academics can use online tools such as Bioz. Lachmi argues that researcher authentication of every reagent is impractical; instead, researchers need “structured scientific article data to bring to the surface unbiased reagent and assay ratings.” A sample search of antibody vendors in Bioz, for example, yields 37 suppliers, although two of the oft-mentioned suppliers located in the United States (Sigma-Aldrich and EMD Millipore) have since merged into one company (MilliporeSigma) that sells under two brands. The suppliers are organized by category and assigned an aggregated rating ranging from 1 (concern) to 5 (very good), calculated by an algorithm governed by nine parameters; these include how recent a paper is, its protocol relevance, and the impact factor of the journal in which the reagent was mentioned.
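Bioz’s nine-parameter algorithm is proprietary, but the general shape of such a rating, a weighted aggregation in which newer, more protocol-relevant, higher-impact mentions count for more, can be sketched as follows. Everything here (the weights, the decay, the reduced parameter set) is an invented illustration, not the actual Bioz formula.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    years_old: float           # age of the citing paper
    protocol_relevance: float  # 0-1: how central the reagent is to the protocol
    journal_impact: float      # impact factor of the citing journal

def aggregate_rating(mentions, max_impact=50.0):
    """Invented weighted rating on Bioz's 1 (concern) to 5 (very good) scale."""
    if not mentions:
        return None
    per_mention = []
    for m in mentions:
        recency = 1.0 / (1.0 + m.years_old)               # newer papers weigh more
        impact = min(m.journal_impact / max_impact, 1.0)  # cap very high impact factors
        per_mention.append((recency + m.protocol_relevance + impact) / 3.0)
    return 1.0 + 4.0 * sum(per_mention) / len(per_mention)  # map [0,1] -> [1,5]

rating = aggregate_rating([Mention(1.0, 0.9, 30.0), Mention(4.0, 0.6, 8.0)])
print(f"{rating:.2f}")  # ~2.97 for these two invented mentions
```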
Invitrogen, Abcam, Santa Cruz Biotechnology, Jackson ImmunoResearch Laboratories, CST, Covance, Life Technologies, Aves Labs, and GeneTex are among the 10 most highly rated companies in the antibody space, according to a Bioz search. Although some smaller antibody vendors appear in the list (albeit with mostly lower aggregated ratings), the largest vendors appear to dominate, a fact that is not surprising when one considers that most of the largest vendors actually buy antibodies from smaller vendors and relabel them to beef up their overall antibody portfolio offerings (36).
Lachmi states that the platform pulls information from 26 million life-science articles across 5000 academic journals, and the website already has 250,000 users from 191 countries. Plus, estimates Lachmi, “The number of Bioz users is growing rapidly, at a rate of 10% per week.”
Rather than risk sourcing antibodies from a vendor with which a researcher has no experience, a better option may be for the industry to shift paradigms and use standardized antibodies made in recombinant cells, proposed Andrew Bradbury and Andreas Plückthun in a 2015 Nature article (37). Recombinant antibodies are thought to have greater authentication potential, and this shift in practice could save researchers the costs of having to test expression levels and perform binding assays of purchased reagents. Says Abcam’s Hamilton, “Unlike polyclonal or hybridoma-based methods, recombinant antibodies are defined by a DNA and protein sequence, which means they have excellent batch-to-batch consistency and provide specific and reproducible results.”
But the plan to switch all antibodies to recombinant versions for better antibody quality control, while well-intentioned, would be difficult to enforce and would probably negatively affect smaller reagent biorepositories. In addition, the suggestion that all antibodies currently made with hybridoma technologies be switched to recombinant cell technologies could be risky from a patent-protection perspective; there are patents, such as the Genentech-owned Cabilly patents, that protect specific antibodies, methods of antibody manufacture, and the techniques necessary to isolate and purify antibodies. Changing the way antibodies are made to better serve potential clients could also make an antibody supplier vulnerable to patent-infringement claims (38). Regardless of clonality or the antibody production method chosen (i.e., recombinant vs. hybridoma), CST says the bulk of the company’s activities would still be ensuring that each new lot performs identically to the previous lot.
Instead of changing the composition of all of the reagents currently in commerce, GBSI supports a simpler approach-encouraging reagent vendors to refrain from selling any products that aren’t validated. This change of paradigm, however, must be supported by all industry stakeholders to be truly effective, insists Freedman.
Reproducibility is crucial to the advancement of the field of bioprocessing, especially when it comes to testing new methods or technologies that could inform more cost-effective manufacturing techniques. “What I find most amazing in all of this is that biopharma loses millions of dollars in dead-end R&D and has not, thus far, stepped up to the plate to help NIH and journals improve reproducibility in science,” comments Bandrowski. While unknowns about reagent origin and composition can dramatically skew reproducibility attempts, Freedman reiterates that flaws in study design, laboratory protocols, and data analysis/reporting can also render an experiment irreproducible.
In a “publish-or-perish” world, there is not as much pressure (or incentive) to reproduce previous experiments. The focus of many new studies is to explore a brand new concept, and there is also little interest in funding replication studies. Verification studies, however, could test processes that are considered “the gold standard” in the bioprocessing industry and could uncover significant economic inefficiencies. The ability to replicate may be especially important for the optimization of end-to-end continuous processing for biologics, as much of the new published material on continuous processing for biologics focuses on finding the technologies, the sequence of processing steps, and the combination of reagents that will result in the highest yield and purity possible. Errington says he thinks an initiative for informing bioprocessing techniques, one similar to the Reproducibility Project: Cancer Biology, would be “extremely beneficial,” adding, “I’d be happy to have discussions with others to organize a similar project.” Errington says that another useful idea would be to “conduct a many-lab study to test the reliability of specific bioprocessing techniques” and to measure how certain techniques vary across facilities.
It is also possible, of course, that replication studies could be the source of new ideas, concludes Bandrowski. “The results that are not robust to replication may not be incorrect, but highly dependent on something that is not being directly controlled-some of the attempts to reproduce a study, when published, can lead to important discoveries.” She quips, “With a good, well-controlled experiment, if we still don’t get the same answer as another lab, perhaps there is another force at play-and that is where the biology gets interesting.”
References
1. S.J. Morrison, eLife 3:e03981 (2014), DOI: 10.7554/eLife.03981.
2. L.P. Freedman, I.M. Cockburn, and T.S. Simcoe, PLoS Biol. 13 (6), e1002165 (2015), DOI: 10.1371/journal.pbio.1002165.
3. F. Prinz, T. Schlange, and K. Asadullah, Nat. Rev. Drug Disc. 10, 712 (September 2011).
4. C.G. Begley, Nature 483, 531–533 (March 29, 2012).
5. M. Baker, Nature 533, 452–454 (May 26, 2016).
6. O. Karnieli et al., Cytotherapy 19 (2), 155–169 (February 2017).
7. A. Planinc et al., Anal. Chim. Acta 921, 13–27 (May 19, 2016).
8. M.I. Sadowski, C. Grant, and T.S. Fell, Trends Biotechnol. 34 (3), 214–227 (March 2016).
9. T.M. Errington et al., eLife 3:e04333 (2014), DOI: 10.7554/eLife.04333.
10. B.A. Nosek and T.M. Errington, eLife 6:e23383 (2017), DOI: 10.7554/eLife.23383.
11. Open Science Collaboration, Science 349 (6251), aac4716 (Aug. 28, 2015).
12. C.G. Begley, Nature 497, 433–434 (May 23, 2013).
13. N.A. Vasilevsky et al., PeerJ 1:e148 (2013), DOI: 10.7717/peerj.148.
14. G.A. Michaud, Nat. Biotechnol. 21 (12), 1509–1512 (Dec. 2003).
15. L. Freedman, G. Venugopalan, and R. Wisman, “Reproducibility2020: Progress and Priorities,” bioRxiv, https://doi.org/10.1101/109017.
16. National Institutes of Health, “Principles and Guidelines for Reporting Preclinical Research,” www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research, accessed Apr. 3, 2017.
17. M. Lauer, “Authentication of Key Biological and/or Chemical Resources in NIH Grant Applications,” https://nexus.od.nih.gov/all/2016/01/29/authentication-of-key-biological-andor-chemical-resources-in-nih-grant-applications/, accessed Apr. 3, 2017.
18. A.E. Nussbaum, “ASA Advice for Funding Agencies on Reproducible Research?,” http://community.amstat.org/blogs/amy-nussbaum/2016/10/12/asa-advice-for-funding-agencies-on-reproducible-research, accessed Apr. 3, 2017.
19. F.S. Collins and L.A. Tabak, Nature 505, 612–613 (Jan. 30, 2014).
20. PubMed Commons, www.ncbi.nlm.nih.gov/pubmedcommons/, accessed Apr. 3, 2017.
21. National Institutes of Health, “Resource Identification Portal,” https://scicrunch.org/resources, accessed Apr. 3, 2017.
22. Antibody Registry, Tweet, https://twitter.com/antibodyregistr/status/837089327180050432, accessed Apr. 3, 2017.
23. Nature Publishing Group, “Reporting Checklist for Life Sciences Articles,” Nature.com, www.nature.com/authors/policies/checklist.pdf, accessed Apr. 3, 2017.
24. C. Kilkenny et al., PLoS Biol. 8 (6), e1000412 (June 2010).
25. B.A. Nosek et al., “Transparency and Openness Promotion (TOP) Guidelines,” (Feb. 8, 2017), http://doi.org/10.1126/science.aab2374, accessed Apr. 3, 2017.
26. M. Uhlen et al., Nat. Methods 13, 823–827 (2016).
27. J. Neimark, Science 347 (6225), 938–940 (Feb. 27, 2015).
28. Editorial, “An update on data reporting standards,” Nat. Cell Biol. 16, 385 (2014).
29. C. Korch et al., Gynecol. Oncol. 127 (1), 241–248 (October 2012).
30. P. Hughes et al., BioTechniques 43, 575–586 (November 2007).
31. International Cell Line Authentication Committee, “Database of Cross-contaminated or Misidentified Cell Lines,” http://iclac.org/databases/cross-contaminations/, accessed Apr. 3, 2017.
32. C. Frye et al., Biologicals 44, 117–122 (2016).
33. NIST, “NIST Patents First DNA Method to Authenticate Mouse Cell Lines,” www.nist.gov/news-events/news/2017/02/nist-patents-first-dna-method-authenticate-mouse-cell-lines, accessed Apr. 3, 2017.
34. J.L. Almeida et al., “Mouse cell line authentication,” US Patent 9,556,482, Jan. 31, 2017.
35. Medicines and Healthcare products Regulatory Agency, Press Release, “‘Regulator Ready’ Stem Cell Lines Now Available for Clinical Development,” www.gov.uk/government/news/regulator-ready-stem-cell-lines-now-available-for-clinical-development, accessed Apr. 3, 2017.
36. M. Baker, Nature 521, 274–276 (May 21, 2015).
37. A. Bradbury and A. Plückthun, Nature 518, 27–29 (Feb. 5, 2015).
38. R. Hernandez, “A Call for Antibody Quality Control,” BioPharmInternational.com, www.biopharminternational.com/call-antibody-quality-control, accessed Apr. 3, 2017.
Article Details
BioPharm International
Vol. 40, No. 5
Pages: 14–21, 29
Citation: When referring to this article, please cite it as R. Hernandez, "Enhancing Bioprocessing Efficiencies through Run Reproducibility," BioPharm International 40 (5) 2017.