Data Integrity Pitfalls

BioPharm International, May 2023
Volume 36, Issue 05
Pages: 28-29

Avoiding missteps in data integrity is contingent on the development of a holistic data integrity approach.

kentoh/Stock.Adobe.com

Given the rapid advances being made in technologies such as cell and gene therapies, personalized vaccines, and oral biologics, odds are high that there's some hot-ticket item demanding one's attention at any given time. While it would be foolish to detract from the monumental discoveries being made in the biopharma field, new technologies should nevertheless be approached with some degree of caution. Innovation is crucial for the benefit of patients and industry alike, but many start-ups, often brimming with ideas and ultimately unrealized potential, have been felled by trying to sprint before they can even stand. Leadership cannot afford to split its attention if the nuts and bolts of an organization aren't properly fitted.

Nowhere is this more evident than in data integrity, which measures the accuracy, completeness, and consistency of a company's data. It certainly doesn't have the je ne sais quoi of a monumental technological breakthrough, but it is nonetheless foundational to the day-to-day operations and overall success of a company. Perhaps Big Pharma companies can afford the occasional $8.9 billion misstep (1); but whether you're Amazon (2) or Santander (3), regulatory fines can easily balloon to a painful price point.

Particularly in an industry as heavily regulated as pharma, where data silos, compliance errors, and security breaches must be taken gravely seriously because of their direct impact on patient outcomes, maintaining proper data integrity is paramount to a company's success. In this vein, this article explores the common pitfalls companies run into, the consequences thereof, and how companies can structure their data integrity programs.

Firm Foundations

At the crux of good data integrity are software and data management tools, which are an immense boon when working with the voluminous amounts of data typical of the pharma industry. However, one must consider the types of data in play before building out a data integrity process. Paige Kane, member of the Pharmaceutical Regulatory Science Team at TU Dublin, and Garry Wright, European laboratory compliance specialist at Agilent Technologies, spoke at length on this topic in an episode of Pharmaceutical Technology®'s Drug Digest video series (4).

“We need to understand what our processes are, what data we are trying to capture, and make sure that we have the right tool for that application,” said Kane. “And when we do that, we need to take a risk-based approach and understand where the risks are, and where the operators are intersecting with that and where we can be error-proof. Leveraging automation is a great way to do that.”

Per Kane, companies that are looking to tighten up their operations should consider the purpose of the data they are collecting and why they need it. Superfluous data can result in greater system and operator burden, while missing requisite data fundamentally undermines the purpose of the system.
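
As a loose illustration of that principle, the sketch below checks a captured record against a defined set of required and permitted fields, flagging both gaps and superfluous entries. The field names are hypothetical and not drawn from any real system.

```python
# A minimal sketch of purpose-driven data capture: each record is checked
# against the fields the process actually needs. All field names here are
# hypothetical examples, not taken from any real system.
REQUIRED_FIELDS = {"batch_id", "analyst", "timestamp", "result"}
ALLOWED_FIELDS = REQUIRED_FIELDS | {"instrument_id", "comments"}

def check_record(record: dict) -> list[str]:
    """Return a list of data-capture problems found in one record."""
    problems = [f"missing required field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    problems += [f"superfluous field (adds burden): {f}" for f in record.keys() - ALLOWED_FIELDS]
    return problems

# Example: missing 'timestamp' and 'result', and carrying a field no one needs.
print(check_record({"batch_id": "B-1042", "analyst": "jdoe", "shoe_size": 9}))
```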

However, this doesn't just apply to collection methods; it also affects who can interface with the data, and in what ways. Wright noted that some of the most common pitfalls in data integrity have been plaguing the industry for as long as the discipline has existed.

“We’ve been talking about data integrity for the last 10 years, but there’s still a lot of repeated data integrity violations out there,” said Wright. “Some of them are very basic, fundamental requirements of using software in order to protect data.”

Some of the most common problems Wright listed in the interview include the following (a rough illustrative sketch of these controls in code appears after the list):

Deliberate use of generic log-ons. This obfuscates which users accessed a particular system and, when credentials are shared broadly, makes it difficult to trace the origin of errors or cases of intentional falsification. For a data integrity system to succeed, it is crucial that each operator have a unique, trackable login.

Unnecessarily elevated access. Often, scientists and technicians are given more access to systems than their roles require. This can create a conflict of interest in which a bad actor approves a failed batch, or alters or deletes failing test data to avoid investigations or re-tests.

Lacking audit trails. Many companies have received warning letters because they chose not to activate certain audit trail functionality, making it impossible to track changes and properly oversee a system. Similarly, even when audit trails are activated, users who shouldn't have access to them often do, allowing bad actors to compromise data integrity.
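
To make these three controls concrete, here is a minimal sketch with entirely hypothetical roles, permissions, and record details; it is not modeled on any specific vendor's software. It ties every action to a unique named user, limits actions by role, and logs even denied attempts:

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping; real systems are far more granular.
PERMISSIONS = {
    "analyst": {"record_result"},
    "reviewer": {"record_result", "approve_batch"},
    "quality": {"approve_batch", "view_audit_trail"},
}

audit_trail = []  # append-only in a real system; a plain list here for brevity

def log_action(user: str, action: str, detail: str) -> None:
    """Attribute every action, allowed or denied, to a unique named user."""
    audit_trail.append({
        "user": user,
        "action": action,
        "detail": detail,
        "time": datetime.now(timezone.utc).isoformat(),
    })

def perform(user: str, role: str, action: str, detail: str) -> None:
    """Refuse any action outside the user's role, but still log the attempt."""
    if action not in PERMISSIONS.get(role, set()):
        log_action(user, f"DENIED {action}", detail)
        raise PermissionError(f"{user} ({role}) may not {action}")
    log_action(user, action, detail)

perform("jdoe", "analyst", "record_result", "assay A123 = 98.7%")
# An analyst approving a batch would be denied, and the attempt would be logged:
# perform("jdoe", "analyst", "approve_batch", "batch B-1042")
```

The point of the sketch is the shape rather than the code: no shared accounts, no role exceeding its remit, and no action, successful or not, that fails to leave a trace.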

While implementing a wholesale data integrity program can seem daunting, Kane was quick to note that there are many guidance documents and resources available that can help companies get over initial hurdles. What is pivotal is that company leadership act in good faith and invest in the types of processes and technologies that keep pharmaceutical data honest and intact—in short, enact a full-scale data governance policy.

“It’s more than just the audit trails,” said Kane. “There needs to be a holistic view and approach as to how we’re going to manage our data so that we can put forth that holistic picture to our inspectors and customers.”

The Human Element

Inherent to the discussion of data integrity is the discussion of automation, and within that the de-emphasis of human intervention in various tasks throughout the pharma lifecycle. Human error is a major concern for pharma companies, as one mistake (or deliberate falsification) can have severe knock-on effects for the patients ultimately receiving the compromised medicines. Automated processes are a fantastic alternative, particularly for precise, time-intensive tasks that need to be repeated thousands of times a day; the ubiquity of automated production lines is a prime example of this principle in action.

However, it is crucial that companies understand that "de-emphasis" does not mean "elimination." It's common corporate philosophy to have workers perform the highest-level function available to them: a CEO isn't typically charged with data entry, even if they're perfectly capable of it, because there are higher-level functions for which they are uniquely qualified. While automated services take on tasks that would otherwise be performed by humans, that does not mean those people are eliminated; it means that their roles must shift.

Claudio Fayad, vice-president of Technology, Process Systems, and Solutions at Emerson, tackled this misperception in a video interview with Pharmaceutical Technology (5).

“We’re not doing automation to eliminate humans—we’re eliminating human errors by upscaling them, by augmenting their capacity,” said Fayad. “As we can automate the more repetitive tasks, as we can give them a platform that is autonomous at the end but is giving them situational awareness all of the time, they are free now to look at the most important things and intervene when they need to intervene.”

Consequently, it is pivotal that humans maintain oversight over automated tasks. In terms of data integrity, automated digital tools are fantastic for standardizing data collection and maintenance, but they are not a solution in and of themselves. There is consensus among experts that a proper data integrity process will conduct regular audits, consult with external regulatory experts, and implement crucial oversight systems operated by trained professionals; this, in turn, comes back to a proper data governance policy.

In practice, this could manifest as having an operator review random sections of internal log data generated by a regulated system for inconsistencies. If there are mislabeled, incomplete, or missing logs, that operator can take the necessary actions to find out what caused the problem, determine its potential impact, and work with colleagues to devise an appropriate solution. In this sense, even though humans are not given edit access, which minimizes the potential for human error, they provide oversight as a complementary force that is vital for proper software utilization. In other words, while software provides the means to data integrity, the humans interacting with it must engage in proper data governance to realize its benefits.
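
As a loose illustration, and assuming an invented log format (sequence numbers plus a handful of fields; no real system's export is implied), an automated helper for that kind of spot check might look like this:

```python
import random

# Hypothetical log records; a real review would read these from an audit
# trail or batch record export rather than a hard-coded list.
logs = [
    {"seq": 1, "batch_id": "B-1042", "result": "98.7%", "analyst": "jdoe"},
    {"seq": 2, "batch_id": "B-1043", "result": "", "analyst": "asmith"},   # incomplete
    {"seq": 4, "batch_id": "B-1045", "result": "97.1%", "analyst": "jdoe"},  # seq 3 missing
]

def spot_check(records: list[dict], sample_size: int = 2) -> list[str]:
    """Flag incomplete entries in a random sample and gaps in sequence numbers."""
    findings = []
    for rec in random.sample(records, min(sample_size, len(records))):
        empty = [k for k, v in rec.items() if v == ""]
        if empty:
            findings.append(f"seq {rec['seq']}: incomplete fields {empty}")
    seqs = sorted(r["seq"] for r in records)
    gaps = [n for n in range(seqs[0], seqs[-1]) if n not in seqs]
    if gaps:
        findings.append(f"missing log entries at seq {gaps}")
    return findings

# The operator investigates whatever the sweep surfaces.
print(spot_check(logs))
```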

“Every software platform can have different functionality and different controls, and different companies use different software in different ways,” said Wright. “Having that high-level governance strategy is your company’s commitment to how you’re going to control data within that individual software platform. It’s a starting point for the people who are going to configure and validate those systems, because they know what functionality they need to activate and what the best options are to control data.”
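
Purely as an illustration of that governance-to-configuration handoff, the sketch below encodes a hypothetical policy (the platform names and settings are invented) and compares a system's actual configuration against it before go-live:

```python
# Hypothetical governance checklist: the controls each software platform
# must have activated before it handles regulated data. All names invented.
GOVERNANCE_POLICY = {
    "chromatography_cds": {"unique_logins": True, "audit_trail": True, "delete_raw_data": False},
    "lims": {"unique_logins": True, "audit_trail": True, "delete_raw_data": False},
}

def validate_configuration(platform: str, config: dict) -> list[str]:
    """Report every setting that deviates from the governance policy."""
    policy = GOVERNANCE_POLICY[platform]
    return [
        f"{key}: policy requires {expected}, found {config.get(key)}"
        for key, expected in policy.items()
        if config.get(key) != expected
    ]

# Example: a LIMS deployed with its audit trail switched off gets flagged.
print(validate_configuration("lims", {"unique_logins": True, "audit_trail": False}))
```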

References

1. Playter, G. J&J Proposes $8.9 Billion Payment in Talc Powder Litigation. PharmTech.com, April 7, 2023.

2. BBC. Amazon Hit With $886m Fine for Alleged Data Law Breach. bbc.com, July 30, 2021.

3. Shome, A. FCA Fines Santander UK £108 Million for Prolonged AML Breaches. financemagnates.com, Sept. 12, 2022.

4. Playter, G. Drug Digest: Unpacking the Science Behind Data Integrity. PharmTech.com, March 3, 2023.

5. Playter, G. Addressing Data Security with Claudio Fayad. PharmTech.com, April 7, 2023.

About the author

Grant Playter is associate editor for BioPharm International.


Citation

When referring to this article, please cite it as Playter, G. Data Integrity Pitfalls. BioPharm International 36 (5) 2023.
