Computer System Validation (CSV) is crucial for any organization that needs to comply with Good Laboratory Practices (GLPs). GLPs are regulatory quality standards for conducting non-clinical laboratory studies that support the development of products for human or animal health. They require that laboratory data be reliable, consistent, and accurate. To meet these requirements, laboratories that are subject to GLPs must validate their computerized systems. In September 2022, the FDA released the draft guidance Computer Software Assurance for Production and Quality System Software (CSA). The draft guidance provides recommendations for implementing risk-based CSA for non-product medical device software. Although the guidance was issued by the Center for Devices and Radiological Health (CDRH), the life science industry consensus is that it can be leveraged by all “GxP” industries and organizations. Naturally, GLP-regulated labs are no exception and can realize great benefits from implementing a CSV process that is in line with the latest CSA concepts.
Validation is the process of establishing that a system meets its intended purpose and performs reliably and consistently. Computerized systems that are used to collect, manage, and report laboratory data must be validated to ensure that the data they produce is reliable, consistent, and accurate. The CSA guidance does not change any of these CSV fundamentals; GxP-applicable computerized systems must still be validated for their intended use.
CSA refocuses the industry’s CSV efforts on risk-based, critical thinking. Instead of emphasizing documentation production, CSA recommends right-sizing each validation based on the system’s GLP applicability and its risk to product quality and patient safety. For example, a system with no GLP impact – perhaps a business information system such as payroll management or HR functions – should not need validation as required by the FDA. The decision not to validate the system should still be documented, though.
GLPs apply to a wide range of laboratory activities, including the generation, collection, processing, and reporting of data. Therefore, any computerized system used to support GLP-regulated activities may need to be validated. Some examples of computerized systems that may need to be validated for GLPs include:
1. Laboratory information management systems (LIMS) – These systems are used to manage laboratory workflows, including sample tracking, data management, and instrument integration.
2. Electronic laboratory notebooks (ELN) – These systems are used to capture and manage experimental data, including protocols, observations, and results.
3. Chromatography data systems (CDS) – These systems are used to collect and process data from chromatography instruments, including HPLC and GC.
4. Spectroscopy data systems – These systems are used to collect and process data from spectroscopy instruments, including IR and UV-Vis.
5. Electronic data capture (EDC) systems – These systems are used to capture and manage study data, including test system information, laboratory results, and adverse findings.
For these systems, the GxP applicability question has been marked “yes.” This is where CSA truly “kicks in.” Rather than approaching every system with the exact same set of validation requirements, the validation response should be tailored, with emphasis placed on assessing the GLP risk associated with the system. A traditional approach to a CSV project might blindly require certain deliverables, artifacts, and/or tasks regardless of the system’s risk, type, or classification. With a risk-based approach like CSA, procedures are developed so that the appropriate deliverables, artifacts, and/or tasks are created based on subjective and objective risk assessment. When we guide our clients through this process, we educate them on the options, alternatives, and possibilities we have seen in the field, using the CSA guidance as a framework. It is important for companies to understand that there are many ways to approach CSV/CSA based on risk. Furthermore, what is “right” or appropriate for one company may be too much or too little for another.
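To make this concrete, here is a minimal sketch of how a risk-based decision might be modeled. The assessment factors, tier names, and logic are hypothetical illustrations, not criteria from the CSA guidance; each organization defines its own assessment criteria in its procedures.

```python
# Illustrative sketch only: the factors, tiers, and logic below are
# hypothetical examples, not criteria prescribed by the FDA CSA guidance.
from dataclasses import dataclass


@dataclass
class SystemAssessment:
    name: str
    glp_applicable: bool    # Does the system support a GLP-regulated activity?
    impacts_raw_data: bool  # Does it create, modify, or store raw study data?
    customization: str      # "out-of-the-box", "configured", or "custom"


def validation_rigor(system: SystemAssessment) -> str:
    """Return an illustrative validation tier driven by the risk assessment."""
    if not system.glp_applicable:
        # No GLP impact (e.g., payroll or HR): no FDA-required validation,
        # but the decision not to validate should still be documented.
        return "none (document the decision)"
    if system.customization == "custom" or system.impacts_raw_data:
        return "high"
    if system.customization == "configured":
        return "medium"
    return "low"


print(validation_rigor(SystemAssessment("Payroll", False, False, "out-of-the-box")))
print(validation_rigor(SystemAssessment("Chromatography data system", True, True, "configured")))
```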
CSV/CSA still follows the basic phases of validation: planning and assessment (risk, 21 CFR Part 11 applicability, data integrity, validation planning), requirements specification (capturing intended use), testing, and reporting. However, the deliverables and artifacts may be combined, absent, or expanded based on system risk. Testing is significantly affected by CSA: how we test, what we test, and how we document testing should all be approached in the context of the system’s risks and GxP applicability.
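As a hypothetical illustration of how deliverables might scale with the risk tiers sketched above (the deliverable names and groupings are examples of what we have seen in the field, not a mandated or complete set):

```python
# Hypothetical right-sizing of deliverables by risk tier; the names and
# groupings below are illustrative, not a required or complete set.
DELIVERABLES_BY_TIER = {
    "low": [
        "Combined validation plan/report",
        "Requirements summary (intended use)",
        "Leveraged vendor test evidence",
    ],
    "medium": [
        "Validation plan",
        "Requirements specification",
        "Risk / Part 11 / data integrity assessment",
        "Targeted functional and workflow testing",
        "Validation summary report",
    ],
    "high": [
        "Validation plan",
        "User and functional requirements specifications",
        "Risk / Part 11 / data integrity assessment",
        "Full functional, configuration, and workflow testing",
        "Requirements traceability matrix",
        "Validation summary report",
    ],
}

print(DELIVERABLES_BY_TIER["medium"])
```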
Arguably the most important aspect of CSA is the idea of taking credit for what has already been tested. We should all be leveraging software vendor/supplier testing, particularly for out-of-the-box functionality. Of course, to effectively implement a risk-based approach to CSA, GLP-regulated organizations should qualify their software vendors, verifying that they indeed have a high-quality Software Development Lifecycle (SDLC) with evidence of testing, issue handling, change control, etc. In the same vein, testing that the organization is already doing – perhaps informal user acceptance testing (UAT), exploratory testing, or unit/development testing – may be used to supplement or augment validation testing. We often say “leverage” when referring to using vendor or development testing for validation testing. All that really means is that we forgo or reduce our traditional validation testing because we already have testing from the vendor or software development team. There is a lot of useful information in the CSA guidance on how to do this effectively, but it is really all about due diligence and assurance that the system meets the organization’s intended use.
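One practical way to record this leveraging is to trace each requirement to whatever evidence satisfies it, whether that is a vendor test, an existing UAT record, or a newly written validation script. The structure and identifiers below are a simplified, hypothetical sketch, not a prescribed format:

```python
# Hypothetical traceability sketch: each requirement maps to the test
# evidence that covers it (vendor, existing internal, or new testing).
from typing import NamedTuple


class TraceEntry(NamedTuple):
    requirement_id: str
    requirement: str
    evidence_source: str  # e.g., "vendor OQ", "internal UAT", "new validation test"
    evidence_ref: str     # document or test case identifier


trace_matrix = [
    TraceEntry("URS-001", "Acquire and store chromatograms with an audit trail",
               "vendor OQ", "Vendor OQ package, section 4.2"),
    TraceEntry("URS-002", "Restrict method changes to authorized analysts",
               "new validation test", "VAL-TC-07"),
    TraceEntry("URS-003", "Export results to the LIMS without alteration",
               "internal UAT", "UAT run 12"),
]

# Simple completeness check: every requirement must point to some evidence.
uncovered = [e.requirement_id for e in trace_matrix if not e.evidence_ref]
assert not uncovered, f"Requirements without evidence: {uncovered}"
```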
Let us look at a chromatography system as an example. An HPLC/GC system usually includes a software component, and together they comprise a computerized system. Often, these systems are used out of the box or with minimal configuration. The vendor may provide an IQ/OQ testing package or make one available for review. After risk assessment and validation planning, the organization may determine that little or no additional functional testing is needed, because the vendor’s testing should cover all the out-of-the-box functionality of the system and provide assurance that it will function as intended. For some organizations, validation testing may consist of running through critical data capture and analysis workflows only. Validation must still demonstrate that the system meets the users’ intended use, so a requirements specification (e.g., a user requirements specification) is still likely the best way to capture this. However, validation testing of those requirements may, for example, point to vendor testing instead of newly written test scripts.
In addition to supporting GLP compliance, a right-sized, risk-based validation approach can help organizations save time and reduce costs. Avoiding unnecessary documentation and duplicate testing frees resources and shortens implementation timelines, which can contribute to faster product development and reduced time-to-market.
In conclusion, organizations that are subject to GLPs must validate their computerized systems to ensure the accuracy, reliability, and consistency of laboratory data. Using the CSA guidance and other risk-based approaches to CSV allows GLP-regulated laboratories to meet those validation requirements more quickly, without sacrificing quality or test coverage.