Performance Validation recently featured an introductory post on risk-based computer system validation, an approach that focuses the validation effort on critical business and regulatory requirements and reduces excessive testing and redundancy. A fundamental aspect of this approach is leveraging the software vendor's functional testing, which permits the validation effort to forgo most functional (OQ) testing and home in on user acceptance and/or PQ testing. As a reference, one may look to ISPE's GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems for more information. Yet most guidance is not so specific that it instructs how to scale the validation for a risk-based approach. There are a couple of practical considerations and ways to do this.
We stated that a well-executed computer system validation project starts with planning. A formal planning or strategy deliverable, such as a Validation Plan, should drive the process of determining what is and is not in scope of the validation project. Just as important, one must justify the scope. A validation plan can say "we are not testing X," but it is important to document why a function is not tested. In most cases, the reason for not testing will fall into one or more of three categories: the software functionality has been tested by the software vendor, it is not regulatory critical, and/or it is not business critical.
Consider an eQMS (electronic quality management system) as an example. An organization has audited the software vendor and determined that the vendor has a robust QMS, a defined SDLC (software development lifecycle), procedures that are actually followed, and documented evidence of functional testing. According to industry standards such as GAMP 5, that organization can forgo functional testing and leverage the vendor's testing. Depending on organizational requirements, you may still want to document the functional requirements and trace them to the vendor's testing, as sketched below. Either way, leveraging vendor testing is a valuable tool in executing a risk-based approach.
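To make that tracing concrete, here is a minimal sketch in Python of what a requirements-to-vendor-testing trace might look like. The requirement IDs, descriptions, and vendor test references are all hypothetical; in practice they would come from the organization's requirements specification and the vendor's test reports.

```python
from dataclasses import dataclass

# Hypothetical trace entries mapping functional requirements to the
# vendor's documented test evidence (e.g., test case IDs from the
# vendor's OQ report). All IDs below are invented for illustration.
@dataclass
class TraceEntry:
    req_id: str
    description: str
    vendor_test_ref: str

trace_matrix = [
    TraceEntry("FR-001", "System enforces unique user logins", "VND-TC-017"),
    TraceEntry("FR-002", "Audit trail records all record changes", "VND-TC-023"),
    TraceEntry("FR-003", "E-signatures require re-authentication", "VND-TC-031"),
]

# A simple completeness check: every functional requirement should point
# at some documented vendor test before in-house OQ testing is waived.
untraced = [e.req_id for e in trace_matrix if not e.vendor_test_ref]
print(f"{len(trace_matrix)} requirements traced; untraced: {untraced or 'none'}")
```

Even a lightweight trace like this gives an auditor a clear line from each requirement to the evidence that justifies forgoing in-house functional testing.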
Using the eQMS example, imagine an off-the-shelf function that the organization does not use. In general, that function does not need to be validated. One may still want to include it in the requirements if, for example, it may be used in the future. Either way, it is advisable to acknowledge the function's presence and note that it is out of scope, so the omission does not look like an oversight to an auditor. The same reasoning applies to business and regulatory requirements: if a function exists but no regulatory or business requirements surround it, one may be able to mark it as out of scope (no testing required). Testing efforts can also be scaled based on criticality and regulatory applicability. The decision on how and what to test needs to be a collaboration between quality assurance, the business users, and any Information Technology stakeholders / subject matter experts. For example, testing a function to its full potential may require multiple scenarios, yet the function may carry low business and regulatory risk. In that case, the project team may decide to test only one scenario, perhaps the most likely one, during PQ. A sketch of this scaling logic follows.
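As an illustration of that decision process, the Python sketch below captures one way the scaling logic might be expressed: functions with no business or regulatory impact are marked out of scope, low-impact functions get a single most-likely PQ scenario, and everything else gets full testing. The two-factor rating scheme and the example functions are assumptions for illustration, not a prescribed standard.

```python
# Simplified, hypothetical scoping rule: test depth is driven by the
# higher of business and regulatory criticality ("none", "low", "high").
LEVELS = {"none": 0, "low": 1, "high": 2}

def test_approach(business: str, regulatory: str) -> str:
    risk = max(LEVELS[business], LEVELS[regulatory])
    if risk == 0:
        return "out of scope (document justification in the validation plan)"
    if risk == 1:
        return "single most-likely scenario in PQ"
    return "full functional/PQ testing, multiple scenarios"

# Invented example functions from the eQMS scenario:
functions = {
    "CAPA workflow": ("high", "high"),
    "Report color themes": ("low", "none"),
    "Unused training module": ("none", "none"),
}

for name, (biz, reg) in functions.items():
    print(f"{name}: {test_approach(biz, reg)}")
```

The real value of writing the rule down, in whatever form, is that the quality, business, and IT stakeholders agree on it once and then apply it consistently across the requirement set.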
A risk-based approach to computer system validation is a great way to streamline a validation effort while maintaining quality. It is essential that the validation plan clearly documents what is and is not subject to the risk-based portion of the approach. As always, the "why" is most important: understanding, and being able to defend, why something is not being tested (or why testing is scaled back) gives the approach credibility.