Audit Trail Review – A Data Integrity Issue

Data integrity demands a great amount of attention in the life science industry. This is truer now than ever, with increased focus from the FDA, the EU, and industry standards on data integrity issues and best practices. Audit trails are required for all FDA / EU regulated computerized systems. They must capture the creation, modification, and deletion of regulated electronic records: who created the record and when, as well as who modified or deleted it, when, and why – as relevant. When a system is validated, the audit trail should be verified to ensure accuracy and that it meets all applicable regulatory and organizational requirements. Once the system is validated and in production, the audit trail should not be forgotten. A formal process for examining the audit trail to ensure data integrity is needed in the regulated environment. Let us consider audit trail review – how to approach it and a few items of interest.

Audit trail review refers to the process of periodically examining an audit trail based on a variety of factors. It is valuable to define the audit trail review based on system risk, as recommended by ISPE in its Records and Data Integrity Guide. Assign a risk level to the system just as one would in any other computerized system risk assessment, using criteria such as impact on patient safety, drug/product efficacy, the quality system, business risk, and complexity/criticality. How often and to what degree the audit trail review occurs can then be assigned. Do include all system stakeholders in the criteria and assessment process, including IT, QA, and business process owners.
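
As a simple illustration, the output of such an assessment might be a mapping from the assessed risk level to a review frequency and depth. The Python sketch below is purely hypothetical – the levels, frequencies, and depths are placeholders for what an organization would define in its own procedures.

    # Hypothetical mapping from assessed system risk level to audit trail
    # review frequency and depth; values are placeholders for what an
    # organization defines in its own procedures.
    REVIEW_SCHEDULE = {
        "high":   ("monthly",   "comprehensive trace of data and metadata"),
        "medium": ("quarterly", "targeted review of critical records"),
        "low":    ("annually",  "spot check"),
    }

    def schedule_review(risk_level):
        frequency, depth = REVIEW_SCHEDULE[risk_level]
        return f"review {frequency}: {depth}"

    print(schedule_review("high"))
    # -> review monthly: comprehensive trace of data and metadata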

It is important to develop procedures and processes for audit trail review, or to incorporate them into a Validation Master Plan and/or Quality Management System. The review itself might be only a spot check for a very low risk system, or it could be a comprehensive analysis and tracing of data and metadata. Metadata is one aspect that should not be overlooked: the audit trail review cannot be adequate (in most cases) if the information that makes the data meaningful (metadata) is not available. This is a time when putting on an investigator or QA “hat” is imperative. Audit trail review should (again, based on risk level) scrutinize reruns and failures in data capture and modification. Procedurally and scientifically, it may be acceptable to rerun or fail instrument runs, for example. However, does the audit trail capture these events? If so, how, and is the record complete? Again, risk is key, but these questions and their answers are important. This is also an opportune time to review training records, access controls, and general system security – as applicable.
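
To make the rerun/failure question concrete, a review might programmatically flag audit trail entries that warrant a closer look – modifications or deletions without a documented reason, or rerun events. The sketch below assumes a hypothetical CSV export with timestamp, user, action, and reason columns; real export formats vary by system.

    import csv

    # Flag audit trail entries for reviewer attention: modifications or
    # deletions lacking a documented reason, and any rerun events.
    # Column names are hypothetical; real exports vary by system.
    def flag_entries(path):
        flagged = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                action = row["action"].lower()
                if action in ("modify", "delete") and not row["reason"].strip():
                    flagged.append((row["timestamp"], row["user"], action,
                                    "missing reason"))
                if "rerun" in action:
                    flagged.append((row["timestamp"], row["user"], action,
                                    "rerun - verify justification"))
        return flagged

    for entry in flag_entries("audit_trail_export.csv"):
        print(entry)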

Audit trail review is an essential component of data integrity for any computerized system. Guidelines and industry best practices are now available that are very helpful in developing a process to manage the reviews. Still, it is important to understand the system’s risk and criticality so as to approach the assessment process efficiently. Use the audit trail review to put the pieces of data capture, modification, and deletion together – using metadata to give scale and meaning to the data and information. An audit trail review may be easy to overlook or curtail, but its contribution to overall data integrity, and thus patient safety, is very significant.

Medical Device Programming System Validation

The service engineering department of a leading medical device manufacturer developed a fully custom medical device programming system for distribution to its global service centers.  This new programming system was a down-scaled version of a larger and more complex automated system that the client used in production for the testing and programming of electronically driven motorized surgical devices.

The core of the new system was a software program designed to read data stored in the device’s onboard programmable electronic control component, acquire real-time performance parameter values from the device, diagnose the “fitness” of the device by comparing the acquired data against a set of configuration specifications, and adjust the device software for optimum performance.  The programming software was designed to interface, via network connection, with the devices’ software installation files and configuration specification files stored in controlled network directories.  At the end of the testing/programming sequence, a printable device history report was generated that included all recordable service actions and the device status at the time of service.
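
As a rough illustration of the diagnostic step described above (the actual implementation was the client’s proprietary design, so the parameter names and limits here are hypothetical), the fitness check amounts to comparing each acquired parameter against its configured specification range:

    # Hypothetical sketch of the fitness diagnosis: each acquired parameter
    # is compared against the min/max range from the device's configuration
    # specification file. Parameter names and limits are illustrative only.
    CONFIG_SPEC = {
        "motor_speed_rpm": (14500, 15500),
        "current_draw_ma": (180, 220),
    }

    def diagnose(acquired):
        # Produce a per-parameter pass/fail result for the device history report.
        results = {}
        for name, value in acquired.items():
            low, high = CONFIG_SPEC[name]
            results[name] = "PASS" if low <= value <= high else "FAIL"
        return results

    print(diagnose({"motor_speed_rpm": 14980, "current_draw_ma": 240}))
    # -> {'motor_speed_rpm': 'PASS', 'current_draw_ma': 'FAIL'}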

The new system also included a custom peripheral hardware fixture designed to mechanically interface with the devices at the battery/programming port, power the device, and provide logical connection to the onboard programmable electronic control component of the device.  The interface fixture was also equipped with a connection port for a multimeter to allow the service technician to gather electrical current readings from the device during testing.

Due to the universal design of the battery/programming port and the onboard programmable electronic control component, the programming system could accommodate the servicing of a wide range of related device models, each with its own unique combination of device software application and configuration specifications.

Performance Validation professionals were called upon to complete the validation of this system while operating within the client’s new electronic validation system.

The PV Advantage

Performance Validation provided a dedicated team of Validation Specialists experienced in managing Computer System Validation projects.  A risk-based approach modeled after ISPE’s GAMP 5 was executed to maximize quality and efficiency and minimize cost.

Performance Validation professionals worked closely with the client’s engineering team to ensure that all risks to the programming system’s performance, across the range of affected device products, were taken into consideration and mitigated. This approach was then duly documented in the qualification.

The Solution

The project for the device programming system included:

Initial Assessment: An initial assessment was documented to establish the system’s GMP impact and applicability to 21 CFR Part 11 regulatory requirements.

User Requirements Specification document (URS):  Given the URS document from the related production version of the device programming system (previously validated by the client) and engineering development reports for the new programming system, the PV Computer System Validation Specialist was able to extract and derive the set of user requirements needed to complete the URS.

Functional Design Specification document (FDS):  Given the FDS document from the production version of the device programming system and engineering development reports for the new programming system, the PV Computer System Validation Specialist extracted, derived, and modified the applicable subset of functional specifications needed to complete the FDS.

Measurement System Analysis (MSA):  An MSA was performed and documented to qualify the use of a specified make/model of multimeter in conjunction with the interface fixture, ensuring the accuracy and reliability of the current readings.  An MSA was also performed for the use of a specified make/model of photo tachometer, used by field technicians to measure the device’s speed of rotation/oscillation for manual entry into the diagnostic data set.  MSA testing and documentation was developed and executed by a qualified PV Validation Engineer.
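
As a simplified illustration of one check an MSA can include (the actual study design and acceptance criteria were defined in the executed MSA documentation; the readings and tolerance below are made up), repeatability can be expressed as a percentage of the measurement tolerance:

    import statistics

    # Simplified repeatability check: repeated readings of one reference by
    # one instrument/operator, expressed as a percentage of the tolerance.
    # Readings and tolerance are illustrative only.
    readings_ma = [201.2, 200.8, 201.0, 201.3, 200.9, 201.1]
    tolerance_ma = 40.0  # e.g. a specification of 200 +/- 20 mA

    repeatability_sd = statistics.stdev(readings_ma)
    pct_of_tolerance = 6 * repeatability_sd / tolerance_ma * 100  # 6-sigma spread

    print(f"repeatability = {repeatability_sd:.3f} mA "
          f"({pct_of_tolerance:.1f}% of tolerance)")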

Computer System Validation Qualification (CSVQ):  

The CSVQ testing documentation was developed and executed by the PV Computer System Validation Specialist.  The client’s service engineering subject matter experts were consulted to ensure that their knowledge and experience in testing and programming the devices were considered and that any known risks of failure in the testing and programming process were reasonably mitigated.

The CSVQ included Installation Qualification (IQ) that verified:

  • the controlled state of system-related installation instructions and service manual documents
  • the calibration of test instrumentation (multi-meter, tachometer)
  • the test installation of the interface fixture hardware
  • the test installation of the software system
  • the controlled storage of the software backup files

The CSVQ included Operational Qualification (OQ) that qualified:

  • the configuration of the testing source data files to be logically interfaced by the system software during runtime (configuration values compared against approved product engineering design documentation)
  • the functionality of all graphical user interface screens and components
  • the fully automated software sequencing for testing and programming, with separate test cases designed to address specific device types under best case conditions
  • the pass/fail and remediation logic associated with diagnostic, programming, calibration, and optimization
  • the reliability of the software system to perform consistently over multiple test instances for multiple device models
  • the application of the software system functionality to all device models for which the system was intended
  • the specified design of the report and the accuracy of the data represented within the report

Traceability of the CSVQ testing to its related User Requirements was established within the client’s new electronic validation system.

Performance Validation provided the necessary services and solutions to complete the programming system validation project, while remaining flexible and responsive to the customer’s schedule and budgetary constraints.  The validation project was completed successfully, and the new medical device programming system was placed back into production in a timely manner, to the customer’s satisfaction.

The Benefits

The advantages of tailoring each validation effort based on system risks and complexity were realized.  Through complete, quality-driven validation planning, testing was minimized and remained focused on the system’s intended use and all critical quality attributes.  This ensured that timelines were met while assuring the client that their programming system could be distributed to their service centers with full confidence.

For more information please contact:

Kevin Marcial,
CSV Services Manager
Performance Validation, LLC.
5168 Sprinkle Road
Portage, MI 49002
(269) 267-4020 Mobile

2017 Society of Quality Assurance Annual Meeting

Are you planning to attend the 2017 Society of Quality Assurance Annual Meeting March 26-31, in National Harbor, Maryland?

If YES, please plan to stop by Booth 315 to meet Kevin Marcial of the Performance Validation team.

Performance Validation is a Value Added Reseller of Adaptive GRC, a cost-effective approach to Governance, Risk, and Compliance.  Adaptive GRC is a flexible, FDA-compliant, cloud-based software suite for managing audit, risk, compliance, and quality activities.  The solution can be implemented enterprise-wide out of the box or configured for your specific requirements.

Adaptive GRC Key Capabilities:

Vendor Risk Management, IT/Information Security & Risk Compliance Oversight, Quality (CAPA) & Deviation Management, Enterprise Risk Management, Document Management, and Audit Management.

Adaptive GRC was originally built from experience in the Life Sciences sector. It has full Part 11 audit trail and electronic signature capabilities, as well as a baseline set of IT controls to allow more rapid use and deployment. Using Adaptive GRC can help identify and analyze gaps with less effort. No local installation is required (it operates in a standard web browser), and you can get access to a full eGRC system at a much lower cost than was previously possible.

Adaptive GRC Demo Video

Governance, Risk, and Compliance (GRC) Basic Concepts

Governance, risk, and compliance, or GRC, is a term one might not hear all that often in the pharma or biotech world. It is a concept most often employed in financial, legal, and information technology divisions. “Governance” refers to the processes, procedures, and activities used to manage the organization – including the GRC process itself. “Risk” refers to the assessment and mitigation (or management) of risks to the organization, from a business and/or compliance perspective, for example. Lastly, “compliance” refers to how the organization achieves adherence to internal requirements (SOPs) and external requirements (regulatory bodies and authorities). GRC is comparable to the Quality Management System (QMS) concept found in pharma and medical devices. The strength of the process comes not only from assessing, identifying, mitigating, and controlling GRC elements but from understanding how each relates to the others.

A quality GRC process is well integrated into the business processes. Data collected from the various arms of GRC must yield information that reveals trends and concerns, so that mitigations and preventive actions can be timely and effective. This means that data collection must be accurate and timely, and a software tool is useful here. There is great value in forecasting risks based on compliance or governance activities. An interconnected GRC solution allows for visualizing data to understand how two seemingly disconnected activities impact each other, for example.

GRC Software as a Solution

A software solution can certainly aid in managing GRC activities, but GRC isn’t as simple as buying software. In fact, it is important to define an organization’s GRC needs outside of any consideration of software. Too often, software is thrown at a problem as a solution when the underlying business process, such as GRC, is the root cause of the problem. Implementing software won’t fix a bad process (not likely, at least). To create a well-oiled process, start by mapping the business needs. One can use a tool like Six Sigma and/or a kaizen exercise to ascertain core activities and look for inefficiencies or faults; a mind mapping tool like XMind can also be useful in the process. Once the process has been well designed and achieves the necessary compliance and business objectives, software can be an excellent tool for automating it. A GRC software suite can automate audit and risk assessment processes, for example. True value is realized, though, when analytics and dashboarding are utilized for business intelligence. Understanding how aspects of GRC relate to and impact each other, as mentioned above, is fundamental to obtaining meaning from the tools (such as software) in place.

Testing and Risk-Based Computer System Validation

Performance Validation recently featured an introductory post on risk-based computer system validation. It is an approach by which one can focus the validation effort on critical business and regulatory requirements and reduce the need for excessive testing and redundancy. A fundamental aspect of this approach is leveraging the software vendor’s functional testing, which permits the validation effort to forgo most functional (OQ) testing and hone in on user acceptance and/or PQ testing. We mentioned that, as a reference, one may look at ISPE’s GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems for more information. Yet most guidance is not so specific that it instructs how to scale the validation for a risk-based approach. There are a couple of practical considerations and ways to do this.

We stated that the approach to a well-executed computer system validation project starts with planning. A formal planning or strategy deliverable, such as a Validation Plan, should foster the process of determining what is and is not in scope for the validation project. Just as important, one must justify the scope. A validation plan can say “we are not testing X,” but clearly it is important to document why a function is not tested. In most cases, the reason for not testing will fall into three categories: the software functionality has been tested by the software vendor, it is not regulatory critical, and/or it is not business critical.

Let’s use an eQMS (electronic quality management system) as an example. An organization has conducted an audit of the software vendor and determined that the vendor has a robust QMS and SDLC (software development lifecycle), procedures (which it follows), and documented evidence of functional testing. According to industry standards (such as GAMP 5), that organization can forgo functional testing and leverage the vendor’s testing. Depending on organizational requirements, you may still want to document the functional requirements and trace them to the vendor’s testing. Nonetheless, leveraging vendor testing is a valuable tool in executing a risk-based approach.

Continuing with the eQMS example, imagine a function that comes off the shelf but that the organization does not use. In general, that function does not need to be validated. One still may want to include it in the requirements if, for example, it may be used in the future. Either way, acknowledging its presence and that it is out of scope is advisable so that it does not look like an oversight to an auditor. This scenario can be applied to business and regulatory requirements: if a function exists but no regulatory or business requirements surround it, then one may be able to mark it as out of scope (no testing required). Testing efforts can also be scaled based on criticality and regulatory applicability. The decision process on how and what to test needs to be a collaboration among quality assurance, the business users, and any Information Technology stakeholders / subject matter experts. For example, testing a function to its full potential may require multiple scenarios; however, the function may have low business and regulatory criticality. As such, the project team may decide to test only one scenario, perhaps the most likely one (as a PQ).
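
A minimal sketch of how such scoping decisions might be captured in code – the categories and outcomes are illustrative placeholders, not a prescribed method:

    # Illustrative scoping logic for a risk-based approach: decide the test
    # depth for each function from vendor-test coverage and criticality.
    # Categories and outcomes are placeholders, not a prescribed method.
    def test_scope(used, vendor_tested, regulatory_critical, business_critical):
        if not used:
            return "out of scope - document the rationale"
        if vendor_tested and not (regulatory_critical or business_critical):
            return "leverage vendor testing - trace the requirement only"
        if regulatory_critical and business_critical:
            return "full testing - multiple scenarios"
        return "single most-likely scenario (PQ)"

    # e.g. an eQMS CAPA workflow: in use, vendor tested, and critical both ways
    print(test_scope(True, True, True, True))    # -> full testing - multiple scenarios
    print(test_scope(True, True, False, False))  # -> leverage vendor testing - ...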

A risk-based approach to computer system validation is a great way to streamline a validation effort while maintaining quality. It is essential that the validation plan clearly documents what “is” and “is not” subject to the risk-based portion of the approach. As always, the “why” is most important. Understanding and being able to defend why something is not being tested (or scaled) gives credibility to the approach.

Risk Based Computer System Validation – A Primer

Risk-based computer system validation is a term widely used in our industry now, but understanding and implementing it can be challenging. Often, organizations want “cheaper, faster, better,” but when the details of a risk-based computer system validation (CSV) plan are defined, they may find that their expectations have not been met. There are natural concerns with risk-based CSV, such as loss of project quality and data integrity. Yet there are straightforward ways to approach a validation plan to ensure that patient safety and product quality are maintained in a risk-based computer system validation project.

First, there are a couple of excellent resources to aid in the process of risk-based computer system validation. The first is ISPE’s GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems. This is the seminal guideline on how to execute risk-based CSV and one of the premier industry standards on CSV in general. One can follow the strategy and deliverable guidance outlined in the document to develop and execute a compliant and efficient project. For general CSV and some risk-based guidance, I also recommend the Drug Information Association (DIA) Computerized Data Systems in Nonclinical Safety Assessment – Current Concepts and Quality Assurance. This document complements GAMP 5 in many ways and offers another viewpoint on computer system compliance and quality. Of note – see our recent blog post on the FDA-proposed update to the GLPs for nonclinical studies if you work in that industry and are considering computer system compliance.

The approach to a well-executed computer system validation project, as in any project, starts with planning. A formal planning deliverable, such as a Validation Plan, Validation Strategy, or Testing Strategy, will drive the project’s scope, strategy, and documentation activities. These plans can include risk assessments that refine the focus, approach, and criteria for success of the validation project. One must do more than claim a risk-based approach is being used: there should be a documented assessment to demonstrate due diligence when leveraging risk in a project.

There are many ways to develop a system risk assessment for the purpose of a risk-based validation. In general, one examines the business and regulatory risks involved with the system and identifies any areas that are more or less critical. For example, at the system level, an application that collects data for study or production work is considered both business critical and subject to regulatory requirements. A web application that serves as a dashboard to the data collection system, but does not allow for data manipulation/transformation, might be considered less critical. The categories and criteria for assessment should be as objective as possible. Some suggested elements include (a simple scoring sketch follows the list):

  • GAMP category (off the shelf, configured, customized – see GAMP 5 for more information on categorization)
  • Vendor status (driven by a vendor audit)
  • System functionality and complexity (what does it do? This is an important driver)
  • Regulatory applicability and criticality (another major aspect of the assessment)
  • The novelty of the system to the organization (such as in-house experience and adherence to infrastructure requirements / SOPs)
  • The system’s use in the given industry (e.g. is it widely used in the CGMP world?)
  • Business criticality (e.g. an organization may deem a system critical even though it is not subject to GxPs).
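
A minimal sketch of how these elements might roll up into an overall system risk level – the rating scale and thresholds are placeholders for organization-defined criteria:

    # Hypothetical roll-up of the assessment elements above into a system
    # risk level. Each element is rated 1 (low) to 3 (high); the thresholds
    # are placeholders for organization-defined criteria.
    ELEMENTS = ["gamp_category", "vendor_status", "functionality_complexity",
                "regulatory_criticality", "novelty", "industry_use",
                "business_criticality"]

    def system_risk_level(ratings):
        score = sum(ratings[element] for element in ELEMENTS)
        if score >= 16:
            return "high"
        if score >= 11:
            return "medium"
        return "low"

    print(system_risk_level({
        "gamp_category": 3,            # e.g. a customized application
        "vendor_status": 2,
        "functionality_complexity": 2,
        "regulatory_criticality": 3,
        "novelty": 2,
        "industry_use": 1,
        "business_criticality": 3,
    }))
    # -> high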

I recommend looking at RAMP (Risk Assessment and Management Process): An Approach to Risk-Based Computer System Validation and Part 11 Compliance (by Richard M. Siconolfi and Suzanne Bishop, 2007). Much of what I have laid out here is recommended in this approach. It is important to identify the stakeholders that should be involved in the assessment and to ensure they are included in the decision-making process.

Risk-based computer system validation can be done in a clean, clear-cut way that clearly documents the rationale for the project’s approach to validation and defines its objectives. The significant motivator behind a risk-based approach is the justification it provides for focusing testing efforts on the most business- and regulatory-critical aspects of the system. I will address how to actually do this in the next blog post.

Part 11 Compliance Considerations for SCADA Systems

Supervisory Control and Data Acquisition (SCADA) systems are tremendous assets in the pharmaceutical, medical device, and other FDA regulated manufacturing industries, providing cost efficiencies and improving the consistency of product quality. They have also enabled the industries to move from hardcopy to electronic production records. Systems used for manufacturing pharmaceuticals, medical devices, and other regulated health care products must comply with current Good Manufacturing Practices (cGMP). Production related electronic records are subject to compliance with the same predicate rules (e.g. 21 CFR Part 211 and 21 CFR Part 820) that would apply under paper-based quality systems. In addition, the systems that produce and retain these records are required to comply with 21 CFR Part 11 rules for electronic records and signatures.

SCADA system records and functions that are subject to 21 CFR Part 11:

• User access security restriction
• Electronic signatures
• Graphical user interface (GUI) displays, operator entries, and controls
• Recipe creation, editing, and version control
• Recipe sequence enforcement
• Electronic logging of recipe procedures executed by system with time-stamped audit trails
• Data collection, storage, protection, audit trails, and retrieval
• Production data historian with audit trails and reports
• SOPs for life-cycle management

Functional Specifications for new systems must include 21 CFR Part 11-related requirements, and qualification testing must clearly challenge and document them.

Production data is acquired primarily from manufacturing equipment instrumentation via programmable logic controllers, as well as from operator workstations and interfaced peripherals such as barcode readers. Production data is also generated in the form of batch identification and target parameters from the recipe and batch databases. Interfacing manufacturing execution systems (MES) and enterprise resource planning (ERP) systems may also provide data for the batch record. Each data parameter value within the electronic batch record must be traceable back to its specific source and linked to its production batch identifier. All critical data collection functions must be qualified to ensure the integrity of the data acquired by the SCADA software system and stored in the resulting electronic batch records.
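
As a hedged illustration of the traceability requirement (the field names are hypothetical, not any particular SCADA product’s schema), each recorded value can be bound to its source and batch identifier:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Illustrative structure for a traceable electronic batch record entry:
    # every value carries its source and its production batch identifier.
    # Field names are hypothetical.
    @dataclass(frozen=True)
    class BatchRecordEntry:
        batch_id: str        # production batch identifier
        parameter: str       # e.g. "granulation_temp_c"
        value: float
        source: str          # e.g. "PLC-07/AI-3", "operator_entry", "MES"
        timestamp: datetime  # time-stamped for the audit trail

    entry = BatchRecordEntry(
        batch_id="LOT-2016-0421",
        parameter="granulation_temp_c",
        value=42.7,
        source="PLC-07/AI-3",
        timestamp=datetime.now(timezone.utc),
    )
    print(entry)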

Loss of critical data associated with a lot or batch can result in the loss of product. Data security and automated periodic or real-time backups of production data should be implemented to prevent data loss. Historical production records must remain accessible and readable throughout the lifecycle of the system or the record retention period required by regulation, whichever is longer. Finally, all data records must be comprehensive, complete, bound to their associated records, and easily retrievable for audit by the FDA.

FDA Proposed Rule for a GLP Quality System

Those of us who have worked in pharmaceuticals or medical devices are familiar with the concept and requirements of a Quality System. On August 24, 2016, the FDA issued a proposed rule amending the Good Laboratory Practices (GLPs, 21 CFR Part 58), titled “Good Laboratory Practice for Nonclinical Laboratory Studies.” The concept is akin to the one dictated in the cGMPs. One primary aspect of this proposal is to implement a GLP Quality System, defined as “the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management in the conduct of nonclinical laboratory studies” (Good Laboratory Practice for Nonclinical Laboratory Studies, 2016). No doubt, many organizations that conduct these studies already have a quality system or procedures that make up a quality system. These organizations will be well positioned to manage the procedural changes required by this proposed rule.

The proposed rule reflects the FDA’s increased focus on data integrity; indeed, data integrity is perhaps the largest driver of the rule. We have seen the FDA take increased regulatory interest in data integrity issues – through 483s and the April 2016 Data Integrity and Compliance with CGMP Draft Guidance for Industry. Many of the document’s propositions address data integrity. One standout is the FDA formalizing the concept of “ALCOA” in the GLPs: that data are “accurate, legible, contemporaneous, original, and attributable.” The proposed rule does not define those terms, so more specification may still come. ALCOA has been used for a long time in pharma, however, so the specific definitions should not deviate much from industry standards.
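
As an illustration (the field names are hypothetical), the ALCOA attributes map naturally onto the metadata a system should capture with each raw data point:

    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative mapping of the ALCOA attributes onto record metadata.
    # Field names are hypothetical.
    @dataclass(frozen=True)
    class GlpDataPoint:
        value: float           # Accurate: the verified measurement itself
        display_units: str     # Legible: readable, understandable units
        recorded_at: datetime  # Contemporaneous: captured at the time of work
        raw_source: str        # Original: the first capture, not a transcription
        recorded_by: str       # Attributable: tied to a person or system

    point = GlpDataPoint(42.7, "mg/mL", datetime(2016, 11, 1, 9, 30),
                         "HPLC-02 raw file", "jdoe")
    print(point)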

Mandating a GLP Quality System gives the FDA authority to regulate studies outside of the United States. Part of the rule deals with the issue of multisite studies, which are quite common now; often, components of a study are managed in multiple countries. The GLP Quality System will help monitor data integrity from overseas studies by requiring that all data generated within the study be included in the submission. This may include electronic audit trails from data collection, for example.

A couple of points are noteworthy from a computer system validation perspective. In the Equipment Design section (21 CFR 58.61), the FDA proposes clarifying that computerized systems are equipment. I do not believe this will change the way organizations validate computerized systems or add new scope to validation, but it does reinforce the need for computerized system validation and qualification. The FDA also proposes updating provisions in Part 58 to “address electronic data capture and maintenance” (Good Laboratory Practice for Nonclinical Laboratory Studies, 2016). They state that this is an effort to keep 21 CFR Part 11 and Part 58 consistent without duplicating Part 11 in Part 58 – essentially stating what the industry already understands: that Part 11 applies to Part 58. They are also adding a definition of “validation,” similar to the one found in Part 820, and requirements to have SOPs defining computer system validation.

So, what do you think of the proposed rule? It seems a natural and perhaps obvious addition to the GLPs. We all want to find efficient ways to increase data integrity and most organizations take this seriously. Increased attention on data integrity by the FDA may have extra costs (see the proposed rule for the FDA’s information on the estimated costs of the rule), but it is a worthwhile effort. The proposed rule comment period ends November 22, 2016.