Posts in Validation and Verification

Preventing a Potential Recall by Testing Your Product Right

June 23rd, 2018. Posted in Recall, Risk Management, Safety, Tip Of The Week, Validation and Verification, Workshops

At QA Geek Week, Orcanos gave the following lecture, built around selected recall cases researched by Orcanos, to present preventive-action methods within validation and verification methodology. These recalls show why attention must be paid to simple, observable engineering faults that could harm the patient. All are true stories from 2017 and 2018, a period that had already seen a reported increase in recalls during Q1 2018; the numbers show it to be the largest recall quarter since 2005. Orcanos R&D continues its effort to lead the market by investigating these events daily and integrating the findings into actions in the Orcanos ALM/QMS system, helping our customers prevent the next recall.


 

WHITE PAPER ACHIEVING ISO 26262 COMPLIANCE WITH ORCANOS ALM and QMS

December 8th, 2017. Posted in ISO 14971, Risk Management, Validation and Verification

ISO 26262 is an automotive standard that places requirements on the quality of software, which tools such as ORCANOS ALM and QMS are ideally positioned to enforce. With the highest adoption in the industry and a strong heritage in safety-critical applications, ORCANOS ALM and QMS have been certified as "fit for purpose" for use by development teams wishing to achieve ISO 26262 compliance. This document describes the parts of the standard that are addressed by using ORCANOS ALM and QMS.

Read More: https://www.linkedin.com/pulse/achieving-iso-26262-compliance-orcanos-alm-qms-rami-azulay/?published=t

Effective baseline management for ORCANOS | ALM Test Management

June 23rd, 2016. Posted in IEC 62304, Test Management, Tip Of The Week, Validation and Verification

ORCANOS | ALM version and baseline management behaves much like source control: it provides powerful, intuitive tools to manage and track versions, keeps a full change history, prevents duplication, and allows reuse via pointers (copy as link).

Overview

Rather than managing versions manually, ORCANOS | ALM has a built-in version management engine that stores multiple version views for a single test case, or a group of test cases, and handles each version separately, all without duplicating information.

Test Execution

For instance, historical step results can be viewed as they were at the point the test was executed. The ORCANOS | ALM "Execution Set" is used for grouping tests for execution. Once executed, the execution set stores historical data throughout each test cycle, and the full change history can be viewed in the "Execution History" tab.

Version views and baseline

Unlike other test management tools, which physically duplicate test cases when a project version advances, ORCANOS | ALM maintains pointers to the previous version (which acts as a baseline) and tracks the changes in the current baseline. Test case information from older versions is kept, and once a test needs to be updated, Orcanos provides a BRANCH option that splits the test case into two instances: one in the previous version and one in the current. Both instances are linked, so tracking changes is easy.

So, in summary, BRANCHING test cases creates an instance of the test case in the new version while preserving the original as a baseline. BRANCHING also makes it easy to manage and search through test case versions by setting up multiple views to query the data.
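The pointer-based BRANCH behavior described above can be pictured with a minimal sketch. This is an illustration of the concept only, not the actual Orcanos data model; all class and field names are hypothetical:

```python
# Sketch of pointer-based test case branching (hypothetical model,
# not the actual Orcanos implementation).

class TestCase:
    def __init__(self, name, steps, version):
        self.name = name
        self.steps = steps
        self.version = version
        self.baseline = None  # pointer to the previous-version instance

    def branch(self, new_version):
        """Split into two linked instances: the original stays as the
        baseline, the copy lives in the new version."""
        copy = TestCase(self.name, list(self.steps), new_version)
        copy.baseline = self  # link instead of duplicating history
        return copy

tc_v1 = TestCase("Login works", ["open app", "enter credentials"], "1.0")
tc_v2 = tc_v1.branch("2.0")
tc_v2.steps.append("verify 2FA prompt")
# tc_v1.steps is untouched: the baseline is preserved.
```

The key point is that branching links the two instances rather than copying history, so the baseline stays immutable while the new version evolves.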

Preserving Test Case Base Line as Test Runs

ORCANOS | ALM uses both test cases and test runs. Test cases define the set of conditions, actions, expected results, and other criteria used to determine whether a product component works correctly and meets its traced, specified requirements. Test cases can change over time to reflect functional changes in your product requirements. Such changes need to be identified easily and reflected in the correct baseline, which goes under testing.

Execution Runs on the other hand, are snapshots of test cases that are generated at a milestone in the testing cycle, such as when a new software release is provided by the development team. A test run contains all information from the related test case, in addition to the results of a specific instance of the test.

So while the test case may change in the future to reflect new user logic modifications, the test run steps that were executed in the past remain in the history of the execution run and never change, in accordance with the most restrictive regulatory standards for electronic systems, such as 21 CFR Part 11. By saving test runs, managers and auditors can look back on the exact steps followed during testing, years after the tests have been completed.

A single test case can have one or more related execution runs, such as Functional, Progression, Regression, Load, and Stability, to mention a few, depending on the test variants defined by the test parameters selected when test runs are executed. For example, if an application supports multiple browsers, such as Chrome, Edge, and Firefox, these variants can be selected when you execute the test, creating a separate test run for each selected variant. The test runs contain identical information, except for the test variant value, which indicates the browser used in testing.
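As a rough sketch of this variant mechanism (illustrative names only, not the Orcanos API), generating one run per selected variant might look like:

```python
# Sketch: one test case, one run snapshot per selected variant
# (illustrative only, not the Orcanos implementation).

def create_test_runs(test_case, variants):
    """Snapshot the test case once per variant; the runs share all
    fields except the variant value."""
    return [{"case": test_case["name"],
             "steps": list(test_case["steps"]),
             "variant": v,
             "result": None}  # filled in during execution
            for v in variants]

case = {"name": "Search returns results", "steps": ["open page", "search"]}
runs = create_test_runs(case, ["Chrome", "Edge", "Firefox"])
# Three runs, identical except for the "variant" field.
```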

 

Viewing the Full Change History

As noted, test cases change over the course of the development cycle. To view these changes, select a test case and click the Execution History tab to see a detailed change report, including who made the change, when it was made, and what was changed.

Change reports display details of content added to and removed from a test case (or any item, for that matter) each time it is saved. ORCANOS | ALM keeps audit logs of changes according to best practices for regulated design change management. These reports identify the specific changes made to a test case over time. Change reports are ALWAYS turned on and available, both for historical item information logging and for detailed audit trail logging of test cases.
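An always-on, append-only change log of this kind (who, when, what was added and removed) can be sketched as follows; the schema and function names are hypothetical, not the Orcanos implementation:

```python
from datetime import datetime, timezone

# Sketch: append-only audit trail of field-level changes
# (hypothetical schema, not the Orcanos implementation).
audit_log = []

def save_item(item, changes, user):
    """Apply the changes and record removed/added content per field."""
    for field, new_value in changes.items():
        audit_log.append({
            "item": item["id"],
            "field": field,
            "removed": item.get(field),
            "added": new_value,
            "user": user,
            "when": datetime.now(timezone.utc).isoformat(),
        })
        item[field] = new_value

tc = {"id": "TC-42", "expected": "HTTP 200"}
save_item(tc, {"expected": "HTTP 201"}, user="rami")
# The log now records who changed "expected", when, and both values.
```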

 


Once you are in the Change Report window, you will see the content that has been added to or removed from the test case. You can also view the electronic signature where changes are made and signed (in ORCANOS | ALM 2.0 and later), provided you have permission to view the audit log. If there is an attachment, a link to it will be visible.

Reuse Test Cases

Should you want to add test cases that share the same basic information, ORCANOS | ALM saves time by carrying the test forward to the next baseline automatically. You can then decide whether to keep it as is, make changes, or make deletions in that next baseline version. There is no need to duplicate an existing test case and then edit the copy. Simply select the test case from the next baseline version in the product tree module, then press the Edit button to edit the test case.

ORCANOS | ALM gives you the option to link the altered test case and the original test case into a new execution set. You can also specify the traceability between those test cases and a requirement, and any change to existing traceability will be managed based on the baseline it is associated with.

For example, you might create a "Baseline Test Case" in version 1.0 traced to a requirement in 1.0, then make changes to that test case in version 2.0. ORCANOS | ALM will alert you that there is a suspect link needing attention, allowing you to easily identify and manage changes.


BRANCHING test cases also copies the information you select from the original test case. Apart from the results, all other information, including attachments, is always included in BRANCHED test cases. The newly BRANCHED test case now has a new baseline for its Execution Run, kept in tandem with the previous version's results. In such cases, you can measure feature maturity by looking at the test's history across executed versions and seeing it stabilize. The test's history is kept through both version 1.0 and 2.0 in the same audit log, along with file attachments, links, and more from the original test case.


Original Test Case


Test Case after Branch


For more information about duplicating test cases, along with step-by-step instructions, see ORCANOS | ALM online help.

Querying Using Views

The ORCANOS | ALM view designer can be used to easily manage and search through different versions of test cases. In the View dialog box, simply select "test case" from the dropdown list, add specific criteria to the view by selecting the VERSION field, and use the operands to search the data as you wish.

The end result should look something like this:


You can use this field criterion to identify the version for which each test case is valid and make it visible at a glance in the test case list window. In the example below, you can see test cases and their versions, as well as whether they were BRANCHED. You can also see the traced requirement of each test case.


In this example, the user has also displayed the column for Version Baseline Test Case traceability links, enabling you to see which test cases are linked. Clicking the blue bar links directly to the traced item.

When the 3.0 update is released, ORCANOS | ALM advances the project, and all test cases from versions 1.0 and 2.0 automatically become available and are managed separately. Any test case that does not change, or does not need an update, keeps its original version. If a test case does need an update, the user adds it to an Execution Run built for version 3.0, without duplicating the test case, executes it, and it is marked as valid for 3.0.

Creating a general custom field allows you to filter by many other attributes and create KPI measurements of your overall productivity. ORCANOS | ALM also enables email notifications about changes based on product version, field-level security, and more.

Again, for more information about creating custom fields, see ORCANOS | ALM online help.

Folders are another option for managing test case versions in ORCANOS | ALM. Check out the online support and learning resources at www.orcanos.com for more information.

How to go About Good Practice in Validating Computer Systems in a Regulated Environment

May 30th, 2015. Posted in Company News, Orcanos Cafe, Presentation, Validation and Verification

Overview

Computer System Validation is the technical discipline used by life sciences companies to ensure that applications provide the information they were intended to provide. Monitoring and regulation by the Food and Drug Administration (FDA) evidence the need for strict quality measures, requiring specific controls and procedures during the Software Development Life Cycle (SDLC/ALM). Regulations such as the FDA's also underscore that it is not enough to follow checks and procedures; those procedures must be well documented, and the documents must be able to stand up to scrutiny by trained inspectors, especially since the financial penalties arising from an audit can be exorbitant. The implications of not following the relevant protocols in a life science software application can include the loss of life. Applying the appropriate SDLC/ALM protocols, documentation among them, is all part of the technical discipline of Computer System Validation. In effect, Computer System Validation involves what many IT people consider testing software.

Definition

According to the FDA, process validation is “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality attributes” (1987).

In 2011, the FDA defined process validation as “the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product.”

Guidance of Validation Process

Validation involves all aspects of a process (including buildings, equipment, and computer systems) meeting quality requirements and complying with the applicable rules, regulations, and guidance governing product quality, safety, and traceability.

The 2011 guidance describes three stages in the validation process:

  1. Process design: the commercial process is defined based on knowledge gained through development and scale-up activities.
  2. Process qualification: the process design is evaluated and assessed to determine whether the process is capable of reproducible commercial manufacturing.
  3. Continued process verification: ongoing assurance is gained during routine production that the process remains in a state of control.

Purpose of Validation

To validate is to confirm that a product or service meets the needs of its users. Validation starts at the planning stage and continues through the maintenance and operation phases. It is important to consider all the documentation that comes out of validation, and the process as a whole, so as to ensure that one's system remains in a validated state over time.

Model

We are already familiar with the model of validation, and with the fact that it follows a particular pattern of documentation. It is the relationship between these documents that is critical in the end: not only must the quality of these documents be assured, as with any cGMP document, but one must also consider the traceability of validation as one proceeds through an initiative.

Importance

When we talk about validation, we are referring to the whole validation and verification process. A process without validation and verification wastes considerable time, energy, money, and resources, since unnecessary things may end up being validated. So from the specialist's perspective, too, validation is very important.

Consideration of Validation

I often observe in my practice organizations going overboard, treating commercial off-the-shelf applications with very low risk in much the same manner as custom-developed applications with very high risk. So verification without validation is a factor that every organization looking into a validation process should take note of. Who is involved in validation? Looking across validation, we observe that it is a process involving many organizations and many individuals, specifically quality assurance, which is at the heart of software validation.

Key Role

Among the key roles, the validation manager is really the driver, the overall architect of your software validation initiative, if you will. The business system owner is consistently concerned with the business requirements; the input of business owners in the validation process is vital. Project managers, in turn, are responsible for the overall implementation and execution of the validation program.

So in summary, the project manager's role, and that of the head of quality assurance, is absolutely critical throughout the process, saving time and money. No validation effort is complete or effective without the input of quality assurance. A technical lead is also needed, to ensure that all technical requirements are addressed for the IQ and OQ procedures and that development progresses as intended. The validation manager's role is, I think, one of the most critical in any validation initiative, as this individual is responsible for the overall methodology and execution of the validation initiative. The validation manager also works hand in hand with the quality assurance manager and the development or technical organization to ensure that the project is on time, within budget, and meets regulatory guidelines.

Importance of Quality Assurance

If quality assurance staff lack the skill set or background necessary to ensure the effectiveness of your validation initiatives as you go through the validation process, variations in workload will result. This may determine when you decide to bring in consultants or augment your existing staff. At the beginning of the project, the business system owners and the requirements development people are heavily involved, because what you are doing is establishing the requirements essential for the validation initiative. Since validation means mapping the process to the intended use, it is very important that these requirements are established up front; and those of you using off-the-shelf software should hold your vendors' feet to the fire in getting the requirements down for their software application. Keep the intended-use principle in mind during the early stages. In the middle of your validation initiative, you will find more resource load on the design or systems integration part; as you go through systems integration, or deploy off-the-shelf software, this may involve a significant number of resources. Then, toward the tail end, more testing resources need to be brought to bear.

Validation testing

Validation testing is an area where software vendors focus a lot of their attention, specifically on the IQ, OQ, and PQ of validation. However, that is not all there is to software validation; there is a whole process involved. So as you look at the resource load across your validation initiative, be sure you have the right number of resources at the right time. Since validation is a dynamic process, maintaining the state of validation is absolutely critical. You need to make sure the system is validated correctly the first time, and that it maintains a state of validation over time. You need to be as concerned with change control as with configuration management control. There may be operating system changes, network changes, and security changes; a number of changes come up throughout the validation initiative. It is important that these changes are addressed over time and, more importantly, that they are documented and follow procedures.

Validation initiative

Upon establishing a validation initiative, standard operating procedures must also be put in place, including backup and recovery processes and security. Training is a crucial part of a validation initiative. It is also necessary, for example, to put a comprehensive incident management procedure in place. This is often overlooked when organizations validate off-the-shelf software, but it must be ensured that every incident is tracked, that corrective action processes are in place to confirm the system is corrected after reported incidents, and that all procedures are carried out in a controlled manner.

Validate

Validation also ensures that incidents are corrected, and corrected in a controlled manner. What triggers software validation, or revalidation if you will? Every software installation, or the integration of new software applications and/or modules, could trigger software validation. Maintenance upgrades, such as an operating system upgrade or changes in your network, could also trigger it, as could additional hardware or software, or systems integration requirements. Regulations, as you well know, are constantly changing; over time, different product control requirements could also trigger revalidation. A validation master plan should contain not only the triggers for software validation, but also follow-ups to ensure that the system is maintained in a validated state. Who are the consultants one can use for software validation, and what is the business case for consultants? Consultants can play a key role. For one, they bring independent expertise to the table. With commercial off-the-shelf software vendors, the IQ, OQ, and PQ scripts you are given are designed to work the first time, but they do not necessarily offer the independence a software consultant can bring to the table. So first of all, consultants bring independence; more importantly, their expertise in software validation can help accelerate the process and deliver best practices for software validation.

Been There Done That, or NOT!

I run into a lot of companies that have never done a validation project. They may never have validated an ERP system or an integrated system. Software consultants can be invaluable here, saving the time and money of having to learn everything from scratch and accelerating the learning curve for your organization. Consultants can also deliver predefined validation protocols and packaged methodology deliverables that help accelerate your process and, more importantly, ensure there is a quality process at the heart of your validation initiative. Finally, they can augment your existing staff. Recalling the workload discussed above, the validation workload varies over the entire life cycle of your validation initiative; consultants can come in at different points during that process and help you accelerate the validation initiative. So there is a good case for using experienced, qualified validation consultants, and I recommend you look at them if you have a complicated validation initiative or are validating a system for the first time. Why should consultants get involved early in the process? Early involvement lets them understand the requirements and keep the intended-use principle in mind: if you are validating a system according to intended use, your consultants need to understand what the intended use of the system is. So get the consultants involved early; they can help you optimize your validation process if you do not have standard operating procedures in place, and they can keep you from validating low-risk systems in the same manner as higher-risk systems, as I have seen in some organizations.
It is strongly recommended that you conduct a comprehensive risk assessment prior to your validation, to ensure that you are not expending valuable resources validating systems that are very low risk. Look at the strategy for your overall validation, and get consultants to help you build a comprehensive validation assessment around your particular initiative. One of the most overlooked areas is migration. Organizations are either developing custom systems or migrating from system 'A' to system 'B', and they do not consider migration. Migration is absolutely key. With custom systems, migration efforts can amount to about 30% of your validation initiative, and with a large migration initiative they can be even more. I strongly recommend taking validation and migration into consideration and being careful not to overlook this area. Records management is also important when looking at validation initiatives: it is crucial to ensure that all records associated with your validation initiative are properly archived and stored for future use. It is also a good idea to audit your validated systems over time, and conducting a risk assessment is a valuable tool.

Conclusion

So if you are about to plan a validation project, you can follow these highlighted principles to ensure you have the correct resources to get the project done:

  1. Assess the system going under validation
  2. Analyze your intended use
  3. Put all resources in place on time
  4. Consider all aspects of the system, including the migration factor
  5. Use best-practice knowledge, from internal or external resources

 

Rami Azulay

ALM Master @ Orcanos.com

 

Download PDF: How to go About Good Practice in Validating Computer Systems in a Regulated Environment

 

 

Elcam Medical Mitigating the Approval Risk by Placing Their Documents on a Pre-Audit Alert System Implementing the Complete V&V Medical Device Module

July 14th, 2013. Posted in 510(k), Company News, KPI, Software Lifecycle Management, Validation and Verification

Based on research covering 2,323 California biomedical companies, here are among the top 10 threats those companies reported:

  • #1 – FDA regulatory / environment
  • #3 – R&D productivity
  • #7 – Intellectual property protections
  • #8 – Ability to demonstrate effectiveness
  • #9 – Product liability
  • #10 – Unprepared workforce

The unspoken rule is that at least 50% of the studies published even in top-tier academic journals (Science, Nature, Cell, PNAS, etc.) cannot be repeated. More than that: where there was development behind those studies, it was impossible to recreate the documentation required to utilize their business potential.

According to the same research, the answers given to the question "Why did the company delay the research or development project?" were as follows:

  • 40.2% – Funding not available (Second Round)
  • 27.8% – Regulation (FDA, EPA, SEC)
  • 25.8% – Change in corporate priorities or strategy
  • 4.1% – layoffs
  • 7.2% – Other

Orcanos implemented NPI (New Product Introduction) for Elcam Ltd., a veteran Israeli medical device company. The focus of this project was getting the project started on the correct regulatory path, taking the initial documents created by the R&D group into a preset system that controls and governs the regulatory path selected by the organization. The overall idea was to define the development path the specific product would follow, and to match it with a system that would control and govern each step of the development lifecycle. To achieve this goal, we selected the QPack Medical™ system, which accepted the validation documents; for each document the system created a set of KPIs (Key Performance Indicators) and KRIs (Key Regulatory Indicators) that triggered the QPack Medical alert system with pre-audit notifications.

 

Mitigating Audit-Submission Risks

For example:

The insertion of MRD documents during the Idea/Concept stage triggered the following alerts:

KPI

  • Market Requirements missing coverage matrix
  • Market Requirements maturity based on functional test results
  • Market Requirements missing due date
  • Unapproved market requirements
  • Readiness for PDR review

 

KRI

  • Missing Market Requirement Document
  • Missing Market Requirement  Specifications
  • Market Requirements missing traceability to product requirements
  • Market Requirements missing validation procedures
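A pre-audit check of this kind, scanning market requirements and raising KRI-style alerts for missing traceability or validation procedures, might be sketched like this (field names are purely illustrative, not QPack's logic):

```python
# Sketch: raise KRI-style alerts for market requirements that lack
# traceability or validation procedures (illustrative, not QPack's logic).

def pre_audit_alerts(market_requirements):
    alerts = []
    for req in market_requirements:
        if not req.get("traced_product_reqs"):
            alerts.append((req["id"], "missing traceability to product requirements"))
        if not req.get("validation_procedures"):
            alerts.append((req["id"], "missing validation procedures"))
    return alerts

reqs = [
    {"id": "MR-1", "traced_product_reqs": ["PR-7"], "validation_procedures": ["VAL-3"]},
    {"id": "MR-2", "traced_product_reqs": [], "validation_procedures": []},
]
print(pre_audit_alerts(reqs))  # MR-2 triggers both alerts
```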


QPack Medical Webinar, October 2012

November 11th, 2012. Posted in IEC 62304, ISO 13485, ISO 14971, Requirements Management, Risk Management, Software Lifecycle Management, Test Management, Validation and Verification

Software Validation and Verification

June 22nd, 2011. Posted in Validation and Verification

Software Verification

The goal is to provide objective evidence that the software meets all the specified requirements.

> building the thing right

Software Validation

The goal is to confirm that the software meets the user needs and intended uses.

> building the right thing

FMEA Risk management best practice (ISO 14971)

April 5th, 2011. Posted in ISO 14971, Risk Management, Validation and Verification

The following process is based on QPack FMEA Risk Management Module.

Phase 1: Risk Assessment – Intended use and safety related characteristics

You can build a risk assessment document in QPack and add the safety-related questions.

Example: Add risk assessment document and add safety related questions by category

Set the paragraph type to "Safety Question".

The paragraph has a short and simple workflow:

Open – a new safety question was added

Estimated – the safety-related question was addressed by a risk object

NA – the safety question is not applicable for this product


Now start identifying risks to your product. Each risk is linked to a safety question, so you can verify that every question was addressed by a risk.

Example: Safety questions traceability

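The coverage check behind this traceability, every safety question addressed by at least one linked risk, reduces to a simple set difference (an illustrative sketch, not QPack code):

```python
# Sketch: find safety questions with no linked risk (illustrative only).

def uncovered_questions(safety_questions, risks):
    """Return the safety questions not addressed by any risk."""
    covered = {q for r in risks for q in r["linked_questions"]}
    return [q for q in safety_questions if q not in covered]

questions = ["SQ-1 energy hazards", "SQ-2 biocompatibility"]
risks = [{"id": "RISK-1", "linked_questions": ["SQ-1 energy hazards"]}]
print(uncovered_questions(questions, risks))  # → ['SQ-2 biocompatibility']
```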

The outcome of this phase is the risk category (failure mode) as shown here:

Example: Risk category list (failure mode)


Phase 2: Hazard Identification

Set the risk status to "Identify Hazard".

Use QPack to add a new risk object.

Set the risk name and failure mode.

Example: Setup new risk

Risk name: Failure in power supply

Failure mode: Energy Electromagnetic

Cause of failure: Short circuit

Effect of failure: Shock to patient


Phase 3: Risk estimation

Change the risk status to "Estimation".

Use the QPack risk estimation form to calculate the RPN.

Set the RPN parameters to calculate the risk zone (Acceptable / ALARP / Unacceptable):

Example: Calculate RPN before mitigation

  • Probability (P1)
  • Detectability (D1)
  • Severity (S1)

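The arithmetic behind this form is the standard FMEA product of the three parameters, RPN = P1 × D1 × S1, mapped to a risk zone. The sketch below uses illustrative thresholds; real acceptance criteria are defined in the risk management plan per ISO 14971:

```python
# Sketch: RPN = Probability x Detectability x Severity, mapped to a
# risk zone. Thresholds are illustrative, not QPack defaults; actual
# acceptance criteria come from the risk management plan (ISO 14971).

def rpn(p1, d1, s1):
    return p1 * d1 * s1

def risk_zone(value, acceptable_max=20, alarp_max=60):
    if value <= acceptable_max:
        return "Acceptable"
    if value <= alarp_max:
        return "ALARP"
    return "Unacceptable"

before = rpn(p1=4, d1=5, s1=5)   # pre-mitigation
after = rpn(p1=2, d1=2, s1=5)    # mitigation lowers probability/detectability
print(before, risk_zone(before))  # 100 Unacceptable
print(after, risk_zone(after))    # 20 Acceptable
```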

Phase 4: Identify and setup risk control

Change the risk status to "Identify Controls".

Identify preventive actions to reduce the risk's severity or probability, or to improve its detectability.

Example: Risk recommended actions

Risk reduction: Front panel warning lights will turn on to indicate a power supply fault.


At this phase, or in later phases, we set the risk's estimated cost, assign it to the relevant person, and set a due date.

Example: Risk cost, due date and assignment


If a software requirement is used as a recommended action, add the software requirement to your SRS.

Set the requirement's "Risk Mitigation" indication to "Yes".

Example: Add software requirement for mitigation

Software requirement: Use the alert mechanism to control warning lights in front panel


Link the software requirement to the relevant risk for mitigation traceability. Use the “Risk Mitigation” link type.

Example: Software requirement is linked to the risk


Based on the risk mitigation, set the new RPN value.

Example: Revised RPN is automatically calculated


Phase 5: Completeness of risk control

The software team develops the software requirement.

Once the software requirement is finished, its status is changed to "Done".

Example: software requirement is implemented


Show a filter of all software requirements used for risk mitigation that are in status "Done".

Example: report of implemented software requirements used for mitigation


Open the related risk of each requirement and change the risk status to "Control Implemented".

Example: risk controls are implemented


Phase 6: Risk verification

Add a software test case to the STD and link it to the software requirement in the SRS

Example: traceability of test case to software requirement used for mitigation


The SQA team executes the tests; when a test passes, its status is set to "Pass".

Example: Get all requirements that are used for risk mitigation and trace their verification status (derived from test execution status)

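The roll-up described here, where a mitigation requirement counts as verified only when all of its linked tests pass, can be sketched as (illustrative names, not the QPack schema):

```python
# Sketch: derive a requirement's verification status from its linked
# test executions (illustrative, not the QPack schema).

def verification_status(linked_test_results):
    """'Verified' only if every linked test passed."""
    if not linked_test_results:
        return "Not covered"
    if all(r == "Pass" for r in linked_test_results):
        return "Verified"
    return "Open"

print(verification_status(["Pass", "Pass"]))   # → Verified
print(verification_status(["Pass", "Fail"]))   # → Open
print(verification_status([]))                 # → Not covered
```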

When all tests have passed for the software requirement, open the related risk and change the risk status to "Verified".

Example: risk is set to “Verified”


Phase 7: Risk management report

Create a filter that retrieves all risk items

Example: risks report


Build the risk management document in QPack based on your template and embed the risk report filter in the relevant chapter.

Example: risk management document in QPack


Generate the risk management document and save it as an attachment.

Example: Generated risk and hazards document


IEC 62304 For Software Lifecycle In Medical Device

February 22nd, 2011. Posted in IEC 62304, Validation and Verification

IEC 62304 (and EN 62304) is the international standard for the software life cycle of medical devices. IEC 62304 specifies life cycle requirements for the development of medical software and software within medical devices. It is harmonized in the European Union and the United States. Compliance with this standard supports the FDA 21 CFR 820 requirements as well as the Medical Device Directive 93/42/EEC.

Main activities described in IEC 62304 that are fully supported in QPack:

  • Requirements traceability
  • Integrated risk management process
  • Test Management (Unit/Module/Integration/System)

See QPack main features

Orcanos

Contact

8 Beit Oved Street
Tel Aviv, Israel
+972-3-5372561
info@orcanos.com

Copyright © Orcanos, All rights reserved. | Privacy policy | Terms of use