On December 7, 2017, the US Food and Drug Administration (FDA) announced several digital health policy documents designed to “encourage innovation” and “bring efficiency and modernization” to the agency’s regulation of digital health products. The three documents comprise two draft guidances and one final guidance, which address, in part, the important changes made by Section 3060 of the 21st Century Cures Act (Cures Act) to the medical device provisions of the Federal Food, Drug, and Cosmetic Act (FDCA), which we previously summarized, and which expressly excluded from the definition of a medical device five distinct categories of software or digital health products. FDA Commissioner Dr. Scott Gottlieb emphasized that these documents collectively “offer additional clarity about where the FDA sees its role in digital health, and importantly, where we don’t see a need for FDA involvement.”

To read the full advisory click here.

In a recent article published in Intellectual Property & Technology Law Journal, and expanding on our previous post, we discuss the legal and regulatory implications of applying artificial intelligence (AI) to the EU and US healthcare and life sciences sectors.

AI software, particularly when it involves machine learning, is increasingly used within the healthcare and life sciences sectors. Its uses include drug discovery (e.g., software that examines biological data to identify potential drug candidates), diagnostics (e.g., an app that analyzes real-time data to predict health issues), disease management (e.g., mobile-based coaching systems for pre- and post-operative care) and post-market analysis (e.g., adverse event data collection systems).

Given that the healthcare and life sciences sectors are highly regulated, the development and use of AI require careful scrutiny of applicable legal and regulatory obligations and any ongoing policy developments. The article discusses how AI may contribute to the research and development of health products and to the care and treatment of patients, and the corresponding legal and regulatory issues surrounding such technological advances.

In Europe, depending on its functionality and intended purpose, software may fall within the definition of ‘medical device’ under the Medical Devices Directive. However, classification of software is fraught with practical challenges because, unlike classification of general medical devices, it is not immediately apparent how the legal parameters apply. The European Commission has published guidelines to interpret the Directive’s requirements, but these are not legally binding (although they were recently endorsed by the Advocate General of the Court of Justice of the European Union, as discussed in our advisory). The new EU Regulations adopted on April 5, 2017, which come into effect on May 26, 2020, will widen the scope of the regulatory regime considerably, and will require all operators to re-assess product classification well in advance of this deadline.

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices. However, there are a variety of legal, regulatory, and compliance issues that may arise for AI developers based on the intended use of the product. Once a product is classified as a medical device, its class will define the applicable regulatory requirements, including the type of premarketing notification/ application that is required for FDA clearance or approval. As the use of AI becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or if other agencies — such as the U.S. Federal Trade Commission — step up their scrutiny of such systems.

Given the capability of AI to capture various forms of personal data, data protection and cybersecurity are further important considerations, and will be essential to the sustainability of the technology. In the EU, these rules are soon to be overhauled by the General Data Protection Regulation, which applies from May 25, 2018. And in the US, regardless of the product’s classification, AI developers will need to assess whether the HIPAA rules apply, along with any design controls and post-manufacture auditing obligations that may also apply in the cybersecurity space.

The U.S. Food and Drug Administration (FDA) issued a Warning Letter on April 12, 2017 requiring an explanation of how St. Jude Medical plans to correct and prevent cybersecurity concerns identified for St. Jude Medical’s Fortify, Unify, Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators, and the Merlin@home monitor.

The Warning Letter follows a January 2017 FDA Safety Communication on St. Jude Medical’s implantable cardiac devices and the Merlin@home transmitter. The safety alert identified that such devices “contain configurable embedded computer systems that can be vulnerable to cybersecurity intrusions and exploits. As medical devices become increasingly interconnected via the Internet, hospital networks, other medical devices, and smartphones, there is an increased risk of exploitation of cybersecurity vulnerabilities, some of which could affect how a medical device operates.” FDA conducted an assessment of St. Jude Medical’s software patch for the Merlin@home Transmitter and determined that “the health benefits to patients from continued use of the device outweigh the cybersecurity risks.” Consequently, FDA’s safety alert provides recommendations to healthcare professionals, patients and caregivers to “reduce the risk of patient harm due to cybersecurity vulnerabilities.”

The following month, FDA conducted a 10-day inspection at St. Jude Medical’s Sylmar, CA facility and concluded that St. Jude Medical had not adequately addressed the cybersecurity concerns. Notably, FDA observed failures related to corrective and preventive actions (CAPA), controls, design verification and design validation.


In one instance, FDA found that St. Jude Medical based its risk evaluation on “confirmed” defect cases without considering the potential for “unconfirmed” defect cases, and therefore underestimated the occurrence of a hazardous situation related to premature battery depletion. Moreover, FDA found that St. Jude Medical failed to follow its CAPA procedures when evaluating a third-party cybersecurity risk assessment report. Finally, FDA found that St. Jude Medical’s management and medical advisory boards did not receive information on the potential for “unconfirmed” defect cases and were falsely informed that no death had resulted from the premature battery depletion issue.

In each instance, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation, and its response was therefore deemed inadequate by FDA.

Control Procedures

On October 11, 2016, St. Jude Medical initiated a recall for Fortify, Unify, Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators due to premature battery depletion. Despite the recall, FDA noted that some devices were distributed and implanted. Again, FDA was unable to determine whether St. Jude Medical’s corrective actions were sufficient because the company failed to provide evidence of implementation.

Design Verification and Validation

In addition, FDA found that St. Jude Medical failed to ensure that “design verification shall confirm that the design output meets the design input requirements,” and failed to accurately incorporate the findings of a third-party assessment into updated cybersecurity risk assessments for high-voltage and peripheral devices like the Merlin@home monitor. Specifically, the Merlin@home monitor’s testing procedures did not require full verification to ensure the network ports would not open with an unauthorized interface. Further, the cybersecurity risk assessments failed to accurately incorporate the third-party report’s findings into their security risk ratings. Also, even though the same reports identified the hardcoded universal unlock code as an exploitable hazard for the high-voltage devices, St. Jude Medical failed to estimate and evaluate this risk.

For all violations, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation, and its response was therefore deemed inadequate. FDA has given St. Jude Medical 15 days to explain how the company plans to act on the premature battery depletion issue (despite related injuries and one death), the improper focus on “confirmed” cases, and the distribution and implantation of recalled devices. FDA warns that St. Jude Medical could face additional regulatory action if the matters are not resolved in a timely manner.

The Warning Letter, together with the January 2017 Safety Communication and a December 2016 Guidance on Postmarket Management of Cybersecurity in Medical Devices (which we have previously summarized here and here), demonstrates FDA’s continued scrutiny of medical device cybersecurity. It appears that FDA is trying to communicate the need for device manufacturers to incorporate cybersecurity checkpoints throughout a product’s lifecycle to prevent patient harm and potential regulatory action. Not a bad idea for an increasingly tech-savvy world.

We previously described some of the ways in which life sciences companies are exploring the potential of IBM’s supercomputer, ‘Watson®’, to assist with product development and disease treatment.  Such uses raise important questions about how Watson and other software are treated under medical device regulations.  These questions are particularly important as tech companies find themselves wading into the healthcare arena and may be unaware of the heavily regulated industry they are entering.

The regulation of medical software has been controversial and subject to the vagaries of guidelines and subjective interpretations by the regulatory authorities. We consider below the regulatory minefield and the circumstances in which software is regulated as a medical device in the EU and U.S.


How is software regulated?

In the EU, a medical device means any instrument or other apparatus, including software, intended by the manufacturer to be used for human beings for the purpose of, among other things, diagnosis, prevention, monitoring, treatment or alleviation of disease. There is no general exclusion for software, and software may be regulated as a medical device if it has a medical purpose, meaning it is capable of appreciably restoring, correcting or modifying physiological functions in human beings. A case-by-case assessment is needed, taking account of the product characteristics, mode of use and claims made by the manufacturer. However, the assessment is by no means straightforward for software: unlike the classification of general medical devices, it is not immediately apparent how these parameters apply, given that software does not act on the human body to restore, correct or modify bodily functions.

As a result, software used in a healthcare setting is not necessarily a medical device. The issue is whether the software can be used as a tool for treatment, prevention or diagnosis of a disease or condition. For example, software that calculates anatomical sites of the body, and image-enhancing software intended for diagnostic purposes, are generally viewed as software medical devices because they are used as tools, over and above the healthcare professional’s clinical judgment, to assist clinical diagnosis and treatment. By contrast, software used merely for conveying or reviewing patient data is generally not a medical device.

What about Watson?

The main benefit of IBM’s cognitive computing software is its ability to analyze large amounts of data to develop knowledge about a disease or condition, rather than treatment options for an individual patient. Currently, its uses are largely limited to research and development. On the basis of these uses, the software may not be considered as having the medical purpose necessary for it to be classified as a medical device.

However, uses of the software that aim to enhance clinical diagnosis or treatment of a condition may potentially alter the regulatory status, especially if the function of the software goes beyond data capture and communication. Similarly, some of the new partnerships recently announced, described in our previous post, are aimed at developing personalised management solutions, or mobile coaching systems for patients. These may be viewed as having a medical purpose in view of the health-related information they acquire to provide informed feedback to the patient on self-help, or decision-making relating to the patient’s treatment plan. As the uses for Watson increase and become more involved in treatment decisions, such a change in regulatory status becomes more likely.

Will there be any change under the new Medical Device Regulations?

The EU legislative proposals for new medical device Regulations, which have reached broad agreement in the EU legislature but have not yet been adopted, contain additional provisions that specifically address software medical devices. Of particular relevance, software with a medical purpose of “prediction and prognosis” will be considered as coming within the scope of the Regulations. This means that software and apps that were previously excluded from regulation may in the future be “up-classified” and become subject to regulation as medical devices. Through a number of initiatives, the EU institutions recognize the importance of mHealth in the healthcare setting, and are seeking to ensure it is properly regulated as its use increases.


How is software regulated?

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA considers a medical device to be an instrument or other apparatus, component, or accessory that is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease in man or other animals, or that is intended to affect the structure or function of any man or other animal but which is not dependent on being metabolized (i.e., a drug) for achievement of that purpose.   FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices.

One type of software FDA has not issued guidance on is Clinical Decision Support Software (CDSS). CDSS is software that utilizes patient information to assist providers in making diagnostic or treatment decisions. Until recently, CDSS was approached in a similar fashion to FDA’s framework for mobile apps. In other words, CDSS was viewed as existing on a continuum from being a Class II regulated medical device, to being subject to FDA’s enforcement discretion, to not being considered a medical device at all. On December 13, 2016, however, the 21st Century Cures Act was signed into law, clarifying the scope of FDA’s regulatory jurisdiction over stand-alone software products used in healthcare.

The 21st Century Cures Act contains a provision – Section 3060 – that explicitly exempts certain types of software from the definition of a medical device. As relevant for CDSS, the law excludes from the definition of a “device” software with the following functions (unless the software is intended to “acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system”):

  1. Displaying, analyzing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines);
  2. Supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
  3. Enabling health care professionals to independently review the basis for such recommendations so that the software is not primarily relied upon to make a clinical diagnosis or treatment decision regarding an individual patient.

Thus the Act generally excludes most CDSS from FDA jurisdiction. However, it is worth noting that FDA may bring CDSS back under its jurisdiction if it makes certain findings regarding: (1) the likelihood and severity of patient harm if the software does not perform as intended, (2) the extent to which the software is intended to support the clinical judgment of a health care professional, (3) whether there is a reasonable opportunity for a health care professional to review the basis of the information or treatment recommendation, and (4) the intended user and use environment.
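The exclusion logic described above can be sketched as a simple decision helper. This is an illustrative simplification, not legal advice: the class and function names are hypothetical, and the real statutory analysis turns on intended use and facts no boolean flag can capture.

```python
# Simplified model of the Section 3060 CDSS carve-out described above.
# All names here are invented for illustration.

from dataclasses import dataclass


@dataclass
class SoftwareFunction:
    # The statutory carve-out: software intended to acquire, process, or
    # analyze a medical image, IVD signal, or signal-acquisition pattern
    # cannot use the exclusion.
    processes_image_or_ivd_signal: bool
    # The three listed criteria, all of which the text joins with "and":
    displays_medical_info: bool            # criterion 1
    supports_hcp_recommendations: bool     # criterion 2
    basis_independently_reviewable: bool   # criterion 3


def likely_excluded_from_device_definition(fn: SoftwareFunction) -> bool:
    """Rough model: the image/signal carve-out defeats the exclusion;
    otherwise all three listed criteria must hold."""
    if fn.processes_image_or_ivd_signal:
        return False
    return (fn.displays_medical_info
            and fn.supports_hcp_recommendations
            and fn.basis_independently_reviewable)
```

Even software that passes such a check could be pulled back into FDA's jurisdiction under the four findings listed above, so the sketch captures only the statutory starting point.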

What About Watson?

Based on this regulatory framework, IBM’s Watson would not generally be regulated as a medical device if simply used as a tool to assist physician review of medical data. In many uses, Watson is still dependent on human intervention and therefore does not make independent patient-specific diagnoses or treatment decisions. Importantly, statements about Watson also show that it is intended to be used simply as a tool by physicians and it is not intended that physicians rely primarily on Watson’s recommendations.

As such, in many applications, Watson is likely to be the kind of CDSS statutorily excluded from the definition of a medical device. However, as Watson and other forms of artificial intelligence advance and become capable of making or altering medical diagnoses or treatment decisions with little input or oversight from physicians, or transparency as to underlying assumptions and algorithms, these technologies will fall outside of the exclusion. As the use of such forms of artificial intelligence becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or if other agencies — such as the U.S. Federal Trade Commission — step up their scrutiny of such systems. Additionally, state laws may be implicated with regard to how such technology is licensed or regulated under state public health, consumer protection, and medical practice licensure requirements.

Published in Privacy & Cybersecurity Law Report’s April 2017 issue.

In the closing days of last year, the FDA issued its final guidance on postmarket medical device cybersecurity. This guidance is a corollary to the previously issued final guidance on premarket cybersecurity issues, and the premarket and postmarket pieces should be read, and fit, together. In both cases, the FDA sets out a comprehensive, lifecycle approach to managing cyber risk. Under this guidance, the FDA is asking companies to operationalize a structured way to think through and act on these product, hardware, software, and network issues. Last year, we wrote about five things companies can do now to get ahead of the curve on the premarket guidance, and they still apply.

The final postmarket guidance follows much of the 2016 draft guidance, with a few important changes. We wrote a detailed piece on the 2016 draft guidance. The two big changes are: a change in focus from the possible cyber impact on the product (what was called the “essential clinical performance” of the device) to a focus on the health impact on the patient if a vulnerability were exploited (what is now called “patient harm”); and a fleshing-out of the recommended vulnerability disclosure process and time frames. Focusing on the possible impact to the patient seems like a good change. Cyber risk is a function of threat, vulnerability, and consequence, and with medical devices the consequence surely revolves around the patient. It is the second change – around vulnerability disclosure, timing for disclosure, and required information sharing with an industry-wide Information Sharing and Analysis Organization (ISAO) – that will take real thought, work, and finesse.

Under the final guidance, if there is an “Uncontrolled Risk” given the exploitability of the vulnerability and the severity of patient harm if exploited, that risk should be remediated “as quickly as possible.” As for notice to the FDA and customers, you must report these vulnerabilities to the FDA pursuant to part 806 (which requires manufacturers to report certain device corrections and removals), unless the manufacturer meets four specific requirements: (1) there are no known serious adverse events or deaths; (2) within 30 days of learning of the vulnerability, the manufacturer communicates with its customers and user community describing at a minimum the vulnerability, an impact assessment, the efforts to address the risk of patient harm, and any compensating controls or strategies to apply, and commits to communicating the availability of a future fix; (3) within 60 days of learning of the vulnerability, the manufacturer fixes the vulnerability, validates the change, and distributes the fix such that the risk is reduced to an acceptable level; and (4) the manufacturer participates in the ISAO and provides the ISAO with any customer communications upon notification of its customers. If you meet these obligations and timelines, you do not have to report under part 806 – but if you don’t meet these obligations, you do have to report and are then subject to the usual 806 reporting.
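The four conditions above amount to a conjunctive test: miss any one and part 806 reporting applies. A minimal sketch, with hypothetical names and with each condition collapsed to a flag or day count purely for illustration:

```python
# Illustrative model of the part 806 reporting exception described above.
# Real compliance turns on the substance of each communication and fix,
# not just the timelines; this only encodes the four-part structure.

def qualifies_for_806_exception(no_serious_adverse_events: bool,
                                days_to_customer_notice: int,
                                days_to_validated_fix: int,
                                participates_in_isao: bool) -> bool:
    return (no_serious_adverse_events          # (1) no serious events/deaths
            and days_to_customer_notice <= 30  # (2) customer communication
            and days_to_validated_fix <= 60    # (3) validated, distributed fix
            and participates_in_isao)          # (4) ISAO participation
```

The all-or-nothing shape of this test is what drives the strategic decisions discussed below: a manufacturer that cannot meet the 60-day fix window for every vulnerability has to choose between the exception and ordinary 806 reporting.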

So, to avoid part 806, you want to follow the four conditions. But they are more complex than one might think at first glance. As a general matter, information technology companies do not like to notify users of a vulnerability until there is a fix. A known vulnerability without a fix can easily be (and often is) exploited by adversaries, leaving customers less secure. Therefore, companies generally announce vulnerabilities and fixes together, so that customers can protect themselves before the bad guys can exploit. Only on rare occasions, when there is a known active exploit, would you notify customers before you have a fix. The FDA and the medical device industry seem to be searching for the appropriate approach for medical devices, where there is potential for non-trivial patient harm, and an existing regulatory structure and overall public health mission. The issue of vulnerability disclosure is complex, and subject to much debate (the U.S. Commerce Department just published the results of a year-long study, concluding there is still much work to be done to get it right). Similarly, the issue of sharing cyber threat and vulnerability information with others in industry, and with the government, is still an area of much discussion. A year ago, Congress passed an information-sharing bill to help reduce potential barriers to information sharing, including provisions for some amount of liability protection for sharing cyber threat and vulnerability information with others. Today, companies are still finding their way around the business and legal issues, even under the new legislation.

Therefore, to meet the 30- and 60-day notice requirements and the information-sharing requirement, medical device companies will have to craft their notices carefully: they must satisfy the specificity requirements in the final guidance, yet not disclose so much that adversaries are alerted to the possibility of a vulnerability, figure out what function, method, process or technology is implicated, and exploit it before a fix is developed, shared, and implemented. The same considerations hold for sharing vulnerability and notice information with the ISAO, whose members will include competitors and whose information could (depending on the ISAO rules and information classification decisions) be further shared with government and security industry partners. Net-net, a clear understanding of the technical vulnerability, the possible consequences, the ability to fix, and the line between notification and usefulness for exploit is required. It may also be true that no fix can be had in 60 days, and that if many reported vulnerabilities are backed up in the queue, a company may fix the priority tickets first, with the lowest-priority items taking longer than 60 days to address as a matter of bandwidth and expertise. Consequently, over time, companies may be faced with decisions about whether to try to meet the 806 exception conditions, or to file 806 notices with the FDA and deal with the potential implications. None of this is to say that the benefits of the 806 exception are not worth it, or are trivial; it just means that your approach has to be clueful and strategic.

One more issue, of course, continues to be quite important – the global rules must be rationalized. Medical device companies build once and sell globally, and the security, integrity, vulnerability, and disclosure rules and best practices have to work globally. As these new guidelines are rolled out, significant education globally will be critical.

Over time, and most likely, like most things in security, this ‘final guidance’ will be a work in progress, as companies and the FDA and regulators globally begin to deal with specific use cases that push the boundaries of what situation is a “Controlled Risk,” an “Uncontrolled Risk,” and what 30 and 60 day notifications, and fixes, and ISAO information is required, helpful, and not helpful. As we always say – ‘security is a journey, not a destination’ – and so too will the postmarket cyber guidance be.


The 21st Century Cures Act (Cures Act) was signed into law on December 13, 2016, following a multi-year, bipartisan, and bicameral legislative effort to accelerate the pace of the discovery, development, and delivery of new treatments and cures. The Cures Act packages a wide range of medical innovation measures − including increased research and Food and Drug Administration (FDA) funding and provisions aimed at accelerating FDA’s processes for reviewing and approving new drugs, biologics, and medical devices − with funding for the opioid abuse crisis, mental health reforms, and modifications to various Medicare payment policies.

The Cures Act also incorporates significant health IT policies, which generally aim to streamline and promote the use of interoperable electronic health records (EHRs) and support coverage of telehealth services. In addition to addressing FDA regulation of software, the law includes provisions to: reduce regulatory and administrative documentation requirements in the use of EHRs; promote the facilitation of secure, interoperable exchange of electronic health record data while protecting patient privacy; and require the Department of Health and Human Services (HHS) and MedPAC to submit reports to Congress on expanding Medicare coverage of telehealth services. Of note, an earlier version of the Cures Act would have gone a step further by expanding telehealth coverage.

One of the most significant health IT-related provisions included in the Cures Act enforces the prohibition of information blocking by health IT developers and exchange networks. Health IT developers will be banned from interfering with or preventing the access, exchange, or use of electronic health information, to allow for greater data interoperability. In addition, the HHS Office of Inspector General will be permitted to investigate complaints of data blocking and subsequently enforce monetary penalties.

For additional information and a comprehensive overview of the Cures Act, we recommend that you read Arnold & Porter’s Advisory or listen to our webinar on the topic.

The Food and Drug Administration (FDA) recently introduced a new webpage for reporting allegations of regulatory violations by medical device manufacturers or marketers. The new webpage, launched on October 21, 2016, enables any person—including current or former employees, competitors, or even plaintiffs’ attorneys—to submit a report to FDA regarding a broad variety of potential violations. Illustrating the types of allegations it expects to receive, FDA identifies:

  • non-FDA-approved promotion or advertising;
  • failing to submit required safety reports;
  • failing to comply with design or manufacturing responsibilities;
  • marketing a device without proper FDA clearance;
  • importing a device without satisfying the applicable legal requirements;
  • forging or falsifying an export certificate;
  • failing to register and list a device; and
  • knowingly deceiving FDA.

FDA encourages reporters “to include supporting information and contact information in case additional information is needed for FDA to understand the allegation and act on the report.” The agency also permits anonymous reporting and guarantees that it will maintain reporters’ anonymity unless legally required to do otherwise.

According to the new webpage, all reported allegations will be reviewed by the Center for Devices and Radiological Health (CDRH). CDRH is then charged with prioritizing its review based on the level of potential risks to patients. Following an assessment of the allegation, CDRH has the option of issuing a warning letter, conducting an inspection, or even requesting a recall. CDRH may also request additional information from or simply monitor the medical device manufacturer.

FDA implemented a similar reporting mechanism in 2010, the “Bad Ad Program,” which only addresses reports of potentially untruthful or misleading prescription drug advertising and promotion, rather than the broad array of violations addressed by the new website. Also, although anyone may submit a complaint to FDA, the Bad Ad Program “is focused primarily on health care professionals” and is “designed to educate health care professionals about the role they can play in helping FDA ensure that prescription drug advertising and promotion is truthful and not misleading.” Despite its comparatively limited scope, reports submitted through the Bad Ad Program led to the issuance of a number of enforcement letters. The new website is broader in scope and may have a similar, if not greater, impact. At any rate, whether or not this new website generates additional FDA actions against medical device manufacturers, records of inquiries and investigations completed by CDRH will potentially be available to plaintiffs’ attorneys through requests under the Freedom of Information Act.

Last month, the US Food and Drug Administration’s (FDA) Center for Devices and Radiological Health (CDRH) issued a Draft Guidance for industry entitled Software as a Medical Device (SaMD): Clinical Evaluation (Draft Guidance).  The Draft Guidance was developed by the International Medical Device Regulators Forum (IMDRF), of which FDA is a member, and demonstrates FDA’s continued focus on the growing importance of software in healthcare (our discussion of other software-related FDA guidances is available here).  Once finalized, the Draft Guidance will classify the whole gamut of SaMD and establish globally harmonized risk-based criteria for assessing the software’s safety and effectiveness.

Comments on the Draft Guidance are due to CDRH by December 13, 2016 (Docket No. FDA-2016-D-2483).  Once the comment period closes, a final version of the Draft Guidance will be submitted to the IMDRF management committee in February 2017.

The Draft Guidance relies on IMDRF concepts and definitions to define SaMD, to outline clinical evaluation methods that would be useful for SaMD, and to determine the appropriate level of clinical evidence required for SaMD in the categories established in a 2014 IMDRF final guidance.

“Based on the significant impact SaMD has on clinical outcomes and patient care, a SaMD manufacturer is expected to gather, analyze, and evaluate data, and develop evidence to demonstrate the assurance of safety, effectiveness and performance of the SaMD,” the Draft Guidance states.

What is a SaMD?

As defined in the Draft Guidance, SaMD is “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.”  SaMD runs on general computing platforms and is not intended to drive a hardware medical device; therefore, it does not come into direct contact with patients.  SaMD may be used in combination with other products, including medical devices, and may also interface with other medical devices, other SaMD software, and general purpose software.

Mobile applications that meet the above definition are considered SaMD and are therefore subject to the applicable requirements under the Draft Guidance.  Note that these new requirements are in addition to the requirements imposed on mobile medical application vendors by the 2015 FDA Guidance for Industry: Mobile Medical Applications (MMA Guidance).

Other examples of SaMD include software that diagnoses a condition using the triaxial accelerometer operating on the embedded processor of a digital camera; software that performs image post-processing to aid in the detection of breast cancer, running on a general purpose computing platform located in the image-acquisition hardware medical device; software that conducts treatment planning by supplying information used in a linear accelerator; and software that allows a commercially available smartphone to view images obtained from an MRI for diagnostic purposes.

Risk-Based Evaluation of SaMD

Consistent with the MMA Guidance’s risk-based enforcement approach, the Draft Guidance lays out risk-based criteria for how to regulate different kinds of software and what kind of evidence is needed for each regulatory category.  The Draft Guidance proposes to stratify software into categories I-IV based on two factors: whether the software informs care, drives care, or treats/diagnoses, and whether the condition in question is non-serious, serious, or critical.  As such, software that treats or diagnoses a critical condition is in the highest risk category, while software that informs care for a non-serious condition is in the lowest.  The Draft Guidance does not address how the SaMD categories fit together with the FDA medical device classifications and associated regulations.
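The two-factor stratification described above can be sketched as a simple lookup table. This is a hypothetical illustration of the IMDRF-style categorization matrix, not code or an official table from the Draft Guidance itself; the function name and labels are our own:

```python
# Sketch of the Draft Guidance's two-factor SaMD risk stratification:
# (significance of the information) x (state of the healthcare condition).
# Category IV is the highest risk; category I is the lowest.

SAMD_CATEGORY = {
    ("treat_or_diagnose", "critical"): "IV",
    ("treat_or_diagnose", "serious"): "III",
    ("treat_or_diagnose", "non-serious"): "II",
    ("drive", "critical"): "III",
    ("drive", "serious"): "II",
    ("drive", "non-serious"): "I",
    ("inform", "critical"): "II",
    ("inform", "serious"): "I",
    ("inform", "non-serious"): "I",
}

def samd_category(significance: str, condition: str) -> str:
    """Return the SaMD risk category (I-IV) for a given role and condition severity."""
    return SAMD_CATEGORY[(significance, condition)]

# Software that treats or diagnoses a critical condition sits in the
# highest category; software that informs care for a non-serious
# condition sits in the lowest.
print(samd_category("treat_or_diagnose", "critical"))  # IV
print(samd_category("inform", "non-serious"))          # I
```

The point of the matrix is that the required clinical evidence scales with the category, so where a product lands in this grid drives the evaluation burden discussed below.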

Section 8.5 of the Draft Guidance includes a chart summarizing the clinical evidence and expectations by SaMD category.  Types of evidence required include analytical validity (i.e., the technical performance related to accuracy, reliability, and reproducibility), scientific validity (i.e., the association of the SaMD output with a clinical condition or physiological state), and clinical performance (i.e., the ability of a SaMD to yield a clinically meaningful output associated with the target use of the SaMD output in the healthcare situation).  Also, the higher the risk level and category of the SaMD, the greater the importance of independent review of the evidence.

The Draft Guidance recognizes the need to incorporate continuous clinical evaluation into the lifecycle of all software devices, regardless of clinical significance.  As such, depending on post-marketing data, including real-world evidence, a SaMD may be re-categorized.

*             *             *

Overall, the Draft Guidance is a significant step in establishing common clinical evaluation principles for demonstrating the safety, effectiveness, and performance of SaMD.  Nonetheless, some gaps remain.  For example, while the Draft Guidance acknowledges the value of collecting real-world evidence, it does not clearly articulate the circumstances in which such evidence can replace a premarket clinical study.  Further, it is difficult to discern exactly what data are required for clinical evaluation.  Lastly, FDA has not explained how the Draft Guidance fits in with IMDRF’s other publications or whether IMDRF’s related publications will be incorporated by reference into the Draft Guidance.  The final version of the Draft Guidance may shed light on some of these issues.

In early August 2016, the US Food and Drug Administration’s (FDA or Agency) Center for Devices and Radiological Health (CDRH) issued a Draft Guidance for industry entitled Deciding When to Submit a 510(k) for a Software Change to an Existing Device (Draft Guidance). When finalized, the guidance will assist industry and CDRH in determining when a software (including firmware) change to a 510(k)-cleared device or a pre-amendments device subject to 510(k) (existing devices) may require a manufacturer to submit and obtain FDA clearance of a new premarket notification (510(k)).

Comments on the Draft Guidance are due to CDRH by November 7, 2016 (Docket No. FDA-2011-D-0453). In addition, CDRH held a webinar on August 25, 2016 to discuss the Draft Guidance.

FDA also announced a second draft guidance to industry on Deciding When to Submit a 510(k) for a Change to an Existing Device, which would supersede FDA’s 1997 guidance of the same name when finalized. This new draft guidance addresses non-software modifications.

Continue Reading Time for a Reboot? FDA Issues Draft Guidance on When to Submit a 510(k) for a Software Change to an Existing Device

On August 25, 2016, investment firm Muddy Waters Capital issued a report claiming that St. Jude Medical’s implantable cardiac devices are susceptible to cybersecurity attacks, allegedly putting more than 260,000 individuals in the US at risk.  St. Jude strongly rejected the report and disputed the alleged security risks of its devices.

The report claims that MedSec Holdings Ltd., a cybersecurity firm, was able to demonstrate two types of cyberattacks on St. Jude’s implantable cardiac devices. The first type of attack — a “crash” attack — enables a hacker to remotely disable cardiac devices, and in some cases, cause the cardiac device to pace at a dangerous rate.  The second type of attack — a battery drain attack — remotely runs cardiac device batteries down to 3% of capacity within a 24-hour period.  However, the report concludes that patients’ personal health information appears to be safe as the report states that patient data is encrypted.

The report argues that the cybersecurity risks of the devices stem from security deficiencies in accessories to the implantable devices, including devices located in physician offices that display data from the implanted devices, the network that manages and transmits data, and the at-home device that communicates with the implanted device via radio frequency within a 50-foot range.  Some of the alleged deficiencies would require attackers to have access to device accessory hardware or to be within 50 feet of the target(s).

Continue Reading A New Kind of Heart Attack: Allegations of Cybersecurity Risks in Cardiac Pacemakers