Join us on Tuesday, April 18, 2017 from 12:00–1:00 pm ET for a webinar that will address, among other topics, the following issues:

  • Overview of EU security rules
  • Securing the IoT
  • Product liability and other potential claims
  • Health care reimbursement and fraud issues
  • Cyber liability insurance

Click here to register.

1 hour CA and NY MCLE credit is pending. CLE credit for other jurisdictions is also pending.

The European Commission has published a report on the cost-effectiveness of standards-driven eHealth interoperability, i.e., the exchange of data between IT systems. This is one of a number of parallel initiatives from the Commission to advance eHealth interoperability, such as the EURO-CAS project launched in January this year, and is an essential part of the EU Digital Agenda.

The ultimate goal of the Commission’s efforts on eStandards for eHealth interoperability is to join up with healthcare stakeholders in Europe, and globally, to build consensus on eHealth standards, accelerate knowledge-sharing and promote wider adoption of standards.

The eStandards project is working to finalize a roadmap and associated evidence base, a white paper on the need for formal standards, and two guidelines addressing how to work with: (a) clinical content in profiles, and (b) competing standards in large-scale eHealth deployments. An initial roadmap has already been prepared. The final roadmap aims to describe the actions to be taken by standards development and profiling organizations (SDOs), policymakers in eHealth, and national competence centers, to ensure high availability and use of general and personal health information at the point of care, as well as for biomedical, clinical, public health, and health policy research.

The objective of this discrete cost-effectiveness study is to support the preparation of the final roadmap. The study contacted three categories of stakeholders: (i) Centers of Competence; (ii) Vendors (mostly small and medium-sized companies) on the European market; and (iii) Standards Organizations (mostly international). It has shown that stakeholders use the same tools in different projects across Europe, which should facilitate communication of best practices between them.

Its main findings are that:

  • All stakeholders consider that using standards and standards-driven tools contributes to better quality products.
  • Vendors and Centers of Competence derive the same benefits from the efficiency of the project (e.g., the continuous improvement of the specifications, and their effectiveness).
  • In terms of economic results, the study shows clearly that using and reusing existing tools and content saves effort and time, as well as money. It standardizes methods of working and increases the professionalism of the project team. However, due to the complexity of the eHealth domain, training is one of the major challenges for increasing the adoption of profiles and standards.
  • The study also indicates that standards are available, but the challenge is their adoption.

The study proposes a few practical recommendations for promoting the use of the standards-driven tools:

  1. Develop a strategy to communicate and disseminate the use of standards-driven tools, showing evidence of their positive impact in the development of projects and products;
  2. Develop simple indicators and/or refine the indicators used in this study in order to quantify the progress of adoption of standards-driven tools;
  3. Identify the weaknesses and limitations associated with deploying standards and tools;
  4. Develop conformity assessments and testing platforms for better adoption of the standards.

These initiatives complement the new guidance published on 23 March by the Commission for digital public services in its new European Interoperability Framework, which is meant to help European public administrations to coordinate their digitalization efforts when delivering public services.

Last week, the New York Office of the Attorney General (“OAG”) announced settlements with three mobile health application developers to resolve allegations that the companies made misleading claims and engaged in “irresponsible privacy practices.” The three companies that entered into settlements are:

  • Cardiio, a U.S.-based company that sells Cardiio, an app that claims to measure heart rate;
  • Runtastic, an Austria-based company that sells Runtastic, an app that purports to measure heart rate and cardiovascular performance under stress (downloaded approximately 1 million times); and
  • Matis, an Israel-based company that sells My Baby’s Beat, an app which Matis previously claimed could turn any smartphone into a fetal heart monitor, without FDA approval for such use.

With respect to Cardiio (settlement) and Runtastic (settlement), OAG alleged that both companies failed to test the accuracy of their apps under the conditions for which the apps were marketed (e.g., failed to test the product on subjects who had engaged in vigorous exercise, despite marketing the app for that purpose). In addition, the OAG alleged that both companies’ apps claimed to accurately measure heart rate after vigorous exercise while using only a smartphone camera and sensors. OAG also alleged that Cardiio’s marketing practices included false endorsements. For example, Cardiio was charged with making claims that “misleadingly implied that the app was endorsed by MIT,” when Cardiio’s technology was based only on technology licensed from MIT and originally developed at the MIT Media Lab.

With respect to Matis (settlement), OAG alleged that the company deceived customers into using the My Baby’s Beat instead of a fetal heart monitor or Doppler, even though the app was not FDA-approved for such use and the company had “never conducted … a comparison to a fetal heart monitor, Doppler, or any other device that had been scientifically proven to amplify the sound of a fetal heartbeat.”

In each settlement agreement, OAG cites various claims made by the companies on the App or Google Play Stores (including product reviews by consumers), company websites, and other promotional materials. The OAG asserted that the “net impression” conveyed to consumers about such apps by these claims was misleading and unsubstantiated. In addition, OAG alleged that each company failed to obtain FDA approval for their apps and noted in the settlements that FDA generally regulates cardiac monitors as Class II devices under 21 C.F.R. § 870.2300 and fetal cardiac monitors as Class II devices under 21 C.F.R. § 884.2600.

Under the settlements, Cardiio and Runtastic each paid $5,000 in civil penalties, and Matis paid $20,000. Further, each company is required to take the following corrective actions:

  1. Amend and correct the deceptive statements made about their apps to make them non-misleading;
  2. Provide additional information about the testing conducted on their apps (e.g. substantiation);
  3. Post clear and prominent disclaimers informing consumers that their apps are not medical devices, are not for medical use, and are not approved or cleared by the FDA; and
  4. Modify their privacy policies to better protect consumers.

With respect to privacy, the companies must now require affirmative consent to their privacy policies for these apps and disclose that they collect and share information that may be personally identifying. This includes users’ GPS location, unique device identifier, and “de-identified” data that third parties may be able to use to re-identify specific users.

In addition, if the companies make any “material change” to their claims concerning the functionality of their apps, the companies must: (1) perform testing to substantiate any such claims; (2) conduct such testing using researchers qualified by training and experience to conduct such testing; and (3) secure and preserve all data, analyses, and documents regarding such testing, and make them available to the OAG upon request.

The OAG explained that the settlements follow a year-long investigation of mobile health applications, which include “more than 165,000 apps that provide general medical advice and education, allow consumers to track their fitness or symptoms based on self-reported data, and promote healthy behavior and wellness.” Of these apps, the OAG appears to be focusing its enforcement on a “narrower subset of apps [that] claim to measure vital signs and other key health indicators using only a smartphone [camera and sensors, without any external device], which can be harmful to consumers if they provide inaccurate or misleading results.”

Referred to as “Health Measurement Apps,” the OAG expressed concern that such apps could “provide false reassurance that a consumer is healthy, which might cause [them] to forgo necessary medical treatment and thereby jeopardize [their] health.” Conversely, Health Measurement Apps “can incorrectly indicate a medical issue, causing a consumer to unnecessarily seek medical treatment – sometimes from a hospital emergency room.”

The OAG’s risk-based approach appears to be consistent with FDA’s risk-based approach for regulating general wellness products, which Congress expressly excluded from the definition of medical “device” in Section 3060 of the recently enacted 21st Century Cures Act (read our Advisory here).

Ultimately, these settlements demonstrate that in addition to traditional regulators such as the FTC and FDA, which have taken a number of recent enforcement actions against mHealth app developers (as we’ve discussed here, here, and here), state consumer protection laws may also be implicated by such products. Accordingly, companies should continue to establish, implement, and execute robust quality or medical/clinical programs to support any research needed to substantiate claims made about mHealth products. And, more importantly, digital health companies should create strong promotional review committees that consist of legal, medical, and regulatory professionals who can properly vet any advertising or promotional claims to mitigate potentially false, misleading, or deceptive claims that could trigger enforcement by regulatory agencies and prosecutors.

We have previously published a post on the potential uses of mobile apps in clinical trials, and the accompanying advantages and limitations. Recent research published in The New England Journal of Medicine (NEJM) confirms the increasing number of innovative studies being conducted through the internet, and discusses the bioethical considerations and technical complexities arising from this use.

Apps used in clinical research

The vast majority of the population, including patients and healthcare professionals, have mobile phones. They are using them in a growing number of ways, and increasingly expect the organizations they interact with to do the same. Clinical research is no exception. As we discussed previously, smartphones are becoming increasingly important as a means of facilitating patient recruitment, reducing costs, disseminating and collecting a wide range of health data, and improving the informed consent process.

A major development in relation to app-based studies occurred in early 2015 with the launch of Apple’s ResearchKit, an open-source software toolkit for the iOS platform that can be used to build apps for smartphone-based medical research. Since then, similar toolkits, such as ResearchStack, have been launched to facilitate app development on the Android operating system.

Several Institutional Review Board-approved study apps were launched shortly after the creation of ResearchKit, including MyHeart Counts (cardiovascular disease), mPower (Parkinson’s disease), GlucoSuccess (type 2 diabetes), Asthma Health (asthma) and Share the Journey (breast cancer).

The NEJM publication refers to data from MyHeart Counts to emphasize particular features of app-based studies. The MyHeart Counts study enrolled more than 10,000 participants in the first 24 hours: a recruitment figure that many traditional study sponsors would regard with envy. While this figure appears, at least in part, to result from expanded access to would-be participants who are not within easy reach of a study site, it may carry with it a degree of selection bias. For example, the consenting study population in MyHeart Counts was predominantly young (median age, 36) and male (82 per cent), reflecting the uneven distribution of smartphone usage and familiarity across the population. The MyHeart Counts completer population (i.e. those who completed a 6-minute “walk test” at the end of seven days) represented only 10 per cent of participants who provided consent. The reasons for low completion rates in app-based studies are not yet well understood, but may relate to participants’ commitment to partake in and contribute to the study in the absence of face-to-face interactions.

Regulatory and legal challenges for digital consent

Conduct of clinical trials is guided by good clinical practice (GCP) principles, which seek to ensure that:

  • trials are ethically conducted to protect the dignity, privacy and safety of trial subjects; and
  • there exists an adequate procedure to ensure the quality and integrity of the data generated from the trial.

Informed consent is one of the most important ethical principles, and an essential condition both for therapy and research. It is a voluntary agreement to participate in research, but is more than a form that is signed; it is a process during which the subject acquires an understanding of the research and its risks.

The challenges of conducting clinical research using digital technology include, to name a few:

  1. how to ensure that the language used in the informed consent is engaging and user-friendly to promote greater understanding of the nature of the study and the risks relating to participation in the trial;
  2. how to assess capacity and understanding of trial subjects remotely;
  3. how to assess voluntary choice without the benefit of body language and tone; and
  4. how to verify the identity of the person consenting (although this risk may be mitigated in the future through biometric or identity verification tools).

Moreover, there are practical challenges with using these technologies, for example, in relation to the assessment of patient eligibility, and the monitoring of trial subjects to ensure that clinically meaningful data of an acceptable quality are collected and collated during the trial to comply with GCP principles and support regulatory submissions.

Because of some of these challenges, the NEJM publication suggests that app-based research may be most suitable for low-risk studies. However, it is likely that these risks will be mitigated in the future as the technology develops and researchers and patients become more familiar with its use.

2017 has started with a bang on the data protection front. There have been several developments these past few months, ranging from updates on the new EU General Data Protection Regulation (“GDPR”), coming into force in May 2018, to the establishment of a Swiss-US Privacy Shield. In relation to mHealth specifically, the Code of Conduct for mHealth is still with the Article 29 Working Party (the EU data protection representative body, or “WP29”) – such codes of conduct have an elevated status under the GDPR and are likely to play a more significant role going forwards. We provide a snapshot of the latest developments below.

Firstly, there have been several steps forward in relation to the GDPR. The UK data protection regulator, the “ICO”, has been consistent in its support for preparing for the GDPR in the UK following last year’s Brexit vote. In January, the ICO provided an update on the GDPR guidance that it will publish for organizations in 2017, and the WP29 adopted an action plan and published guidance on three key areas of the GDPR. MP Matt Hancock (Minister of State for Digital and Culture with responsibility for data protection) also suggested in December and February that a radical departure from the GDPR provisions in the UK after Brexit is unlikely, despite being careful not to give away the intentions of the UK government.

On the electronic communications front, the European Commission published a draft E-Privacy Regulation in January, which is currently being assessed by the WP29, European Parliament and Council. The new Regulation is designed as an update to the E-Privacy Directive, and will sit alongside the GDPR to govern the protection of personal data in relation to the wide area of electronic communications, whether in the healthcare sector or otherwise (such as those via WhatsApp, Skype, Gmail and Facebook Messenger).

In relation to global personal data transfer mechanisms, in January the Federal Council of Switzerland announced that there would be a new framework for transferring personal data (including health data) from Switzerland to the US: the Swiss-US Privacy Shield. As with the EU-US Privacy Shield, the Swiss-US Privacy Shield has been agreed as a replacement of the Swiss-US Safe Harbor framework. The establishment of the new Swiss-US Privacy Shield means that Switzerland will apply similar standards to those of the EU for transfers of personal data to the US. Organizations can sign up to the Swiss-US Privacy Shield with the US Department of Commerce from 12 April 2017. If organizations have already self-certified to the EU-US Privacy Shield, they will be able to add their certification to the Swiss-US Privacy Shield on the Privacy Shield website from 12 April 2017.

These developments need to be taken into consideration by organizations that are creating and implementing digital health products, such as mHealth apps, which operate in a space that can bring up several regulatory questions. Further information can be found in our recent advisory.

The National Institute for Health and Care Excellence (NICE) provides guidance to the NHS in England on the clinical and cost effectiveness of selected new and established technologies through its healthcare technology assessment (HTA) program. Using the experience it has gained from this program, NICE intends to develop a system for evaluating digital apps. The pilot phase for this project was set in place in November 2016, and, from March 2017, NICE will publish non-guidance briefings on mobile technology health apps, to be known as “Health App Briefings”. These briefings will set out the evidence for an app, but will not provide a recommendation on its use; this will remain subject to the judgment of the treating physician.

The existing HTA program consists of an initial scoping process, during which NICE defines the specific questions that the HTA will address. NICE then conducts an assessment of the technology, in which an independent academic review group conducts a review of the quality, findings and implications of the available evidence for a technology, followed by an economic evaluation. Finally, an Appraisal Committee considers the report prepared by the academic review group and decides whether to recommend the technology for use in the NHS.

The new program builds on the current Paperless 2020 simplified app assessment process, which was recommended in the Accelerated Access Review Report discussed in a previous post. It has many parallels with the HTA program. In particular, it will be a four-stage process, comprising: (1) the app developer’s self-assessment against defined criteria; (2) a community evaluation involving crowd-sourced feedback from professionals, the public and local commissioners; (3) preparation of a benefit case; and (4) an independent impact evaluation, considering both efficacy and cost-effectiveness.

NICE is currently preparing five Health App Briefings, of which NICE’s Deputy Chief Executive and Director of Health and Social Care, Professor Gillian Leng, has confirmed one will relate to Sleepio, an app shown in placebo-controlled clinical trials to improve sleep through a virtual course of cognitive behavioral therapy.

We understand that future Health App Briefings will also focus on digital tools with applications in mental health and chronic conditions, consistent with NHS England’s plans to improve its mental healthcare provision and, in particular, access to tailored care.

For apps that have evidence to support their use and the claims made about them, the new Innovation and Technology Tariff, announced by the Chief Executive of NHS England in June 2016, could provide a reimbursement route for the app. This will provide a national route to market for a small number of technologies, and will incentivize providers to use digital products with proven health outcomes and economic benefits.

We previously described some of the ways in which life sciences companies are exploring the potential of IBM’s supercomputer, ‘Watson®’, to assist with product development and disease treatment.  Such uses raise important questions about how Watson and other software are treated under medical device regulations.  These questions are particularly important as tech companies find themselves wading into the healthcare arena and may be unaware of the heavily regulated industry they are entering.

The regulation of medical software has been controversial and subject to the vagaries of guidelines and subjective interpretations by the regulatory authorities. We consider below the regulatory minefield and the circumstances in which software is regulated as a medical device in the EU and U.S.

EU

How is software regulated?

In the EU, a medical device means any instrument or other apparatus, including software, intended by the manufacturer to be used for human beings for the purpose of, among other things, diagnosis, prevention, monitoring, treatment or alleviation of disease. There is no general exclusion for software, and software may be regulated as a medical device if it has a medical purpose, meaning it is capable of appreciably restoring, correcting or modifying physiological functions in human beings. A case-by-case assessment is needed, taking account of the product characteristics, mode of use and claims made by the manufacturer. However, the assessment is by no means straightforward for software: unlike with the classification of general medical devices, it is not immediately apparent how these parameters apply, given that software does not act directly on the human body to restore, correct or modify bodily functions.

As a result, software used in a healthcare setting is not necessarily a medical device. The issue is whether the software can be used as a tool for treatment, prevention or diagnosis of a disease or condition. For example, software that performs calculations relating to anatomical sites of the body, and image-enhancing software intended for diagnostic purposes, are generally viewed as software medical devices because they are used as tools, over and above the healthcare professional’s clinical judgment, to assist clinical diagnosis and treatment. By contrast, software used merely to convey or review patient data is generally not a medical device.

What about Watson?

The main benefit of IBM’s cognitive computing software is its ability to analyse large amounts of data to develop knowledge about a disease or condition, rather than treatment options for an individual patient. Currently, its uses are largely limited to research and development. On the basis of these uses, the software may not be considered as having the medical purpose necessary for it to be classified as a medical device.

However, uses of the software that aim to enhance clinical diagnosis or treatment of a condition may potentially alter the regulatory status, especially if the function of the software goes beyond data capture and communication. Similarly, some of the new partnerships recently announced, described in our previous post, are aimed at developing personalised management solutions, or mobile coaching systems for patients. These may be viewed as having a medical purpose in view of the health-related information they acquire to provide informed feedback to the patient on self-help, or decision-making relating to the patient’s treatment plan. As the uses for Watson increase and become more involved in treatment decisions, the likelihood of such a change in regulatory status also increases.

Will there be any change under the new Medical Device Regulations?

The proposed new EU medical device Regulations, on which the EU legislature has reached broad agreement but which have not yet been adopted, contain additional provisions that specifically address software medical devices. Of particular relevance, software with a medical purpose of “prediction and prognosis” will be considered as coming within the scope of the Regulations. This means that software and apps that were previously excluded from regulation may in the future be “up-classified” and become subject to regulation as medical devices. Through this and a number of other initiatives in the EU, the EU institutions recognize the importance of mHealth in the healthcare setting, and are seeking to ensure it is properly regulated as its use increases.

U.S.

How is software regulated?

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA considers a medical device to be an instrument or other apparatus, component, or accessory that is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease in man or other animals, or that is intended to affect the structure or function of any man or other animal but which is not dependent on being metabolized (i.e., a drug) for achievement of that purpose.   FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices.

One type of software FDA has not issued guidance on is Clinical Decision Support Software (CDSS). CDSS is software that utilizes patient information to assist providers in making diagnostic or treatment decisions. Until recently, CDSS was approached in a similar fashion to FDA’s framework for mobile apps. In other words, CDSS was viewed as existing on a continuum from being a Class II regulated medical device, to being subject to FDA’s enforcement discretion, to not being considered a medical device at all. On December 13, 2016, however, the 21st Century Cures Act was signed into law, clarifying the scope of FDA’s regulatory jurisdiction over stand-alone software products used in healthcare.

The 21st Century Cures Act contains a provision – Section 3060 – that explicitly exempts certain types of software from the definition of a medical device. As relevant for CDSS, the law excludes from the definition of a “device” software intended for the following functions (unless the software is intended to “acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system”):

  1. Displaying, analyzing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines);
  2. Supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
  3. Enabling health care professionals to independently review the basis for such recommendations so that the software is not primarily relied upon to make a clinical diagnosis or treatment decision regarding an individual patient.

Thus, the Act generally excludes most CDSS from FDA jurisdiction. However, it is worth noting that FDA may bring CDSS back under its jurisdiction if it makes certain findings regarding: (1) the likelihood and severity of patient harm if the software does not perform as intended, (2) the extent to which the software is intended to support the clinical judgment of a health care professional, (3) whether there is a reasonable opportunity for a health care professional to review the basis of the information or treatment recommendation, and (4) the intended user and use environment.

What About Watson?

Based on this regulatory framework, IBM’s Watson would not generally be regulated as a medical device if simply used as a tool to assist physician review of medical data. In many uses, Watson is still dependent on human intervention and therefore does not make independent patient-specific diagnoses or treatment decisions. Importantly, statements about Watson also show that it is intended to be used simply as a tool by physicians and it is not intended that physicians rely primarily on Watson’s recommendations.

As such, in many applications, Watson is likely to be the kind of CDSS statutorily excluded from the definition of a medical device. However, as Watson and other forms of artificial intelligence advance and become capable of making or altering medical diagnoses or treatment decisions with little input or oversight from physicians, or transparency as to underlying assumptions and algorithms, these technologies will fall outside of the exclusion. As the use of such forms of artificial intelligence becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or if other agencies — such as the U.S. Federal Trade Commission — step up their scrutiny of such systems. Additionally, state laws may be implicated with regard to how such technology is licensed or regulated under state public health, consumer protection, and medical practice licensure requirements.

Interoperability has been identified as one of the greatest challenges in healthcare IT. It is defined as the ability of organizations to share information and knowledge through the exchange of data between their respective IT systems, enabling fruitful collaboration between different healthcare environments by electronic means.

With this in mind, the eHealth Interoperability Conformity Assessment for Europe (EURO-CAS) project launched on 26 January 2017. With a budget of €1 million (approximately $1.1m), it is one of the projects being funded under the European Union’s (EU) Horizon 2020 research and innovation program.  The launch of this project shows the EU’s recognition that eHealth has become increasingly important within healthcare, and that the use of such technologies needs to be streamlined.

The aims of the project are two-fold:

  1. to develop models, tools and processes to enable an assessment of the conformity of eHealth products with international, regional and national standards; and
  2. to provide a method for manufacturers of conforming technology to demonstrate that conformity to the public.

As set out in previous posts, a key concern with the proliferation of apps and eHealth products is how to demonstrate to patients and payers that they are safe, effective, and protect patients’ privacy. The hope is that the project will address these concerns and promote the adoption and take-up of eHealth products, and the use of the various standards that are being developed.

These aims will be achieved through the development of the ‘CAS’ scheme by a consortium led by IHE-Europe, and consisting of EU member state representatives, experts and international associations. Six ‘work packages’ have been set up to deliver the project, each focusing on discrete aspects of the project:

Fig 1: The six EURO-CAS work packages (source: https://www.euro-cas.eu/work-packages)

The project’s key deliverables (and corresponding timelines) have also been outlined, with the final scheme to be presented to the public in November 2018.

The overarching plan behind EURO-CAS is to pave the way for more eHealth interoperability in the EU. The project will build on the findings of a series of EU-funded projects concerning eHealth interoperability over the past years. The scheme also aims for consistency with the ‘Refined eHealth European Interoperability Framework’, which identified the importance of interoperability for eHealth to be truly useful in healthcare, and was endorsed by representatives from all EU member states in 2015.

EURO-CAS states that it is “committed to transparency and openness”. Interested parties are invited to partake in project events, to engage through the project’s Twitter channel (@EURO_CAS), or LinkedIn group, or to provide feedback on the deliverables that will be submitted for public consultation in due course.

Updating our earlier blog post, ‘Next Up: European Consultation on the Safety of Apps’: that consultation has now closed, and the Summary Report was published on November 14, 2016.

As previously explained, the consultation is one of a series of consultations and draft guidance through which the European Commission is seeking to develop appropriate guidance on the development of mHealth apps. The objectives of the consultation were to gather input from the public, industry and public authorities on their experience relating to the safety of apps and other non-embedded software, with a view to better understanding the risks they may pose to users and how those risks can be addressed.

The consultation returned 78 responses from stakeholders both inside and outside the EU. The majority of respondents were members of the public (37) and the remainder comprised trade associations (12), businesses (10), public authorities (6), professional associations (5), academia (5) and civil society (3).

Nearly half of the respondents (33) identified health and wellbeing apps as the main category of apps that could pose risks to users’ safety. In line with previous comments from the public, the most common concern raised by respondents related to data protection (including the risk that apps could access or collect users’ sensitive data without their consent, see: Attention App Developers… Final Draft of Code of Conduct on Privacy for mHealth Apps) (17), followed by cyber-attacks (including for the purposes of data collection, financial operations or controlling another device) (12). The types of risks most frequently cited by respondents included economic damage (60) and non-material damage (pain and suffering) (55).

The Commission is analyzing the replies to the consultation, and a full report will be published in due course. The Commission has stated that while the results do not point to the need for a new Commission initiative, it will consider the responses in its ongoing review of the regulatory frameworks governing apps, medical devices, and product safety and liability.

Published in Privacy & Cybersecurity Law Report’s April 2017 issue.

In the closing days of last year, the FDA issued its final guidance on postmarket medical device cybersecurity. This guidance is a corollary to the previously issued final guidance on premarket cybersecurity issues, and the pre- and postmarket pieces should be read, and fit, together. In both cases, the FDA sets out a comprehensive, lifecycle approach to managing cyber risk. Under this guidance, the FDA is asking companies to operationalize a structured way to think through and act on these product, hardware, software, and network issues. Last year, we wrote about five things companies can do now to get ahead of the curve on the premarket guidance, and they still apply.

The final postmarket guidance follows much of the 2016 draft guidance, with a few important changes. We wrote a detailed piece on the 2016 draft guidance. The two big changes are: a shift in focus from the possible cyber impact on the product (what was called the “essential clinical performance” of the device) to the health impact on the patient if a vulnerability were exploited (what is now called “patient harm”); and a fleshing-out of the recommended vulnerability disclosure process and time frames. Focusing on the possible impact to the patient seems like a good change. Cyber risk is a function of threat, vulnerability, and consequence, and with medical devices, the consequence surely revolves around the patient. It is the second change, around vulnerability disclosure, timing for disclosure, and required information sharing with an industry-wide Information Sharing and Analysis Organization (ISAO), that will take real thought, work, and finesse.

Under the final guidance, if there is an “Uncontrolled Risk” given the exploitability of the vulnerability and the severity of patient harm if exploited, that risk should be remediated “as quickly as possible.” As for notice to the FDA and customers, manufacturers must report these vulnerabilities to the FDA pursuant to part 806 (which requires manufacturers to report certain device corrections and removals), unless the manufacturer meets four specific requirements: (1) there are no known serious adverse events or deaths; (2) within 30 days of learning of the vulnerability, the manufacturer communicates with its customers and user community, describing at a minimum the vulnerability, an impact assessment, the efforts to address the risk of patient harm, and any compensating controls or strategies to apply, and commits to communicating the availability of a future fix; (3) within 60 days of learning of the vulnerability, the manufacturer fixes the vulnerability, validates the change, and distributes the fix such that the risk is reduced to an acceptable level; and (4) the manufacturer participates in an ISAO and provides the ISAO with any customer communications upon notification of its customers. If you meet these obligations and timelines, you do not have to report under part 806; if you do not, you must report and are then subject to the usual 806 reporting.
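The four conditions operate as an all-or-nothing checklist against fixed clocks: missing any one of them defeats the exception. Purely as an illustration of that logic (this is a sketch with invented field names, not legal advice or an FDA-endorsed tool), the test might be modeled as:

```python
from dataclasses import dataclass


@dataclass
class VulnerabilityResponse:
    """Illustrative record of a manufacturer's handling of an uncontrolled-risk vulnerability.

    All field names are hypothetical, chosen for this example only.
    """
    serious_adverse_events_or_deaths: bool  # any known serious adverse events or deaths?
    days_to_customer_notice: int            # days from learning of the vulnerability to customer notice
    days_to_validated_fix: int              # days to a validated, distributed fix
    shared_with_isao: bool                  # participated in an ISAO and shared customer communications


def qualifies_for_806_exception(r: VulnerabilityResponse) -> bool:
    """Rough model of the four conditions in the final postmarket guidance.

    All four must hold; otherwise the correction is reportable under part 806.
    """
    return (
        not r.serious_adverse_events_or_deaths  # (1) no known serious adverse events or deaths
        and r.days_to_customer_notice <= 30     # (2) customer/user communication within 30 days
        and r.days_to_validated_fix <= 60       # (3) validated, distributed fix within 60 days
        and r.shared_with_isao                  # (4) ISAO participation and sharing
    )
```

For example, a manufacturer that notifies customers on day 25 but cannot ship a validated fix until day 75 fails condition (3) and would fall back into ordinary part 806 reporting, however well it performed on the other three conditions.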

So, to avoid part 806 reporting, you want to meet the four conditions. But they are more complex than one might think at first glance. As a general matter, information technology companies do not like to notify users of a vulnerability until there is a fix. A known vulnerability without a fix can easily be (and often is) exploited by adversaries, leaving customers less secure. Therefore, companies generally announce vulnerabilities and fixes together, so that customers can protect themselves before attackers can exploit the flaw. Only on rare occasions, such as when there is a known active exploit, would a company notify customers before it has a fix. The FDA and the medical device industry seem to be searching for the appropriate approach for medical devices, where there is potential for non-trivial patient harm, and an existing regulatory structure and overall public health mission. The issue of vulnerability disclosure is complex and subject to much debate (the U.S. Commerce Department just published the results of a year-long study, concluding there is still much work to be done to get it right). Similarly, the sharing of cyber threat and vulnerability information with others in industry, and with the government, is still an area of much discussion. A year ago, Congress passed an information-sharing bill to help reduce potential barriers to information sharing, including provisions for some amount of liability protection for sharing cyber threat and vulnerability information with others. Today, companies are still finding their way around the business and legal issues, even under the new legislation.

Therefore, to meet the 30- and 60-day notice requirements and the information-sharing requirement, medical device companies will have to craft their notices carefully: specific enough to satisfy the final guidance, but not so detailed that adversaries are alerted to the possibility of a vulnerability, can figure out what function, method, process, or technology is implicated, and can exploit it before a fix is developed, shared, and implemented. The same considerations hold for sharing vulnerability and notice information with the ISAO, whose members will include competitors, and whose information could (depending on the ISAO rules and information classification decisions) be further shared with government and security industry partners. Net-net, what is required is a clear understanding of the technical vulnerability, the possible consequences, the ability to fix, and an appreciation of the line between useful notification and usefulness for exploit. It may also be true that no fix can be had in 60 days, and that if many reported vulnerabilities are backed up in the queue, a company may fix the priority tickets first, so that the lowest-priority items take longer than 60 days to address as a matter of bandwidth and expertise. Consequently, over time, companies may face decisions about whether to try to meet the 806 exception conditions, or to file 806 notices with the FDA and deal with the potential implications. None of this is to say that the benefits of the 806 exception are not worth it, or are trivial; it just means that your approach has to be clueful and strategic.

One more issue, of course, continues to be quite important: the global rules must be rationalized. Medical device companies build once and sell globally, and the security, integrity, vulnerability, and disclosure rules and best practices have to work globally. As these new guidelines are rolled out, significant education globally will be critical.

Over time, and like most things in security, this ‘final guidance’ will most likely be a work in progress, as companies, the FDA, and regulators globally begin to deal with specific use cases that push the boundaries of what counts as a “Controlled Risk” versus an “Uncontrolled Risk,” and of which 30- and 60-day notifications, fixes, and ISAO information are required, helpful, or not helpful. As we always say, ‘security is a journey, not a destination’, and so too will the postmarket cyber guidance be.