Join us for a 90-minute webinar, hosted by AdvaMed, focusing on the new EU Medical Device Regulations (MDR/IVDR), which represent the single largest regulatory change in the EU in decades. This program will highlight what the regulatory changes are, how these changes will affect your business, and what you can do to better prepare.  In particular, the classification rules for software have changed, and new requirements are likely to apply to apps and mHealth technologies.

Agenda topics include:

  • What are the changes in the regulations?
  • How will these changes affect research and development, and what impact will there be on, among other things, mobile and telemedicine?
  • How will the new framework affect market access?

Click here to register.


In a recent article published in Intellectual Property & Technology Law Journal, and expanding on our previous post, we discuss the legal and regulatory implications of applying artificial intelligence (AI) to the EU and US healthcare and life sciences sectors.

AI software, particularly when it involves machine learning, is increasingly used within the healthcare and life science sectors. Its uses include drug discovery (e.g., software that examines biological data to identify potential drug candidates), diagnostics (e.g., an app that analyses real-time data to predict health issues), disease management (e.g., mobile-based coaching systems for pre- and post-operative care) and post-market analysis (e.g., adverse event data collection systems).

Given that the healthcare and life science sectors are highly regulated, the development and use of AI require careful scrutiny of applicable legal and regulatory obligations and of any ongoing policy developments. The article discusses how AI may contribute to the research and development of health products and to the care and treatment of patients, and examines the corresponding legal and regulatory issues surrounding these technological advances.

In Europe, depending on its functionality and intended purpose, software may fall within the definition of ‘medical device’ under the Medical Devices Directive. However, classification of software is fraught with practical challenges because, unlike classification of general medical devices, it is not immediately apparent how the legal parameters apply. The European Commission has published guidelines to interpret the Directive’s requirements, but these are not legally binding (although were recently endorsed by the Advocate General of the Court of Justice of the European Union, as discussed in our advisory). The new EU Regulations adopted on April 5, 2017, which come into effect on May 26, 2020, will widen the scope of the regulatory regime considerably, and will require all operators to re-assess product classification well in advance of this deadline.

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices. However, a variety of legal, regulatory, and compliance issues may arise for AI developers based on the intended use of the product. Once a product is classified as a medical device, its class will define the applicable regulatory requirements, including the type of premarket notification/application that is required for FDA clearance or approval. As the use of AI becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or whether other agencies, such as the U.S. Federal Trade Commission, step up their scrutiny of such systems.

Given AI's capability to capture various forms of personal data, data protection and cybersecurity are further important considerations, and will be critical to the sustainability of the technology. In the EU, these rules are soon to be overhauled by the General Data Protection Regulation, which applies from May 25, 2018. In the US, regardless of the product's classification, AI developers will need to assess whether the HIPAA rules apply, as well as any design controls and post-manufacture auditing obligations that may apply in the cybersecurity space.

The European Commission has made clear its intention to harness the potential that digital innovation can offer, and in May 2015 announced its Digital Single Market strategy. A key part of this strategy is the digital transformation of health and care in order to improve healthcare for EU citizens. On 20 July 2017, the European Commission launched a public consultation to assess how digital innovation can be used to enhance health and care in Europe. This consultation follows on from the Roadmap published last month, with the aim of developing a new policy Communication by the end of 2017.

The consultation focuses on collecting information on three main areas:

(i) cross-border access to and management of personal health data, through electronic medical records and e-prescriptions;

(ii) sharing of data and expertise to advance research, assist with personalized healthcare and anticipate epidemics; and,

(iii) measures for widespread uptake of digital innovation and interaction between patients and healthcare providers.

The questions are very much at a fact-finding level, asking for respondents’ views on a wide range of issues, particularly on data protection, which, as recent cyber-attacks on the UK NHS and sanctions imposed by the UK Information Commissioner have shown, is an important factor in a digital market. However, although the Roadmap sets out some intended outcomes that are in line with the three areas of the consultation, the ultimate goal of “widespread adoption of digital technology to make borderless European health and care a reality” is unlikely to be achieved by the end of the year.

The Commission is inviting citizens, patient organizations, healthcare professionals, public authorities and any other users of digital health tools to share their views until 12 October 2017.

We have previously reported on a number of EU projects designed to promote eHealth interoperability (the ability of EU Member States to share healthcare information between their respective IT systems), including the Commission’s eHealth standards project, which aims to build consensus on the standards to be applied to eHealth products, and EURO-CAS, which aims to develop tools to assess the conformity of eHealth products with those standards.

In parallel with those projects, the VALUeHEALTH project, which ran from April 2015 to June 2017 as part of the Commission’s broader research and innovation program, Horizon 2020, focused on developing a business plan for the implementation and funding of eHealth services across the EU. Trans-European digital services are currently funded by the Connecting Europe Facility (“CEF”), which has committed to investing EUR 1.04 billion for this purpose between 2014 and 2020. VALUeHEALTH was concerned with ensuring the sustainable interoperability of European eHealth services beyond 2020.

To this end, the VALUeHEALTH project had five objectives:

Objective 1: Develop a set of prioritized use cases

The VALUeHEALTH project prioritized “use cases” for eHealth services on the basis of a number of criteria, including their potential positive impact on patients, improved health outcomes, and reduced healthcare costs. Using these criteria, two use cases were prioritized:

  • Safe prescribing: Ensuring that existing algorithms to support prescribing decisions are able to access critical safety information (e.g., other current medication, allergies and intolerances, clinical conditions, significant family history, relevant bio-markers).
  • Individual disease management: Condition-specific information-sharing between actors involved in the healthcare, social care and self-care of a patient’s portfolio of long-term conditions.

These use cases were used to inform the analysis underlying the business plan under the remaining objectives.
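To make the “safe prescribing” use case more concrete, the following is a minimal sketch of the kind of check such a decision-support algorithm might perform against critical safety information. It is purely illustrative: the data structures, drug names and rule tables are hypothetical and are not drawn from any VALUeHEALTH deliverable.

```python
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    """Illustrative patient record: the safety information a prescribing
    check would need (current medication, allergies, clinical conditions)."""
    current_medication: set = field(default_factory=set)
    allergies: set = field(default_factory=set)
    conditions: set = field(default_factory=set)


# Hypothetical rule tables; interaction pairs are stored in sorted order.
KNOWN_INTERACTIONS = {("aspirin", "warfarin")}
CONTRAINDICATED_FOR = {"ibuprofen": {"peptic ulcer"}}


def prescribing_alerts(patient: PatientRecord, proposed_drug: str) -> list:
    """Return safety alerts raised by a proposed prescription."""
    alerts = []
    # Allergy check against the patient's recorded allergies.
    if proposed_drug in patient.allergies:
        alerts.append(f"allergy: patient is allergic to {proposed_drug}")
    # Interaction check against each currently prescribed medicine.
    for current in sorted(patient.current_medication):
        pair = tuple(sorted((current, proposed_drug)))
        if pair in KNOWN_INTERACTIONS:
            alerts.append(f"interaction: {proposed_drug} with {current}")
    # Contraindication check against the patient's clinical conditions.
    for condition in CONTRAINDICATED_FOR.get(proposed_drug, set()):
        if condition in patient.conditions:
            alerts.append(f"contraindication: {proposed_drug} with {condition}")
    return alerts
```

In a deployed cross-border service, the patient record and the rule tables would of course be populated from interoperable electronic health records and standard clinical terminologies, which is precisely the infrastructure whose funding and sustainability the VALUeHEALTH business plan addresses.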

Objective 2: Design an overarching business model framework

The project sought to identify the expected benefits of interoperability for various stakeholders — in particular, those whose involvement was necessary to sustain interoperability, and those who most needed to realise value from interoperable information. Further, it was intended to produce a cost-benefit analysis for stakeholders who would be required to drive investments. Finally, business modelling methodologies would be used to establish the value of eHealth interoperability and to determine how cost savings and growth in capacity could justify financial investment in eHealth services, with minimal dependence on public funding.

As a result of this work, VALUeHEALTH has established a Business Modelling Task Force, tasked with developing the value chains and value propositions described above. However, further details are not yet available on the project website.

Objective 3: Develop a scale-up roadmap

The VALUeHEALTH project identified high quality data capture as a necessary pre-condition for the scale-up of self-financed cross-border eHealth services. With this in mind, it aimed to examine the barriers to, and the conditions and incentives required for, wide-scale, high quality data capture, which could inform a scale-up strategy.

Barriers identified by the project were (i) the reliance on busy, often junior, clinicians to capture health information from patients, and (ii) the existence of reimbursement models that pay for activity rather than clinical outcomes. Incentives were needed to address these issues.

The Commission intends to use this information to scope the interoperability deployment roadmap and scale-up strategy, as well as its structure and costs. However, it appears that this exercise is ongoing.

Objective 4: Design an information communication technology and interoperability deployment roadmap

VALUeHEALTH has defined the interfaces, services and tools needed to deliver the prioritized use cases identified in Objective 1 and, from this, has derived a design and deployment roadmap for eHealth services in general. However, this is not yet publicly available.

There appears to be some overlap between the roadmaps envisaged by Objective 3 and Objective 4. From the available information, we understand that the scale-up roadmap described in Objective 3 is designed to address issues with data capture (i.e., the practical human barriers to ensuring that the data required for cross-border eHealth services is collected and entered into the system), whereas the ICT and interoperability roadmap described in Objective 4 is intended to address the technical requirements of the service.

Objective 5: Deliver a business plan and sustainability plan

The results of Objectives 1-4 have been used to produce a Business Plan and Strategy for future public-private investment in EU eHealth services. In particular, the plan provides guidance to the CEF on how to construct digital service infrastructure for health to ensure maximum value and sustainability beyond 2020. Again, this plan has not yet been published.

We have previously reported on the Accelerated Access Review (AAR), which made 18 recommendations to the UK government for speeding up patient access to new medical technologies. The overarching aim of the AAR was to make the UK a world-leader in healthcare innovation. The AAR report, which was published in October 2016, was particularly focused on digital technologies, and recognized that the current systems in place are not sufficiently flexible to realize the full potential of digital health.

To implement the recommendations of the AAR, the UK government announced last week that it is investing a total of £86 million in four projects aimed at encouraging small and medium sized enterprises (SMEs) to develop and test new products and technologies in the UK’s National Health Service (NHS).

One of the four projects to be funded by the new package is the ‘Digital Health Technology Catalyst’. The Catalyst will receive £35 million to help support innovators by match-funding the development of digital technologies for use by patients and the NHS. The government specifically highlighted digital technologies that help patients manage their conditions from home, or that aid the development of new medicines, as possible areas of development, and cited MyCOPD, an online system that helps people with chronic obstructive pulmonary disease better manage their condition, as a successful project to be replicated.

The announcement has been publicly welcomed by a number of industry representative groups, including the Association of British Healthcare Industries (ABHI), techUK, BioIndustry Association (BIA) and the British In Vitro Diagnostics Association (BIVDA).

On 28 June, the Advocate General of the Court of Justice of the European Union gave his opinion in the SNITEM and Philips France case against France. In this case, the Conseil d’Etat in France asked whether a particular software program intended to be used by doctors to support prescribing decisions falls within the definition of a medical device as provided by Directive 93/42/EEC (the Medical Devices Directive).

Definition of a medical device

As we have discussed previously in this blog, there is no general exclusion for software in the definition of a medical device provided by the Medical Devices Directive. Software may be regulated as a medical device if it has a medical purpose, meaning that it is capable of appreciably restoring, correcting or modifying physiological functions in human beings. The assessment is by no means straightforward for software because, unlike with general medical devices, it is not immediately apparent how these parameters apply to programs. The Commission MEDDEV guidance distinguishes between software specifically intended by the manufacturer to be used for one or more of the medical purposes set out in the definition of a medical device, and software for general purposes that, even when used in a healthcare setting, will not be considered a medical device.

Opinion of the Advocate General

The software at issue in this case, the Intellispace Critical Care and Anesthesia (ICCA) manufactured by Philips France, is designed to assist anesthesia and intensive care services by providing doctors with information to support their prescribing decisions. It provides information regarding possible contraindications, interactions with other medicines and excessive dosing. The ICCA has been CE marked as a medical device.

The dispute in this case arose from the fact that French law requires that software designed to assist medical prescriptions should be certified at national level. Philips France claimed that, by imposing a further requirement in addition to the conformity procedure laid down by the Directive, the French Government had set up a restriction on import of the device, contrary to EU law.

The French Government argued that the ICCA does not satisfy the definition of a medical device under the Directive, as its functions are purely administrative and for storage purposes, and could not, therefore, be marketed in France without such certification from the French authorities.

The Advocate General disagreed with the French Government’s assessment and found that the ICCA should be classified as a medical device. Three factors were key to this conclusion: (i) the ICCA is not a general-purpose program that happens to be used in a healthcare setting; it goes beyond simple storage of data, modifying and interpreting those data to provide information that helps healthcare professionals make appropriate prescribing decisions; (ii) the fact that the ICCA does not act directly on the interior or the surface of the human body does not prevent its classification as a medical device, as “contributing” to the principal intended action is sufficient; and (iii) the Commission’s MEDDEV guidance and other guidance issued by national competent authorities are aligned, and classify programs such as the ICCA as medical devices.

Next steps

The European Court will now consider this opinion and deliver a judgment in the coming months. This is the first time that the European Courts have considered whether software may be classified as a medical device, and the Court’s decision will likely have an immediate effect on the EU market and on how software used in healthcare settings is regulated.

You may find further details on this case in our Advisory.

Royal Free NHS Foundation Trust (the Trust) is one of the largest Trusts in the UK, employing more than 9,000 staff and providing services to over a million patients in North London.

On 3 July 2017, the UK Information Commissioner (ICO), the regulator overseeing data privacy, ruled that the Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind. The Trust provided personal data of about 1.6 million patients as part of a trial to test an alert, diagnosis and detection system for acute kidney injury.

DeepMind works with hospitals on mobile tools and artificial intelligence to plan the patient journey, from diagnosis to treatment, as quickly and accurately as possible. Streams, an application in use at the Trust, is based on a mobile technology platform that sends immediate alerts to clinicians when a patient’s condition deteriorates. Streams has also been rolled out to other UK hospitals, and DeepMind has diversified the application for use in other settings, including a project aimed at using artificial intelligence to improve diagnosis of diabetic retinopathy and another aimed at using a similar approach to better prepare radiotherapists for treating head and neck cancers. The application therefore saves lives.

In her ruling, the ICO recognized the huge potential for innovative research and creative use of data on patient care and clinical improvements. However, the ICO considered that the price of innovation does not need to be the erosion of fundamental privacy rights. The ICO noted in particular: “[i]n relation to health data, the Commissioner and her office recognises the benefits that can be achieved by using patient data for wider public good and where appropriate, we support the development of innovative technological solutions that use personal data to improve clinical care. We would like to make it clear that the Commissioner has no desire to prevent or hamper the development of such solutions; however, such schemes and laudable ends must meet the necessary compliance mechanism set out in the Act.”

In this case, the ruling is levelled against the Trust as the data controller responsible for compliance with the Data Protection Act throughout the Streams partnership; DeepMind has been acting as a data processor, processing personal data on behalf of the Trust.

The ruling is based on the ICO investigation which has identified several shortcomings in how the data were handled according to the terms of the agreement for the partnership between the Trust and DeepMind. Most importantly, as regards transparency in data sharing between the Trust and DeepMind, the ICO found that patients were not adequately informed that their data would be used as part of the test.

The Trust is required to provide an undertaking to ensure that personal data are processed in accordance with the Data Protection Act, especially in relation to the following guiding principles: (a) personal data must be processed fairly and lawfully; (b) processing of personal data must be adequate, relevant and not excessive; (c) personal data must be processed in accordance with the rights of data subjects; and (d) appropriate technical and organisational controls must be put in place, including appropriate contractual controls when a data processor is used. These remedial measures are set out in the undertaking, for the Trust to implement according to the timetable imposed by the ICO. The Trust is also required to commission an audit of the trial, the results of which will be shared with the ICO and may be published as the ICO sees appropriate.

In February 2016, the European Commission established a Working Group on mHealth tasked with developing guidelines “for assessing the validity and reliability of the data that health apps collect and process”. Since this Working Group was set up, there have been a series of face-to-face meetings, open stakeholder meetings, conference calls and online questionnaires. Two drafts of the guidelines have also been published for consideration, as discussed in our previous posts here and here.

Last month, the Working Group, drawn from patients, healthcare professionals, industry, public authorities, payers and social care insurance, research and academia, finally published its report on the draft guidance. Members of the Working Group were invited to give their views on the assessment criteria, what they understood by each of the criteria and whether they considered them relevant for the purposes of assessing the validity and reliability of health apps.

To the extent that any consensus could be found on the criteria for the assessment of apps, six criteria were considered to be relevant: privacy, transparency, reliability, validity, interoperability and safety. Two further criteria achieved majority support: technical stability and effectiveness.

However, the Working Group’s discussions were plagued by “areas of apparent disagreement and different understanding of the implications, use and meaning of the criteria during app assessment”, such that the Working Group was unable to come to any agreement on the scope, purpose or targets for health app assessment guidelines. Their divergent understandings were not helped by the fact that the range of technologies that constitute health apps is constantly evolving, nor by the passing of new legislation and guidelines at EU and Member State level (e.g., the Medical Devices Regulation and the General Data Protection Regulation).

The Working Group was, therefore, forced to conclude: “Clearly, an important lesson from this exercise is the need to follow a step-wise approach, starting with a solid agreement on scope and terminology, especially if the Guidelines are to be developed by a multi-stakeholder group.” As such, it seems that the guidelines are not being progressed in their current form.

It has been almost a year since the European Commission published a final draft of a Code of Conduct on privacy for mHealth mobile applications (the “Code”). Our previous post summarizes the draft and its application to app developers. However, we noted that the Article 29 Working Party (the “WP29”), an independent advisory body comprised of representatives from all EU Data Protection Authorities, had to comment on the draft before it was formally adopted. In a letter dated 10 April 2017, the WP29 has finally set out its comments on the draft, and identified areas of improvement.

Comments on the draft

The letter begins by setting out the WP29’s expectations for the Code:

  • The Code needs to be compliant with the Data Protection Directive (Directive 95/46/EC, the “Directive”) and its national implementing legislation.
  • The Code must be of adequate quality.
  • The Code must provide sufficient added value to the Directive and other applicable data protection legislation.
  • The Code should continue to be relevant following the transition to the General Data Protection Regulation (Regulation (EU) 2016/679, the “GDPR”).

The WP29 is quite critical of the draft Code, and identifies a number of ways that the draft fails to add value to existing data protection legislation. The WP29’s general comments are that:

  • The Code does not elaborate sufficiently on the relationship between the Directive and national legislation implementing the Directive in individual EU Member States.
  • While the Code’s stated aim is to facilitate data protection compliance and not to address other compliance issues, it should nonetheless take into account other legislation that impacts on the prime objective of data compliance (e.g., provisions on cookies in the ePrivacy Directive (Directive 2002/58/EC)).
  • The Code needs to be clearer on the roles of the parties involved in the processing of personal data (i.e., whether the app developer is a data controller, data processor or both).
  • The Code should be re-evaluated in light of the relevant provisions of the GDPR to ensure that the content of the Code is consistent with the definitions given in both the Directive and the GDPR.

Specific comments

The WP29 also sets out more specific observations on areas in which the Code requires improvement. In summary:

  • Governance and monitoring model: It was not clear whether the model detailed in the Code would be compliant with some of the new requirements of the GDPR. In addition, further information was needed on: (1) the composition of the Assembly and how membership was to be managed; (2) how the monitoring body would be accredited; and (3) the financial contributions required from different members (the WP29 was specifically concerned with ensuring that fees did not preclude wide participation).
  • Practical guidelines for data controllers: The Code should make clear that consent to personal data processing should fulfil all requirements of the GDPR and the Directive, and guidance in relation to obtaining consent to the processing of children’s data should be more thorough. At the same time, the Code should acknowledge that there are other conditions that render data processing fair and lawful, and refer explicitly to them. It should also identify safeguards to raise awareness of the possible risks associated with the use of mHealth apps.
  • Data protection principles: Whilst the “practical guidelines for data controllers” referred to the necessity of safeguards for data subjects, it did not mention that these safeguards should be “appropriate”, in line with data protection principles. Further, the Code should refer to all of the data protection principles, or explain why they are not relevant.
  • Information, transparency and data subjects rights: The Code should require developers to make more information about the role of the data controller available to end users. It did not provide sufficient information on how data subjects could exert their rights, or how data controllers and data processors should meet their obligations. The Code should refer to the relevant provisions of the GDPR in relation to transfer of personal data to third countries. The legal basis and requirements for processing data for marketing purposes should also be referred to, such as the relevant sections of the GDPR.
  • Security: The Code should include more details and relevant examples on how app developers can integrate “privacy by design” and “privacy by default” into their development processes, as well as being attentive to legal restrictions relating to retention periods. Specific provisions in relation to data protection breaches should be included in line with the definitions of personal data contained in the Directive and the GDPR.

The draft will now need to be reconsidered by the drafting group to take these comments into account. The WP29 specifically states: “When revising the draft, please consider carefully what “added value” the code of conduct provides as a whole and, in particular, what specific examples, practical solutions or recommendations you could draw from discussions with stakeholders, ...” In the meantime, given the shortage of guidance in this area, developers may choose to follow the Code and the recommendations of the WP29 in order to conform to best practice.

The U.S. Food and Drug Administration (FDA) issued a Warning Letter on April 12, 2017 requiring an explanation of how St. Jude Medical plans to correct and prevent cybersecurity concerns identified for St. Jude Medical’s Fortify, Unify, Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators, and the Merlin@home monitor.

The Warning Letter follows a January 2017 FDA Safety Communication on St. Jude Medical’s implantable cardiac devices and the Merlin@home transmitter. The safety alert identified that such devices “contain configurable embedded computer systems that can be vulnerable to cybersecurity intrusions and exploits. As medical devices become increasingly interconnected via the Internet, hospital networks, other medical devices, and smartphones, there is an increased risk of exploitation of cybersecurity vulnerabilities, some of which could affect how a medical device operates.” FDA conducted an assessment of St. Jude Medical’s software patch for the Merlin@home Transmitter and determined that “the health benefits to patients from continued use of the device outweigh the cybersecurity risks.” Consequently, FDA’s safety alert provides recommendations to healthcare professionals, patients and caregivers to “reduce the risk of patient harm due to cybersecurity vulnerabilities.”

The following month, FDA conducted a 10-day inspection at St. Jude Medical’s Sylmar, CA facility and concluded that St. Jude Medical had not adequately addressed the cybersecurity concerns. Notably, FDA observed failures related to corrective and preventive actions (CAPA), controls, design verification and design validation.

CAPA Procedures

In one instance, FDA found that St. Jude Medical based its risk evaluation on “confirmed” defect cases without considering the potential for “unconfirmed” defect cases, and therefore underestimated the occurrence of a hazardous situation related to premature battery depletion. Moreover, FDA found that St. Jude Medical failed to follow its CAPA procedures when evaluating a third-party cybersecurity risk assessment report. Finally, FDA found that St. Jude Medical’s management and medical advisory boards did not receive information on the potential for “unconfirmed” defect cases and were falsely informed that no death had resulted from the premature battery depletion issue.

In each instance, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the responses inadequate.

Control Procedures

On October 11, 2016, St. Jude Medical initiated a recall of Fortify, Unify, Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators due to premature battery depletion. Despite the recall, FDA noted that some devices were distributed and implanted. Again, FDA was unable to determine whether St. Jude Medical’s corrective actions were sufficient because St. Jude Medical failed to provide evidence of implementation.

Design Verification and Validation

In addition, FDA found that St. Jude Medical failed to ensure that “design verification shall confirm that the design output meets the design input requirements,” and failed to accurately incorporate the findings of a third-party assessment into updated cybersecurity risk assessments for high voltage and peripheral devices like the Merlin@home monitor. Specifically, the Merlin@home monitor’s testing procedures did not require full verification to ensure that the network ports would not open with an unauthorized interface. Further, the cybersecurity risk assessments failed to accurately incorporate the third-party report’s findings into their security risk ratings. Also, even though the same reports identified the hardcoded universal unlock code as an exploitable hazard for the high voltage devices, St. Jude Medical failed to estimate and evaluate this risk.
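To illustrate the class of vulnerability at issue, a hardcoded universal unlock code, here is a deliberately simplified sketch. It is hypothetical Python with illustrative values and has no connection to St. Jude Medical's actual implementation: a single code shared by every device, once extracted from any one of them, unlocks the entire fleet, whereas a per-device credential derived from a secret key limits the impact of any single leak.

```python
import hashlib
import hmac

# Vulnerable pattern: the same unlock code is baked into every device,
# so recovering it from one device compromises all of them.
HARDCODED_UNLOCK_CODE = "000000"  # illustrative value


def unlock_vulnerable(code: str) -> bool:
    """Universal unlock check: one leaked code opens every device."""
    return code == HARDCODED_UNLOCK_CODE


def unlock_per_device(code: str, device_id: str, secret_key: bytes) -> bool:
    """Per-device unlock check: the expected code is derived from the
    device identifier and a secret key, and compared in constant time,
    so a code recovered from one device does not transfer to another."""
    expected = hmac.new(secret_key, device_id.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(code, expected)
```

This is also the kind of design-input requirement (no universal credentials) that design verification is meant to confirm the design output actually satisfies.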

For all violations, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the responses inadequate. FDA has given St. Jude Medical 15 days to explain how the company plans to act on the premature battery depletion issue (despite related injuries and one death), the improper focus on “confirmed” cases, and the distribution and implantation of recalled devices. FDA warns that St. Jude Medical could face additional regulatory action if the matters are not resolved in a timely manner.

The Warning Letter, together with the January 2017 Safety Communication and a December 2016 Guidance on Postmarket Management of Cybersecurity in Medical Devices (which we have previously summarized here and here), demonstrates FDA’s continued scrutiny on the cybersecurity of medical devices. It appears that FDA is trying to communicate the need for device manufacturers to incorporate cybersecurity checkpoints throughout a product’s lifecycle to prevent patient harm and potential regulatory action. Not a bad idea for an increasingly tech-savvy world.