Software can be considered a medical device under EU law. Although the European Commission and national authorities have issued guidance to assist in legal classification, the factors and criteria treated as relevant in that guidance had not been validated by European or national courts. The recent decision of the Court of Justice of the European Union (CJEU) on the legal classification of software as a medical device is therefore instructive.

The European Court’s first decision on the classification of software in the context of medical devices legislation

On 7 December 2017, the CJEU issued its judgment in Case C-329/16. The CJEU agreed with the Advocate General’s opinion (discussed in our previous advisory), and held that software can be classified as a medical device under EU law if the software has at least one functionality that allows the use of patient-specific data to assist the physician in prescribing or calculating the dosage for treating the underlying condition. It does not matter whether the software acts directly or indirectly on the human body. The decisive factor is whether the software is specifically intended by the manufacturer to be used for one or more medical objectives specified in Article 1(2) of Directive 93/42/EEC (the Medical Devices Directive), including the diagnosis, prevention, monitoring, treatment or alleviation of disease.

The decision is in line with existing national and European guidance, and aligns with the generally accepted understanding of the relevant legislative provisions. Therefore, the decision ought not to be surprising to many who operate in the field of digital health. However, we note the following:

  • It is the first time that Europe’s highest court has confirmed the proper interpretation of these legislative provisions insofar as they apply to software. The decision is particularly relevant to the research and development of digital health and mobile health devices.
  • In reaching its decision, the CJEU relies considerably on the so-called ‘MEDDEV Guidelines’, which have been considered useful tools for interpreting the legal requirements relating to medical devices, as they set out the agreed position of the European Commission, reached in collaboration with national authorities, industry and national accredited bodies following a period of internal and external consultation.
  • The CJEU decision validates the criteria set out in the specific MEDDEV for classification of software medical devices. While the MEDDEV Guidelines are explicitly stated to be non-legally binding, industry and regulatory authorities may now rely on these to a greater extent for the purpose of assessing classification.

A revised edition of the ‘Manual on borderline and classification in the Community regulatory framework for medical devices’ (the Borderline Manual)

In the broader regulatory context, the decision on whether or not a particular software program constitutes a medical device (and is therefore subject to a conformity assessment) may also be guided by reference to the Borderline Manual (version 1.18), the most recent version of which was published on 7 December 2017.

As a general matter, a case-by-case assessment is required to decide whether a software program can be properly classified as a software medical device in view of its characteristics and functionality.

The Borderline Manual provides that the following types of software should generally be classified as medical devices:

  • picture archiving and communication systems;
  • mobile apps for processing ECGs;
  • software for delivery and management of cognitive remediation and rehabilitation programs;
  • software for information management and patient monitoring; and
  • mobile apps for the assessment of moles (e.g. making a recommendation about any changes).

In contrast, the following types of software should generally not be classified as medical devices:

  • mobile apps for the communication between patient and caregivers while giving birth;
  • mobile apps for viewing the anatomy of the human body;
  • software that allows for faster interpretation of particular guidelines (e.g., faster consulting/reading of an international guideline regarding the Classification of Malignant Tumors issued by the International Union Against Cancer); and
  • mobile apps for managing pictures of moles (e.g. recording changes over time).

On December 7, 2017, the US Food and Drug Administration (FDA) announced several digital health policy documents designed to “encourage innovation” and “bring efficiency and modernization” to the agency’s regulation of digital health products. The three documents, comprising two draft guidances and one final guidance, address, in part, the important changes made by Section 3060 of the 21st Century Cures Act (Cures Act) to the medical device provisions of the Federal Food, Drug, and Cosmetic Act (FDCA), which we previously summarized. Those changes expressly excluded five distinct categories of software or digital health products from the definition of medical device. FDA Commissioner Dr. Scott Gottlieb emphasized that these documents collectively “offer additional clarity about where the FDA sees its role in digital health, and importantly, where we don’t see a need for FDA involvement.”

To read the full advisory, click here.

Join us for a 90-minute webinar, hosted by AdvaMed, focusing on the new EU Medical Device Regulations (MDR/IVDR), which represent the single largest regulatory change in the EU in decades. This program will highlight what the regulatory changes are, how these changes will affect your business, and what you can do to better prepare. In particular, the classification rules for software have changed, and new requirements are likely to apply to apps and mHealth technologies.

Agenda topics include:

  • What are the changes in the regulations?
  • How will these changes affect research and development, and what impact will there be on, among other things, mobile and telemedicine?
  • How will the new framework affect market access?

Click here to register.


In a recent article published in Intellectual Property & Technology Law Journal, and expanding on our previous post, we discuss the legal and regulatory implications of applying artificial intelligence (AI) to the EU and US healthcare and life sciences sectors.

AI software, particularly when it involves machine learning, is increasingly used within the healthcare and life sciences sectors. Its uses include drug discovery (e.g., software that examines biological data to identify potential drug candidates), diagnostics (e.g., an app that analyzes real-time data to predict health issues), disease management (e.g., mobile-based coaching systems for pre- and post-operative care) and post-market analysis (e.g., adverse event data collection systems).

Given that the healthcare and life sciences sectors are highly regulated, the development and use of AI require careful scrutiny of applicable legal and regulatory obligations and of any ongoing policy developments. The article discusses how AI may contribute to the research and development of health products and to the care and treatment of patients, and the corresponding legal and regulatory issues surrounding such technological advances.

In Europe, depending on its functionality and intended purpose, software may fall within the definition of ‘medical device’ under the Medical Devices Directive. However, classification of software is fraught with practical challenges because, unlike the classification of general medical devices, it is not immediately apparent how the legal parameters apply. The European Commission has published guidelines to interpret the Directive’s requirements, but these are not legally binding (although they were recently endorsed by the Advocate General of the Court of Justice of the European Union, as discussed in our advisory). The new EU Regulations adopted on April 5, 2017, which come into effect on May 26, 2020, will widen the scope of the regulatory regime considerably, and will require all operators to re-assess product classification well in advance of this deadline.

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices. However, there are a variety of legal, regulatory, and compliance issues that may arise for AI developers based on the intended use of the product. Once a product is classified as a medical device, its class will define the applicable regulatory requirements, including the type of premarketing notification/ application that is required for FDA clearance or approval. As the use of AI becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or if other agencies — such as the U.S. Federal Trade Commission — step up their scrutiny of such systems.

Further important considerations, given the capability of AI to capture various forms of personal data, are data protection and cybersecurity, both of which will be critical to the sustainability of the technology. In the EU, these rules are soon to be overhauled by the General Data Protection Regulation, which applies from May 25, 2018. In the US, regardless of the product’s classification, AI developers will need to assess whether the HIPAA rules apply, as well as any design controls and post-manufacture auditing obligations that may apply in the cybersecurity space.

The European Commission has made clear its intention to harness the potential that digital innovation can offer and, in May 2015, announced its Digital Single Market strategy. A key part of this is the digital transformation of health and care in order to improve healthcare for citizens. On 20 July 2017, the European Commission launched a public consultation to assess how digital innovation can be used to enhance health and care in Europe. This consultation follows on from the Roadmap published last month, with the aim of developing a new policy Communication by the end of 2017.

The consultation focuses on collecting information on three main areas:

(i) cross-border access to and management of personal health data, through electronic medical records and e-prescriptions;

(ii) sharing of data and expertise to advance research, assist with personalized healthcare and anticipate epidemics; and,

(iii) measures for widespread uptake of digital innovation and interaction between patients and healthcare providers.

The questions are very much at a fact-finding level, asking for respondents’ views on a wide range of issues, particularly on data protection, which, as recent cyber-attacks on the UK NHS and sanctions imposed by the UK Information Commissioner have shown, is an important factor in a digital market. However, although the Roadmap sets out some intended outcomes that are in line with the three areas of the consultation, the ultimate goal of “widespread adoption of digital technology to make borderless European health and care a reality” is unlikely to be achieved by the end of the year.

The Commission is inviting citizens, patient organizations, healthcare professionals, public authorities and any other users of digital health tools to share their views until 12 October 2017.

We have previously reported on a number of EU projects designed to promote eHealth interoperability (the ability of EU Member States to share healthcare information between their respective IT systems), including the Commission’s eHealth standards project, which aims to build consensus on the standards to be applied to eHealth products, and EURO-CAS, which aims to develop tools to assess the conformity of eHealth products with those standards.

In parallel with those projects, the VALUeHEALTH project, which ran from April 2015 to June 2017 as part of the Commission’s broader research and innovation program, Horizon 2020, focused on developing a business plan for the implementation and funding of eHealth services across the EU. Trans-European digital services are currently funded by the Connecting Europe Facility (“CEF”), which has committed to investing EUR 1.04 billion for this purpose between 2014 and 2020. VALUeHEALTH was concerned with ensuring the sustainable interoperability of European eHealth services beyond 2020.

To this end, the VALUeHEALTH project had five objectives (presented in the project’s schematic, ‘VALUeHEALTH Overall Concept and Objectives’):

Objective 1: Develop a set of prioritized use cases

The VALUeHEALTH project prioritized “use cases” for eHealth services on the basis of a number of criteria, including their potential positive impact on patients, improved health outcomes, and reduced healthcare costs. Using these criteria, two use cases were prioritized:

  • Safe prescribing: Ensuring that existing algorithms to support prescribing decisions are able to access critical safety information (e.g., other current medication, allergies and intolerances, clinical conditions, significant family history, relevant bio-markers).
  • Individual disease management: Condition-specific information-sharing between actors involved in the healthcare, social care and self-care of a patient’s portfolio of long-term conditions.
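The ‘safe prescribing’ use case envisages decision-support algorithms that can cross-check a proposed prescription against the critical safety information listed above. The project does not publish an implementation, so the following is purely an illustrative sketch; all names, drugs and rules in it are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the "safe prescribing" use case: a decision-support
# check that a proposed drug does not conflict with critical safety information
# held in the patient's record. All drug names and rules are invented.

@dataclass
class PatientRecord:
    current_medication: set = field(default_factory=set)
    allergies: set = field(default_factory=set)
    conditions: set = field(default_factory=set)

# Toy knowledge base: drug -> interacting drugs / contraindicated conditions
INTERACTIONS = {"warfarin": {"aspirin"}, "aspirin": {"warfarin"}}
CONTRAINDICATIONS = {"ibuprofen": {"chronic kidney disease"}}

def prescribing_alerts(drug: str, record: PatientRecord) -> list:
    """Return human-readable alerts for the prescriber; an empty list means no flags."""
    alerts = []
    if drug in record.allergies:
        alerts.append(f"ALLERGY: patient is allergic to {drug}")
    for other in record.current_medication & INTERACTIONS.get(drug, set()):
        alerts.append(f"INTERACTION: {drug} interacts with {other}")
    for cond in record.conditions & CONTRAINDICATIONS.get(drug, set()):
        alerts.append(f"CONTRAINDICATION: {drug} contraindicated in {cond}")
    return alerts

record = PatientRecord(current_medication={"warfarin"},
                       allergies={"penicillin"},
                       conditions={"chronic kidney disease"})
print(prescribing_alerts("aspirin", record))  # flags the warfarin interaction
```

In a real interoperable system, the record and knowledge base would of course be drawn from cross-border electronic health records and curated drug databases rather than hard-coded dictionaries; the point is only that the algorithm is useless unless the critical safety fields are captured and accessible.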

These use cases were used to inform the analysis underlying the business plan under the remaining objectives.

Objective 2: Design an overarching business model framework

The project sought to identify the expected benefits of interoperability for various stakeholders — in particular, those whose involvement was necessary to sustain interoperability, and those who most needed to realise value from interoperable information. Further, it was intended to produce a cost-benefit analysis for stakeholders who would be required to drive investments. Finally, business modelling methodologies would be used to establish the value of eHealth interoperability and to determine how cost savings and growth in capacity could justify financial investment in eHealth services, with minimal dependence on public funding.

As a result of this work, VALUeHEALTH has established a Business Modelling Task Force, tasked with developing the value chains and value propositions described above. However, further details are not yet available on the project website.

Objective 3: Develop a scale-up roadmap

The VALUeHEALTH project identified high quality data capture as a necessary pre-condition for the scale-up of self-financed cross-border eHealth services. With this in mind, it aimed to examine the barriers to, and the conditions and incentives required for, wide-scale, high quality data capture, which could inform a scale-up strategy.

Barriers identified by the project were (i) the reliance on busy, often junior, clinicians to capture health information from patients, and (ii) the existence of reimbursement models that pay for activity rather than clinical outcomes. Incentives were needed to address these issues.

The Commission intends to use this information to scope the interoperability deployment roadmap and scale-up strategy, as well as its structure and costs. However, it appears that this exercise is ongoing.

Objective 4: Design an information communication technology and interoperability deployment roadmap

VALUeHEALTH has defined the interfaces, services and tools needed to deliver the prioritized use cases identified in Objective 1 and, from this, has derived a design and deployment roadmap for eHealth services in general. However, this is not yet available publicly.

There appears to be some overlap between the roadmaps envisaged by Objective 3 and Objective 4. From the available information, we understand that the scale-up roadmap described in Objective 3 is designed to address issues with data capture (i.e., the practical human barriers to ensuring that the data required for cross-border eHealth services is collected and entered into the system), whereas the ICT and interoperability roadmap described in Objective 4 is intended to address the technical requirements of the service.

Objective 5: Deliver a business plan and sustainability plan

The results of Objectives 1-4 have been used to produce a Business Plan and Strategy for future public-private investment in EU eHealth services. In particular, the plan provides guidance to the CEF on how to construct digital service infrastructure for health to ensure maximum value and sustainability beyond 2020. Again, this plan has not yet been published.

We have previously reported on the Accelerated Access Review (AAR), which made 18 recommendations to the UK government for speeding up patient access to new medical technologies. The overarching aim of the AAR was to make the UK a world-leader in healthcare innovation. The AAR report, which was published in October 2016, was particularly focused on digital technologies, and recognized that the current systems in place are not sufficiently flexible to realize the full potential of digital health.

To implement the recommendations of the AAR, the UK government announced last week that it is investing a total of £86 million in four projects aimed at encouraging small and medium sized enterprises (SMEs) to develop and test new products and technologies in the UK’s National Health Service (NHS).

One of the four projects to be funded by the new package is the ‘Digital Health Technology Catalyst’. The Catalyst will receive £35 million to help support innovators by match-funding the development of digital technologies for use by patients and the NHS. The government specifically highlighted digital technologies that help patients manage their conditions from home, or that develop new medicines, as possible areas of development, and cited MyCOPD, an online system that helps people with chronic obstructive pulmonary disease better manage their condition, as a successful project to be repeated.

The announcement has been publicly welcomed by a number of industry representative groups, including the Association of British Healthcare Industries (ABHI), techUK, BioIndustry Association (BIA) and the British In Vitro Diagnostics Association (BIVDA).

On 28 June, the Advocate General of the Court of Justice of the European Union gave his opinion in the SNITEM and Philips France case against France. In this case, the Conseil d’Etat in France asked whether a particular software program intended to be used by doctors to support prescribing decisions falls within the definition of medical device provided by Directive 93/42/EEC (the Medical Devices Directive).

Definition of a medical device

As we have discussed previously in this blog, there is no general exclusion for software in the definition of medical device provided by the Medical Devices Directive. Software may be regulated as a medical device if it has a medical purpose, meaning that it is capable of appreciably restoring, correcting or modifying physiological functions in human beings. The assessment is by no means straightforward for software as, unlike general medical devices, it is not immediately apparent how these parameters apply to programs. The Commission MEDDEV guidance draws a distinction between software specifically intended by the manufacturer to be used for one or more of the medical purposes set out in the definition of a medical device, and general-purpose software used in a healthcare setting, which will not be considered a medical device.
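The MEDDEV distinction described above is, in effect, a short decision test on the manufacturer’s intended purpose and on what the software actually does with patient data. As a purely illustrative sketch (the function name and its yes/no inputs are hypothetical paraphrases of the guidance, not its text, and this is not legal advice):

```python
# Hypothetical sketch of the MEDDEV-style screening questions described above.
# The three inputs paraphrase the guidance; real classification requires a
# case-by-case legal assessment of the product's characteristics.

def likely_medical_device(intended_medical_purpose: bool,
                          acts_on_patient_specific_data: bool,
                          beyond_storage_and_communication: bool) -> bool:
    """Rough screen: software is a candidate medical device only if the
    manufacturer intends a medical purpose AND the software does more than
    store, archive or transmit data, acting on individual patient data."""
    return (intended_medical_purpose
            and acts_on_patient_specific_data
            and beyond_storage_and_communication)

# ICCA-like prescribing-support software: all three answers are "yes"
print(likely_medical_device(True, True, True))
# A general hospital scheduling app: no intended medical purpose
print(likely_medical_device(False, True, True))
```

The sketch mirrors the reasoning applied to the ICCA: intended medical purpose, use of patient-specific data, and processing that goes beyond mere storage together point towards classification as a medical device.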

Opinion of the Advocate General

The software at issue in this case, Intellispace Critical Care and Anesthesia (ICCA), manufactured by Philips France, is designed to assist anesthesia and intensive care services by providing doctors with information to support their prescribing decisions. It provides information regarding possible contraindications, interactions with other medicines and excessive dosing. The ICCA has been CE marked as a medical device.

The dispute in this case arose from the fact that French law requires that software designed to assist medical prescriptions should be certified at national level. Philips France claimed that, by imposing a further requirement in addition to the conformity procedure laid down by the Directive, the French Government had set up a restriction on import of the device, contrary to EU law.

The French Government argued that the ICCA does not satisfy the definition of a medical device under the Directive, as its functions are purely administrative and for storage purposes, and could not, therefore, be marketed in France without such certification from the French authorities.

The Advocate General disagreed with the French Government’s assessment, and found that the ICCA should be classified as a medical device. Three factors were key to this conclusion: (i) the ICCA is not a general-purpose program that happens to be used in a healthcare setting; it goes beyond simple storage of data, modifying and interpreting that data to provide information that helps healthcare professionals make appropriate prescribing decisions; (ii) the fact that the ICCA does not act directly on the interior or the surface of the human body does not prevent its classification as a medical device, as “contributing” to the principal intended action is sufficient; and (iii) the Commission’s MEDDEV guidance and other guidance issued by national competent authorities are aligned, and classify programs such as the ICCA as medical devices.

Next steps

The European Court will now consider this opinion and deliver a judgment in the coming months. This is the first time the European Courts have considered whether software may be classified as a medical device, and the Court’s decision will likely have an immediate effect on the EU market and on how software used in a healthcare setting is regulated.

You may find further details on this case in our Advisory.

Royal Free NHS Foundation Trust (the Trust) is one of the largest Trusts in the UK, employing more than 9,000 staff and providing services to over a million patients in North London.

On 3 July 2017, the UK Information Commissioner (ICO), the regulator overseeing data privacy, ruled that the Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind. The Trust provided personal data of about 1.6 million patients as part of a trial to test an alert, diagnosis and detection system for acute kidney injury.

DeepMind works with hospitals on mobile tools and artificial intelligence to plan the patient journey, from diagnosis to treatment, as quickly and accurately as possible. Streams, an application in use at the Trust, is based on a mobile technology platform that sends immediate alerts to clinicians when a patient’s condition deteriorates. Streams has also been rolled out to other UK hospitals, and DeepMind has diversified the application for use in other settings, including a project aimed at using artificial intelligence to improve diagnosis of diabetic retinopathy and another aimed at using a similar approach to better prepare radiotherapists for treating head and neck cancers. The application therefore saves lives.

In her ruling, the ICO recognized the huge potential for innovative research and creative use of data in patient care and clinical improvements. However, the ICO considered that the price of innovation need not be the erosion of fundamental privacy rights. The ICO noted in particular: “[i]n relation to health data, the Commissioner and her office recognises the benefits that can be achieved by using patient data for wider public good and where appropriate, we support the development of innovative technological solutions that use personal data to improve clinical care. We would like to make it clear that the Commissioner has no desire to prevent or hamper the development of such solutions; however, such schemes and laudable ends must meet the necessary compliance mechanism set out in the Act.”

In this case, the ruling is levelled against the Trust as the data controller responsible for compliance with the Data Protection Act throughout the Streams partnership; DeepMind has been acting as a data processor, processing personal data on behalf of the Trust.

The ruling is based on the ICO’s investigation, which identified several shortcomings in how the data were handled under the terms of the partnership agreement between the Trust and DeepMind. Most importantly, as regards transparency in data sharing between the Trust and DeepMind, the ICO found that patients were not adequately informed that their data would be used as part of the test.

The Trust is required to provide an undertaking to ensure that personal data are processed in accordance with the Data Protection Act, especially in relation to the following guiding principles: (a) personal data must be processed fairly and lawfully; (b) processing of personal data must be adequate, relevant and not excessive; (c) personal data must be processed in accordance with the rights of data subjects; and (d) appropriate technical and organisational controls must be taken, including ensuring that appropriate contractual controls are put in place when a data processor is used. These remedial measures are set out in the undertaking for the Trust to implement according to the timetable imposed by the ICO. The Trust is also required to commission an audit of the trial, the results of which will be shared with the ICO and may be published as the ICO sees appropriate.

In February 2016, the European Commission established a Working Group on mHealth tasked with developing guidelines “for assessing the validity and reliability of the data that health apps collect and process”. Since this Working Group was set up, there have been a series of face-to-face meetings, open stakeholder meetings, conference calls and online questionnaires. Two drafts of the guidelines have also been published for consideration, as discussed in our previous posts here and here.

Last month, the Working Group, drawn from patients, healthcare professionals, industry, public authorities, payers and social care insurance, research and academia, finally published its report on the draft guidance. Members of the Working Group were invited to give their views on the assessment criteria, what they understood by each of the criteria and whether they considered them relevant for the purposes of assessing the validity and reliability of health apps.

To the extent that any consensus could be found on the criteria for the assessment of apps, six criteria were considered to be relevant: privacy, transparency, reliability, validity, interoperability and safety. Two further criteria achieved majority support: technical stability and effectiveness.

However, the Working Group’s discussions were plagued by “areas of apparent disagreement and different understanding of the implications, use and meaning of the criteria during app assessment”, such that the Working Group was unable to come to any agreement on the scope, purpose or targets for health app assessment guidelines. Their divergent understandings were not helped by the fact that the range of technologies that constitute health apps is constantly evolving, nor by the passing of new legislation and guidelines at EU and Member State level (e.g., the Medical Devices Regulation and the General Data Protection Regulation).

The Working Group was, therefore, forced to conclude: “Clearly, an important lesson from this exercise is the need to follow a step-wise approach, starting with a solid agreement on scope and terminology, especially if the Guidelines are to be developed by a multi-stakeholder group.” As such, it seems that the guidelines are not being progressed in their current form.