Connected health involving health technology, digital media and mobile devices opens up new opportunities to improve the quality and outcomes of both health and social care. Such transformational innovation, however, may also bring about significant regulatory compliance risks.

On 3 March 2017, four UK healthcare regulators, including the Care Quality Commission (“CQC”), made a joint statement reminding providers of online clinical and pharmaceutical services, and associated healthcare professionals, that they should follow professional guidelines to ensure such services are provided safely and effectively.

We have written an in-depth assessment on the ongoing regulatory activities in the UK, available here, which was published in Digital Health Legal on 20 April 2017.

As indicated in the joint statement, CQC inspections found that certain online services were too ready to sell prescription-only medicines without undertaking proper checks or verifying the patient’s individual circumstances, raising significant concerns about patient safety. The view taken by the regulators is that the same safeguards should apply whether a patient attends a physical consultation with their GP (primary care physician) or seeks medical advice and treatment online.

UK domestic law already provides that online providers must assess the risks to people’s health and safety during any care or treatment and make sure that staff have the qualifications, competence, skills and experience to keep people safe. The CQC has the power to bring a criminal prosecution if a failure to meet this responsibility results in avoidable harm to a person using the service or if a person using the service is exposed to significant risk of harm. Unlike other enforcement regimes, the CQC does not have to serve a warning notice before prosecution. The CQC can also pursue criminal sanctions where there have been fundamental breaches of standards of quality and safety and can enforce the standards using civil powers to impose conditions, suspend or cancel a registration to provide the online services.

In March 2017, the CQC published guidance clarifying its existing primary care guidance by setting out how it proposes to regulate digital healthcare providers in primary care. The guidance provides that the CQC will evaluate the following key lines of inquiry (“KLOEs”): whether services are safe, effective, caring, responsive to people’s needs and well-led. Each KLOE is accompanied by a number of questions that inspectors will consider as part of the assessment, which are characterised by the CQC as ‘prompts’.

The European Medicines Agency (“EMA”) recently set up a task force, along with the national competent authorities in the EEA, to analyze how medicines regulators in the EEA can use big data to better develop medicines for humans and animals. This follows a workshop in November last year to identify opportunities for big data in medicines development and regulation, and to address the challenges of their exploitation. “Big data” in the healthcare sector is the sum of many parts, which include the records of a multitude of patients, clinical trial data, adverse reaction reports, social media commentary and app records. Several projects have been set up across the EU to aggregate and analyze such data, which are explored by the European Commission in its December 2016 Study on Big Data in Public Health, Telemedicine and Healthcare. The EMA has recognized that “the vast volume of data has the potential to contribute significantly to the way the benefits and risks of medicines are assessed over their entire lifecycle.”

So who will make up the new EMA task force and what are its objectives?

The task force comprises staff from several medicine regulatory agencies in the EEA and will be chaired by the Danish Medicines Agency. Its first actions will be carried out over the next 18 months and include:

  • Mapping sources and characteristics of big data.
  • Exploring the potential applicability and impact of big data on medicines regulation.
  • Developing recommendations on necessary changes to legislation, regulatory guidelines or data security provisions.
  • Creating a roadmap for the development of big data capabilities for the evaluation of applications for marketing authorizations or clinical trials in the national competent authorities.
  • Collaborating with other regulatory authorities and partners outside the EEA to consider their insights on big data initiatives.

News of the task force comes on the back of the update that the UK data protection regulator, the Information Commissioner’s Office (“ICO”), made early last month to its 2014 publication on big data, artificial intelligence, machine learning and data protection. The publication draws out the distinctive considerations that the use of big data raises from a data protection perspective. These include whether the collection of personal data goes beyond what is needed for specific processing activities, whether processing activities are made clear to individuals, and how new types of data can be used. With the General Data Protection Regulation fast approaching, as discussed in our previous post, the ICO includes practical guidance for organizations to process big data in a way that is compliant with the new rules. Healthcare and other organizations looking to process big data will need to ensure that they carry out suitable privacy impact assessments and implement a range of protective measures, such as auditable machine learning algorithms, anonymization and comprehensive privacy policies. Guidance on profiling is also likely to follow.
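One of the protective measures contemplated in this area is anonymization. A minimal sketch of a related technique, pseudonymization via a keyed (salted) hash, is below; this is our own illustration, not drawn from the ICO guidance, and the record fields and identifier are invented.

```python
import hashlib
import secrets

# Our own illustration (not drawn from the ICO guidance): pseudonymizing a
# direct identifier with a keyed (salted) hash. The salt must be held under
# separate access controls; with it, records stay linkable for research, and
# without it the raw identifier is not trivially recoverable.

def pseudonymize(identifier: str, salt: bytes) -> str:
    """Return a stable pseudonym for an identifier under a secret salt."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

salt = secrets.token_bytes(32)  # stored separately from the dataset

# Hypothetical patient record; the identifier is an invented example.
record = {"patient_id": "943-476-5919", "diagnosis": "type 2 diabetes"}
pseudonymized = {
    "patient_ref": pseudonymize(record["patient_id"], salt),
    "diagnosis": record["diagnosis"],
}
```

Because the same identifier always yields the same pseudonym under a given salt, datasets can be joined for analysis without exposing raw identifiers. Note, however, that under the GDPR pseudonymized data generally remains personal data, since whoever holds the salt can re-link the records.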

We’ll be keeping an eye on the work of the new task force, as well as any further practical guidance that comes from data protection regulatory agencies. It is clear that organizations will need to get the balance right between potentially hugely speeding up research and innovation by using big data, and adhering to the regulatory obligations that are attached.

Join us on Tuesday, April 18, 2017 from 12:00–1:00 pm ET for a webinar that will address, among other topics, the following issues:

  • Overview of EU security rules
  • Securing the IoT
  • Product liability and other potential claims
  • Health care reimbursement and fraud issues
  • Cyber liability insurance

Click here to register.

1 hour CA and NY MCLE credit is pending. CLE credit for other jurisdictions is also pending.

The European Commission has published a report on the cost-effectiveness of standards-driven eHealth interoperability: the exchange of data between IT systems. This is one of a number of parallel initiatives from the Commission to advance eHealth interoperability, such as the EURO-CAS project launched in January this year, and is an essential part of the EU Digital Agenda.

The ultimate goal of the Commission’s efforts on eStandards for eHealth interoperability is to join up with healthcare stakeholders in Europe, and globally, to build consensus on eHealth standards, accelerate knowledge-sharing and promote wider adoption of standards.

The eStandards project is working to finalize a roadmap and associated evidence base, a white paper on the need for formal standards, and two guidelines addressing how to work with: (a) clinical content in profiles, and (b) competing standards in large-scale eHealth deployments. An initial roadmap has already been prepared. The final roadmap aims to describe the actions to be taken by standards development and profiling organizations (SDOs), policymakers in eHealth, and national competence centers to ensure the availability and use of general and personal health information at the point of care, as well as for biomedical, clinical, public health, and health policy research.

The objective of this discrete cost-effectiveness study is to support the preparation of the final roadmap. The study contacted three categories of stakeholders: (i) Centers of Competence; (ii) Vendors (mostly small and medium-sized companies) on the European market; and (iii) Standards Organizations (mostly international). It has shown that stakeholders use the same tools in different projects across Europe, which should facilitate communication of best practices between them.

Its main findings are that:

  • All stakeholders consider that using standards and standards-driven tools contributes to better quality products.
  • Vendors and Centers of Competence share the same benefits as a result of the efficiency of the project (e.g. the continuous improvement of the specifications, and their effectiveness).
  • In terms of economic results, the study shows clearly that using and reusing existing tools and content saves effort and time, as well as money. It standardizes methods of working and increases professionalism of the project team. However due to the complexity of the eHealth domain, training is one of the major challenges for increasing the adoption of profiles and standards.
  • The study also indicates that standards are available, but the challenge is their adoption.

The study proposes a few practical recommendations for promoting the use of the standards-driven tools:

  1. Develop a strategy to communicate and disseminate the use of standards-driven tools, showing evidence of their positive impact in the development of projects and products;
  2. Develop simple indicators and/or refine the indicators used in this study in order to quantify the progress of adoption of standards-driven tools;
  3. Identify the weaknesses and limitations associated with deploying standards and tools;
  4. Develop conformity assessments and testing platforms for better adoption of the standards.

These initiatives complement the new guidance for digital public services that the Commission published on 23 March in its new European Interoperability Framework, which is meant to help European public administrations coordinate their digitalization efforts when delivering public services.

Last week, the New York Office of the Attorney General (“OAG”) announced settlements with three mobile health application developers to resolve allegations that the companies made misleading claims and engaged in “irresponsible privacy practices.” The three companies that entered into settlements are:

  • Cardiio, a U.S.-based company that sells Cardiio, an app that claims to measure heart rate;
  • Runtastic, an Austria-based company that sells Runtastic, an app that purports to measure heart rate and cardiovascular performance under stress (downloaded approximately 1 million times); and
  • Matis, an Israel-based company that sells My Baby’s Beat, an app which Matis previously claimed could turn any smartphone into a fetal heart monitor, without FDA approval for such use.

With respect to Cardiio (settlement) and Runtastic (settlement), OAG alleged that both companies failed to test the accuracy of their apps under the conditions for which the apps were marketed (e.g., failed to test the product on subjects who had engaged in vigorous exercise, despite marketing the app for that purpose). In addition, the OAG alleged that both companies’ apps claimed to accurately measure heart rate after vigorous exercise while using only a smartphone camera and sensors. OAG also alleged that Cardiio’s marketing practices included false endorsements. For example, Cardiio was charged with making claims that “misleadingly implied that the app was endorsed by MIT,” when Cardiio’s technology was based only on technology licensed from MIT and originally developed at the MIT Media Lab.

With respect to Matis (settlement), OAG alleged that the company deceived customers into using the My Baby’s Beat instead of a fetal heart monitor or Doppler, even though the app was not FDA-approved for such use and the company had “never conducted … a comparison to a fetal heart monitor, Doppler, or any other device that had been scientifically proven to amplify the sound of a fetal heartbeat.”

In each settlement agreement, OAG cites various claims made by the companies on the App or Google Play Stores (including product reviews by consumers), company websites, and other promotional materials. The OAG asserted that the “net impression” conveyed to consumers by these claims was misleading and unsubstantiated. In addition, OAG alleged that each company failed to obtain FDA approval for their apps and noted in the settlements that FDA generally regulates cardiac monitors as Class II devices under 21 C.F.R. § 870.2300 and fetal cardiac monitors as Class II devices under 21 C.F.R. § 884.2600.

Under the settlements, Cardiio and Runtastic each paid $5,000 in civil penalties, and Matis paid $20,000. Further, each company is required to take the following corrective actions:

  1. Amend and correct the deceptive statements made about their apps to make them non-misleading;
  2. Provide additional information about the testing conducted on their apps (e.g. substantiation);
  3. Post clear and prominent disclaimers informing consumers that their apps are not medical devices, are not for medical use, and are not approved or cleared by the FDA; and
  4. Modify their privacy policies to better protect consumers.

With respect to privacy, the companies must now require users’ affirmative consent to their privacy policies for these apps and disclose that they collect and share information that may be personally identifying. This includes users’ GPS location, unique device identifier, and “de-identified” data that third parties may be able to use to re-identify specific users.
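To illustrate the concern behind this disclosure requirement, the following hypothetical sketch (the datasets, names and identifiers are invented) shows how “de-identified” records that retain a stable device identifier can be re-identified by joining them with a third-party dataset that holds the same identifier:

```python
# Hypothetical illustration of the re-identification risk: a "de-identified"
# health dataset that retains a stable device identifier can be joined against
# any other dataset holding that same identifier.

deidentified_health_data = [
    {"device_id": "A1B2-C3D4", "resting_heart_rate": 58},
    {"device_id": "E5F6-G7H8", "resting_heart_rate": 91},
]

# A separate dataset (say, from an ad network) linking devices to people.
third_party_data = {
    "A1B2-C3D4": {"name": "Alice Example", "home_gps": (40.7128, -74.0060)},
}

def reidentify(health_rows, identity_index):
    """Join 'anonymous' health rows back to named individuals by device ID."""
    matches = []
    for row in health_rows:
        person = identity_index.get(row["device_id"])
        if person is not None:
            matches.append({**person, **row})
    return matches

linked = reidentify(deidentified_health_data, third_party_data)
# 'linked' now pairs a named individual with heart-rate data, even though the
# health dataset itself contained no names.
```

The join succeeds even though no names were ever stored in the health dataset, which is why the settlements treat stable device identifiers and location data as potentially identifying.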

In addition, if the companies make any “material change” to their claims concerning the functionality of their apps, the companies must: (1) perform testing to substantiate any such claims; (2) conduct such testing using researchers qualified by training and experience to conduct such testing; and (3) secure and preserve all data, analyses, and documents regarding such testing, and make them available to the OAG upon request.

The OAG explained that the settlements follow a year-long investigation of mobile health applications, which include “more than 165,000 apps that provide general medical advice and education, allow consumers to track their fitness or symptoms based on self-reported data, and promote healthy behavior and wellness.” Of these apps, the OAG appears to be focusing its enforcement on a “narrower subset of apps [that] claim to measure vital signs and other key health indicators using only a smartphone [camera and sensors, without any external device], which can be harmful to consumers if they provide inaccurate or misleading results.”

Referred to as “Health Measurement Apps,” the OAG expressed concern that such apps could “provide false reassurance that a consumer is healthy, which might cause [them] to forgo necessary medical treatment and thereby jeopardize [their] health.” Conversely, Health Measurement Apps “can incorrectly indicate a medical issue, causing a consumer to unnecessarily seek medical treatment – sometimes from a hospital emergency room.”

The OAG’s risk-based approach appears to be consistent with FDA’s risk-based approach for regulating general wellness products, which Congress expressly excluded from the definition of medical “device” in Section 3060 of the recently enacted 21st Century Cures Act (read our Advisory here).

Ultimately, these settlements demonstrate that in addition to traditional regulators such as the FTC and FDA, which have taken a number of recent enforcement actions against mHealth app developers (as we’ve discussed here, here, and here), state consumer protection laws may also be implicated by such products. Accordingly, companies should continue to establish, implement, and execute robust quality or medical/clinical programs to support any research needed to substantiate claims made about mHealth products. And, more importantly, digital health companies should create strong promotional review committees that consist of legal, medical, and regulatory professionals who can properly vet any advertising or promotional claims to mitigate potentially false, misleading, or deceptive claims that could trigger enforcement by regulatory agencies and prosecutors.

We have previously published a post on the potential uses of mobile apps in clinical trials, and the accompanying advantages and limitations. Recent research published in The New England Journal of Medicine (NEJM) confirms the increasing number of innovative studies being conducted through the internet, and discusses the bioethical considerations and technical complexities arising from this use.

Apps used in clinical research

The vast majority of the population, including patients and healthcare professionals, have mobile phones. They are using them in a growing number of ways, and increasingly expect the organizations they interact with to do the same. Clinical research is no exception. As we discussed previously, smartphones are becoming increasingly important as a means of facilitating patient recruitment, reducing costs, disseminating and collecting a wide range of health data, and improving the informed consent process.

A major development in relation to app-based studies occurred in early 2015 with the launch of Apple’s ResearchKit, an open-source software toolkit for the iOS platform that can be used to build apps for smartphone-based medical research. Since then, similar toolkits, such as ResearchStack, have been launched to facilitate app development on the Android operating system.

Several Institutional Review Board-approved study apps were launched shortly after the creation of ResearchKit, including MyHeart Counts (cardiovascular disease), mPower (Parkinson’s disease), GlucoSuccess (type 2 diabetes), Asthma Health (asthma) and Share the Journey (breast cancer).

The NEJM publication refers to data from MyHeart Counts to emphasize particular features of app-based studies. The MyHeart Counts study enrolled more than 10,000 participants in the first 24 hours: a recruitment figure that many traditional study sponsors would regard with envy. While this figure appears, at least in part, to result from expanded access to would-be participants who are not within easy reach of a study site, it may carry with it a degree of selection bias. For example, the consenting study population in MyHeart Counts was predominantly young (median age, 36) and male (82 per cent), reflecting the uneven distribution of smartphone usage and familiarity across the population. The MyHeart Counts completer population (i.e. those who completed a 6-minute “walk test” at the end of seven days) represented only 10 per cent of participants who provided consent. The reasons for low completer rates in app-based studies are not well mapped out, but may relate to participants’ weaker commitment to partake in and contribute to a study in the absence of face-to-face interactions.
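As a back-of-the-envelope illustration of the scale of attrition implied by the figures above (treating the first-24-hours enrollment as the consenting population, which is a simplification for illustration only):

```python
# Back-of-the-envelope sketch using the figures quoted from the NEJM article.
# Treats the first-24-hours enrollment as the consenting population; this is
# a simplification for illustration only.
consented_participants = 10_000  # enrolled in the first 24 hours
completer_rate_pct = 10          # ~10% completed the day-7 six-minute walk test

completers = consented_participants * completer_rate_pct // 100
non_completers = consented_participants - completers
# Roughly 1,000 completers against 9,000 consenting participants who did not finish.
```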

Regulatory and legal challenges for digital consent

The conduct of clinical trials is guided by good clinical practice (GCP) principles, which seek to ensure that:

  • trials are ethically conducted to protect the dignity, privacy and safety of trial subjects; and
  • there exists an adequate procedure to ensure the quality and integrity of the data generated from the trial.

Informed consent is one of the most important ethical principles, and an essential condition both for therapy and research. It is a voluntary agreement to participate in research, but is more than a form that is signed; it is a process during which the subject acquires an understanding of the research and its risks.

The challenges of conducting clinical research using digital technology include, to name a few:

  1. how to ensure that the language used in the informed consent is engaging and user-friendly to promote greater understanding of the nature of the study and the risks relating to participation in the trial;
  2. how to assess capacity and understanding of trial subjects remotely;
  3. how to assess voluntary choice without the benefit of body language and tone; and
  4. how to verify the identity of the person consenting (although this risk may be mitigated in the future through biometric or identity verification tools).

Moreover, there are practical challenges in using these technologies: for example, in assessing patient eligibility, and in monitoring trial subjects to ensure that clinically meaningful data of an acceptable quality are collected and collated during the trial, so as to comply with the GCP principles and support regulatory submissions.

Because of some of these challenges, the NEJM publication suggests that app-based research may be most suitable for low-risk studies. However, it is likely that these risks will be mitigated in the future as the technology develops and researchers and patients become more familiar with its use.

2017 has started with a bang on the data protection front. There have been several developments these past few months, ranging from updates on the new EU General Data Protection Regulation (“GDPR”), coming into force in May 2018, to the establishment of a Swiss-US Privacy Shield. In relation to mHealth specifically, the Code of Conduct for mHealth is still with the Article 29 Working Party (the EU data protection representative body, or “WP29”); such codes of conduct have an elevated status under the GDPR and are likely to play a more significant role going forward. We provide a snapshot of the latest developments below.

Firstly, there have been several steps forward in relation to the GDPR. The UK data protection regulator, the ICO, has been consistent in its support for preparing for the GDPR in the UK following the Brexit vote last year. In January, the ICO provided an update on the GDPR guidance that it will publish for organizations in 2017, and the WP29 adopted an action plan and published guidance on three key areas of the GDPR. MP Matt Hancock (Minister of State for Digital and Culture, with responsibility for data protection) also suggested in December and February that a radical departure from the GDPR provisions in the UK after Brexit is unlikely, despite being careful not to give away the intentions of the UK government.

On the electronic communications front, the European Commission published a draft E-Privacy Regulation in January, which is currently being assessed by the WP29, European Parliament and Council. The new Regulation is designed as an update to the E-Privacy Directive, and will sit alongside the GDPR to govern the protection of personal data in relation to the wide area of electronic communications, whether in the healthcare sector or otherwise (such as those via WhatsApp, Skype, Gmail and Facebook Messenger).

In relation to global personal data transfer mechanisms, in January the Federal Council of Switzerland announced that there would be a new framework for transferring personal data (including health data) from Switzerland to the US: the Swiss-US Privacy Shield. As with the EU-US Privacy Shield, the Swiss-US Privacy Shield has been agreed as a replacement for the Swiss-US Safe Harbor framework. The establishment of the new framework means that Switzerland will apply similar standards to those of the EU for transfers of personal data to the US. Organizations can sign up to the Swiss-US Privacy Shield with the US Department of Commerce from 12 April 2017. Organizations that have already self-certified to the EU-US Privacy Shield will be able to add their certification to the Swiss-US Privacy Shield on the Privacy Shield website from that date.

These developments need to be taken into consideration by organizations that are creating and implementing digital health products, such as mHealth apps, which operate in a space that can bring up several regulatory questions. Further information can be found in our recent advisory.

The National Institute for Health and Care Excellence (NICE) provides guidance to the NHS in England on the clinical and cost effectiveness of selected new and established technologies through its healthcare technology assessment (HTA) program. Using the experience it has gained from this program, NICE intends to develop a system for evaluating digital apps. The pilot phase for this project was set in place in November 2016, and, from March 2017, NICE will publish non-guidance briefings on mobile technology health apps, to be known as “Health App Briefings”. These briefings will set out the evidence for an app, but will not provide a recommendation on its use; this will remain subject to the judgment of the treating physician.

The existing HTA program consists of an initial scoping process, during which NICE defines the specific questions that the HTA will address. NICE then conducts an assessment of the technology, in which an independent academic review group conducts a review of the quality, findings and implications of the available evidence for a technology, followed by an economic evaluation. Finally, an Appraisal Committee considers the report prepared by the academic review group and decides whether to recommend the technology for use in the NHS.

The new program builds on the current Paperless 2020 simplified app assessment process, which was recommended in the Accelerated Access Review Report discussed in a previous post. It has many parallels with the HTA program. In particular, it will be a four-stage process, comprising: (1) the app developer’s self-assessment against defined criteria; (2) a community evaluation involving crowd-sourced feedback from professionals, the public and local commissioners; (3) preparation of a benefit case; and (4) an independent impact evaluation, considering both efficacy and cost-effectiveness.

NICE is currently preparing five Health App Briefings, of which NICE’s Deputy Chief Executive and Director of Health and Social Care, Professor Gillian Leng, has confirmed one will relate to Sleepio, an app shown in placebo-controlled clinical trials to improve sleep through a virtual course of cognitive behavioral therapy.

We understand that future Health App Briefings will also focus on digital tools with applications in mental health and chronic conditions, consistent with NHS England’s plans to improve its mental healthcare provision and, in particular, access to tailored care.

For apps that have evidence to support their use and the claims made about them, the new Innovation and Technology Tariff, announced by the Chief Executive of NHS England in June 2016, could provide a reimbursement route for the app. This will provide a national route to market for a small number of technologies, and will incentivize providers to use digital products with proven health outcomes and economic benefits.

We previously described some of the ways in which life sciences companies are exploring the potential of IBM’s supercomputer, ‘Watson®’, to assist with product development and disease treatment.  Such uses raise important questions about how Watson and other software are treated under medical device regulations.  These questions are particularly important as tech companies find themselves wading into the healthcare arena and may be unaware of the heavily regulated industry they are entering.

The regulation of medical software has been controversial and subject to the vagaries of guidelines and subjective interpretations by the regulatory authorities. We consider below the regulatory minefield and the circumstances in which software is regulated as a medical device in the EU and U.S.

EU

How is software regulated?

In the EU, a medical device means any instrument or other apparatus, including software, intended by the manufacturer to be used for human beings for the purpose of, among other things, diagnosis, prevention, monitoring, treatment or alleviation of disease. There is no general exclusion for software, and software may be regulated as a medical device if it has a medical purpose, meaning it is capable of appreciably restoring, correcting or modifying physiological functions in human beings. A case-by-case assessment is needed, taking account of the product characteristics, mode of use and claims made by the manufacturer. However, the assessment is by no means straightforward for software: unlike with general medical devices, it is not immediately apparent how these parameters apply, given that software does not itself act on the human body to restore, correct or modify bodily functions.

As a result, software used in a healthcare setting is not necessarily a medical device. The issue is whether the software can be used as a tool for the treatment, prevention or diagnosis of a disease or condition. For example, software that calculates anatomical sites of the body, and image-enhancing software intended for diagnostic purposes, are generally viewed as software medical devices because they are used as tools, over and above the healthcare professional’s clinical judgment, to assist clinical diagnosis and treatment. By contrast, software used merely to convey or review patient data is generally not a medical device.

What about Watson?

The main benefit of IBM’s cognitive computing software is its ability to analyze large amounts of data to develop knowledge about a disease or condition, rather than treatment options for an individual patient. Currently, its uses are largely limited to research and development. On the basis of these uses, the software may not be considered as having the medical purpose necessary for it to be classified as a medical device.

However, uses of the software that aim to enhance clinical diagnosis or treatment of a condition may potentially alter the regulatory status, especially if the function of the software goes beyond data capture and communication. Similarly, some of the new partnerships recently announced, described in our previous post, are aimed at developing personalised management solutions, or mobile coaching systems for patients. These may be viewed as having a medical purpose in view of the health-related information they acquire to provide informed feedback to the patient on self-help, or decision-making relating to the patient’s treatment plan. As uses for Watson multiply and become more involved in treatment decisions, such a change in regulatory status becomes increasingly likely.

Will there be any change under the new Medical Device Regulations?

The EU legislative proposal for new medical device Regulations, which has reached broad agreement in the EU legislature but has not yet been adopted, contains additional provisions that specifically address software medical devices. Of particular relevance, software with a medical purpose of “prediction and prognosis” will be considered as coming within the scope of the Regulations. This means that software and apps that were previously excluded from regulation may in the future be “up-classified” and become subject to regulation as medical devices. Through a number of initiatives, the EU institutions have recognized the importance of mHealth in the healthcare setting, and are seeking to ensure it is properly regulated as its use increases.

U.S.

How is software regulated?

In the United States, the Food and Drug Administration (FDA) has regulatory authority over medical devices. FDA considers a medical device to be an instrument or other apparatus, component, or accessory that is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease in man or other animals, or that is intended to affect the structure or function of the body of man or other animals, but which does not depend on being metabolized (as a drug does) to achieve that purpose. FDA has issued a number of guidance documents to assist in identifying when software or mobile apps are considered to be medical devices.

One type of software on which FDA has not issued guidance is Clinical Decision Support Software (CDSS). CDSS is software that uses patient information to assist providers in making diagnostic or treatment decisions. Until recently, FDA approached CDSS in a fashion similar to its framework for mobile apps. In other words, CDSS was viewed as existing on a continuum from being a Class II regulated medical device, to being subject to FDA’s enforcement discretion, to not being considered a medical device at all. On December 13, 2016, however, the 21st Century Cures Act was signed into law, clarifying the scope of FDA’s regulatory jurisdiction over stand-alone software products used in healthcare.

The 21st Century Cures Act contains a provision – Section 3060 – that explicitly exempts certain types of software from the definition of a medical device. As relevant for CDSS, the law excludes from the definition of a “device” software intended for the following functions (unless the software is intended to “acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system”):

  1. Displaying, analyzing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines);
  2. Supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
  3. Enabling health care professionals to independently review the basis for such recommendations so that the software is not primarily relied upon to make a clinical diagnosis or treatment decision regarding an individual patient.

Thus, the Act generally excludes most CDSS from FDA jurisdiction. However, it is worth noting that FDA may bring CDSS back under its jurisdiction if it makes certain findings regarding: (1) the likelihood and severity of patient harm if the software does not perform as intended; (2) the extent to which the software is intended to support the clinical judgment of a health care professional; (3) whether there is a reasonable opportunity for a health care professional to review the basis of the information or treatment recommendation; and (4) the intended user and use environment.
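The statutory structure described above lends itself to a simple decision sketch. The Python snippet below is purely illustrative: the class, field names, and the reduction of the statutory text to booleans are our own simplifications, not FDA's or the statute's, and it is in no way a compliance determination.

```python
from dataclasses import dataclass

@dataclass
class SoftwareFunction:
    """Hypothetical description of a stand-alone software function."""
    processes_images_or_ivd_signals: bool    # the statutory carve-out
    displays_or_analyzes_medical_info: bool  # function (1)
    supports_recommendations: bool           # function (2)
    basis_independently_reviewable: bool     # function (3)

def likely_excluded_from_device_definition(sw: SoftwareFunction) -> bool:
    """Sketch of the Section 3060 CDSS exclusion structure (illustration only)."""
    # Software that acquires, processes, or analyzes medical images or
    # signals from IVDs/signal acquisition systems remains a device.
    if sw.processes_images_or_ivd_signals:
        return False
    # Otherwise the exclusion covers software performing functions (1)-(3):
    # recommendations must remain reviewable so the professional does not
    # rely primarily on the software for the clinical decision.
    return ((sw.displays_or_analyzes_medical_info
             or sw.supports_recommendations)
            and sw.basis_independently_reviewable)

# A transparent recommendation tool is likely excluded; an opaque one is not.
transparent = SoftwareFunction(False, True, True, True)
opaque = SoftwareFunction(False, False, True, False)
print(likely_excluded_from_device_definition(transparent))  # True
print(likely_excluded_from_device_definition(opaque))       # False
```

The key asymmetry the sketch captures is the last criterion: however useful the recommendations, the exclusion turns on whether the professional can independently review their basis.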

What About Watson?

Based on this regulatory framework, IBM’s Watson would generally not be regulated as a medical device if simply used as a tool to assist physician review of medical data. In many uses, Watson still depends on human intervention and therefore does not make independent patient-specific diagnoses or treatment decisions. Importantly, public statements about Watson also indicate that it is intended to be used simply as a tool by physicians and that physicians are not intended to rely primarily on its recommendations.

As such, in many applications, Watson is likely to be the kind of CDSS statutorily excluded from the definition of a medical device. However, as Watson and other forms of artificial intelligence advance and become capable of making or altering medical diagnoses or treatment decisions with little input or oversight from physicians, or transparency as to underlying assumptions and algorithms, these technologies will fall outside of the exclusion. As the use of such forms of artificial intelligence becomes more central to clinical decision-making, it will be interesting to see whether FDA attempts to take a more active role in its regulation, or if other agencies — such as the U.S. Federal Trade Commission — step up their scrutiny of such systems. Additionally, state laws may be implicated with regard to how such technology is licensed or regulated under state public health, consumer protection, and medical practice licensure requirements.

Interoperability has been identified as one of the greatest challenges in healthcare IT. It is defined as the ability of organizations to share information and knowledge through the exchange of data between their respective IT systems, and it underpins fruitful collaboration between different healthcare environments by electronic means.

With this in mind, the eHealth Interoperability Conformity Assessment for Europe (EURO-CAS) project launched on 26 January 2017. With a budget of €1 million (approximately $1.1m), it is one of the projects being funded under the European Union’s (EU) Horizon 2020 research and innovation program.  The launch of this project shows the EU’s recognition that eHealth has become increasingly important within healthcare, and that the use of such technologies needs to be streamlined.

The aims of the project are two-fold:

  1. to develop models, tools and processes to enable an assessment of the conformity of eHealth products with international, regional and national standards; and
  2. to provide a method for manufacturers of conforming technology to demonstrate that conformity to the public. As set out in previous posts, a key concern with the proliferation of apps and eHealth products is how to demonstrate to patients and payers that they are safe, effective, and protect patients’ privacy. The hope is that the project will address these concerns and promote the adoption and take-up of eHealth products, and the use of the various standards being developed.
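As a rough illustration of the first aim, an automated conformity assessment might check whether an eHealth message carries the fields an interoperability profile requires. The profile, field names, and message format in the sketch below are invented for illustration; they are not part of the actual CAS scheme or any published standard.

```python
# Hypothetical interoperability profile: the required fields are invented
# for illustration and do not reflect the real CAS scheme or tooling.
REQUIRED_FIELDS = {"patient_id", "document_type", "issued", "author_org"}

def check_conformity(message: dict) -> list:
    """Return a list of conformity findings (an empty list means 'conforms')."""
    findings = ["missing required field: " + field
                for field in sorted(REQUIRED_FIELDS - message.keys())]
    if "issued" in message and not isinstance(message["issued"], str):
        findings.append("field 'issued' must be an ISO 8601 date string")
    return findings

# A message missing one required field yields one finding.
msg = {"patient_id": "p-123", "document_type": "patient-summary",
       "issued": "2017-01-26"}
print(check_conformity(msg))  # ['missing required field: author_org']
```

Real conformance test tooling of this kind (profile definitions plus automated validators) is what allows a manufacturer to demonstrate conformity to the public in a repeatable way, which is the project's second aim.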

These aims will be achieved through the development of the ‘CAS’ scheme by a consortium led by IHE-Europe, and consisting of EU member state representatives, experts and international associations. Six ‘work packages’ have been set up to deliver the project, each focusing on discrete aspects of the project:

Fig 1: The six EURO-CAS work packages (https://www.euro-cas.eu/work-packages)

The project’s key deliverables (and corresponding timelines) have also been outlined, with the final scheme to be presented to the public in November 2018.

The overarching plan behind EURO-CAS is to pave the way for more eHealth interoperability in the EU. The project will build on the findings of a series of EU-funded projects concerning eHealth interoperability over the past years. The scheme also aims for consistency with the ‘Refined eHealth European Interoperability Framework’, which identified the importance of interoperability for eHealth to be truly useful in healthcare, and was endorsed by representatives from all EU member states in 2015.

EURO-CAS states that it is “committed to transparency and openness”. Interested parties are invited to take part in project events, to engage through the project’s Twitter channel (@EURO_CAS) or LinkedIn group, or to provide feedback on the deliverables that will be submitted for public consultation in due course.