It has been almost a year since the European Commission published a final draft of a Code of Conduct on privacy for mHealth mobile applications (the “Code”). Our previous post summarizes the draft and its application to app developers. However, we noted that the Article 29 Working Party (the “WP29”), an independent advisory body composed of representatives from all EU Data Protection Authorities, had to comment on the draft before it could be formally adopted. In a letter dated 10 April 2017, the WP29 finally set out its comments on the draft and identified areas for improvement.

Comments on the draft

The letter begins by setting out the WP29’s expectations for the Code:

  • The Code needs to be compliant with the Data Protection Directive (Directive 95/46/EC, the “Directive”) and its national implementing legislation.
  • The Code must be of adequate quality.
  • The Code must provide sufficient added value to the Directive and other applicable data protection legislation.
  • The Code should continue to be relevant following the transition to the General Data Protection Regulation (Regulation (EU) 2016/679, the “GDPR”).

The WP29 is quite critical of the draft Code, and identifies a number of ways that the draft fails to add value to existing data protection legislation. The WP29’s general comments are that:

  • The Code does not elaborate sufficiently on the relationship between the Directive and national legislation implementing the Directive in individual EU Member States.
  • While the Code’s stated aim is to facilitate data protection compliance and not to address other compliance issues, it should nonetheless take into account other legislation that bears on that prime objective (e.g., the provisions on cookies in the ePrivacy Directive (Directive 2002/58/EC)).
  • The Code needs to be clearer on the roles of the parties involved in the processing of personal data (i.e., whether the app developer is a data controller, data processor or both).
  • The Code should be re-evaluated in light of the relevant provisions of the GDPR to ensure that the content of the Code is consistent with the definitions given in both the Directive and the GDPR.

Specific comments

The WP29 also sets out more specific observations on areas in which the Code requires improvement. In summary:

  • Governance and monitoring model: It was not clear whether the model detailed in the Code would be compliant with some of the new requirements of the GDPR. In addition, further information was needed on: (1) the composition of the Assembly and how membership was to be managed; (2) how the monitoring body would be accredited; and (3) the financial contributions required from different members (the WP29 was specifically concerned with ensuring that fees did not preclude wide participation).
  • Practical guidelines for data controllers: The Code should make clear that consent to personal data processing should fulfil all requirements of the GDPR and the Directive, and guidance in relation to obtaining consent to the processing of children’s data should be more thorough. At the same time, the Code should acknowledge that there are other conditions that render data processing fair and lawful, and refer explicitly to them. It should also identify safeguards to raise awareness of the possible risks associated with the use of mHealth apps.
  • Data protection principles: Whilst the “practical guidelines for data controllers” referred to the necessity of safeguards for data subjects, they did not mention that these safeguards should be “appropriate”, in line with data protection principles. Further, the Code should refer to all of the data protection principles, or explain why they are not relevant.
  • Information, transparency and data subjects’ rights: The Code should require developers to make more information about the role of the data controller available to end users. It did not provide sufficient information on how data subjects could exercise their rights, or how data controllers and data processors should meet their obligations. The Code should refer to the relevant provisions of the GDPR in relation to transfers of personal data to third countries. The legal basis and requirements for processing data for marketing purposes should also be addressed, with reference to the relevant provisions of the GDPR.
  • Security: The Code should include more details and relevant examples on how app developers can integrate “privacy by design” and “privacy by default” into their development processes, as well as being attentive to legal restrictions relating to retention periods. Specific provisions in relation to data protection breaches should be included in line with the definitions of personal data contained in the Directive and the GDPR.

The draft will now need to be reconsidered by the drafting group to take these comments into account. The WP29 specifically states: “When revising the draft, please consider carefully what “added value” the code of conduct provides as a whole and, in particular, what specific examples, practical solutions or recommendations you could draw from discussions with stakeholders, ...” In the meantime, given the shortage of guidance in this area, developers may choose to follow the Code and the WP29’s recommendations in order to conform to best practice.

The U.S. Food and Drug Administration (FDA) issued a Warning Letter on April 12, 2017, requiring St. Jude Medical to explain how it plans to correct and prevent the cybersecurity concerns identified in its Fortify, Unify and Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators, and the Merlin@home monitor.

The Warning Letter follows a January 2017 FDA Safety Communication on St. Jude Medical’s implantable cardiac devices and the Merlin@home transmitter. The safety alert identified that such devices “contain configurable embedded computer systems that can be vulnerable to cybersecurity intrusions and exploits. As medical devices become increasingly interconnected via the Internet, hospital networks, other medical devices, and smartphones, there is an increased risk of exploitation of cybersecurity vulnerabilities, some of which could affect how a medical device operates.” FDA conducted an assessment of St. Jude Medical’s software patch for the Merlin@home Transmitter and determined that “the health benefits to patients from continued use of the device outweigh the cybersecurity risks.” Consequently, FDA’s safety alert provides recommendations to healthcare professionals, patients and caregivers to “reduce the risk of patient harm due to cybersecurity vulnerabilities.”

The following month, FDA conducted a 10-day inspection at St. Jude Medical’s Sylmar, CA facility and concluded that St. Jude Medical had not adequately addressed the cybersecurity concerns. Notably, FDA observed failures related to corrective and preventive actions (CAPA), controls, design verification and design validation.

CAPA

In one instance, FDA found that St. Jude Medical based its risk evaluation on “confirmed” defect cases without considering the potential for “unconfirmed” defect cases, and therefore underestimated the occurrence of a hazardous situation related to premature battery depletion. Moreover, FDA found that St. Jude Medical failed to follow its CAPA procedures when evaluating a third-party cybersecurity risk assessment report. Finally, FDA found that St. Jude Medical’s management and medical advisory boards did not receive information on the potential for “unconfirmed” defect cases and were falsely informed that no death had resulted from the premature battery depletion issue.

In each instance, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the response inadequate.

Control Procedures

On October 11, 2016, St. Jude Medical initiated a recall of its Fortify, Unify and Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators due to premature battery depletion. Despite the recall, FDA noted that some devices were still distributed and implanted. Again, FDA was unable to determine whether St. Jude Medical’s corrective actions were sufficient because the company failed to provide evidence of implementation.

Design Verification and Validation

In addition, FDA found that St. Jude Medical failed to ensure that “design verification shall confirm that the design output meets the design input requirements,” and failed to accurately incorporate the findings of a third-party assessment into updated cybersecurity risk assessments for high-voltage and peripheral devices such as the Merlin@home monitor. Specifically, the Merlin@home monitor’s testing procedures did not require full verification to ensure that the network ports would not open with an unauthorized interface. Further, the cybersecurity risk assessments failed to accurately incorporate the third-party report’s findings into their security risk ratings. Also, even though the same reports identified the hardcoded universal unlock code as an exploitable hazard for the high-voltage devices, St. Jude Medical failed to estimate and evaluate this risk.

For all violations, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the responses inadequate. FDA has given St. Jude Medical 15 days to explain how the company plans to act on the premature battery depletion issue (despite related injuries and one death), as well as the improper focus on “confirmed” cases and the distribution and implantation of recalled devices. FDA warns that St. Jude Medical could face additional regulatory action if the matters are not resolved in a timely manner.

The Warning Letter, together with the January 2017 Safety Communication and a December 2016 Guidance on Postmarket Management of Cybersecurity in Medical Devices (which we have previously summarized here and here), demonstrates FDA’s continued scrutiny of the cybersecurity of medical devices. It appears that FDA is trying to communicate the need for device manufacturers to incorporate cybersecurity checkpoints throughout a product’s lifecycle to prevent patient harm and potential regulatory action. Not a bad idea for an increasingly tech-savvy world.

Connected health involving health technology, digital media and mobile devices opens up new opportunities to improve the quality and outcomes of both health and social care. Such transformational innovation, however, may also bring about significant regulatory compliance risks.

On 3 March 2017, four UK healthcare regulators, including the Care Quality Commission (“CQC”), made a joint statement reminding providers of online clinical and pharmaceutical services, and associated healthcare professionals, that they should follow professional guidelines to ensure such services are provided safely and effectively.

We have written an in-depth assessment on the ongoing regulatory activities in the UK, available here, which was published in Digital Health Legal on 20 April 2017.

As indicated in the joint statement, CQC inspections found that certain online services were too ready to sell prescription-only medicines without undertaking proper checks or verifying the patient’s individual circumstances, raising significant concerns about patient safety. The view taken by the regulators is that the same safeguards should be put in place for patients whether they attend a physical consultation with their GP (primary care physician) or seek medical advice and treatment online.

UK domestic law already provides that online providers must assess the risks to people’s health and safety during any care or treatment and make sure that staff have the qualifications, competence, skills and experience to keep people safe. The CQC has the power to bring a criminal prosecution if a failure to meet this responsibility results in avoidable harm to a person using the service or if a person using the service is exposed to significant risk of harm. Unlike other enforcement regimes, the CQC does not have to serve a warning notice before prosecution. The CQC can also pursue criminal sanctions where there have been fundamental breaches of standards of quality and safety and can enforce the standards using civil powers to impose conditions, suspend or cancel a registration to provide the online services.

In March 2017, the CQC published guidance clarifying its existing primary care guidance by setting out how it proposes to regulate digital healthcare providers in primary care. The guidance provides that the CQC will evaluate the following key lines of inquiry (“KLOEs”): whether services are safe, effective, caring, responsive to people’s needs and well-led. Each KLOE is accompanied by a number of questions that inspectors will consider as part of the assessment, which are characterised by the CQC as ‘prompts’.

The European Medicines Agency (“EMA”) recently set up a task force, along with the national competent authorities in the EEA, to analyze how medicines regulators in the EEA can use big data to better develop medicines for humans and animals. This follows a workshop in November last year to identify opportunities for big data in medicines development and regulation, and to address the challenges of their exploitation. “Big data” in the healthcare sector is the sum of many parts, which include the records of a multitude of patients, clinical trial data, adverse reaction reports, social media commentary and app records. Several projects have been set up across the EU to aggregate and analyze such data, which are explored by the European Commission in its December 2016 Study on Big Data in Public Health, Telemedicine and Healthcare. The EMA has recognized that “the vast volume of data has the potential to contribute significantly to the way the benefits and risks of medicines are assessed over their entire lifecycle.”

So who will make up the new EMA task force and what are its objectives?

The task force comprises staff from several medicines regulatory agencies in the EEA and will be chaired by the Danish Medicines Agency. Its first actions will be carried out over the next 18 months and include:

  • Mapping sources and characteristics of big data.
  • Exploring the potential applicability and impact of big data on medicines regulation.
  • Developing recommendations on necessary changes to legislation, regulatory guidelines or data security provisions.
  • Creating a roadmap for the development of big data capabilities for the evaluation of applications for marketing authorizations or clinical trials in the national competent authorities.
  • Collaborating with other regulatory authorities and partners outside the EEA to consider their insights on big data initiatives.

News of the task force comes on the back of the update that the UK data protection regulator, the Information Commissioner’s Office (“ICO”), made early last month to its 2014 publication on big data, artificial intelligence, machine learning and data protection. The publication pulls out the distinctive considerations of the use of big data from a data protection perspective. Such considerations can include whether the collection of personal data goes above and beyond what is needed for specific processing activities, whether processing activities are made clear to individuals, and how new types of data can be used. In light of the increasingly imminent General Data Protection Regulation, as discussed in our previous post, the ICO includes practical guidance for organizations to process big data in a way that is compliant with the new rules. Healthcare and other organizations looking to process big data will need to ensure that they carry out suitable privacy impact assessments and implement a range of protective measures, such as auditable machine learning algorithms, anonymization and comprehensive privacy policies. Guidance on profiling is also likely to follow.

We’ll be keeping an eye on the work of the new task force, as well as any further practical guidance that comes from data protection regulatory agencies. It is clear that organizations will need to get the balance right between potentially hugely speeding up research and innovation by using big data, and adhering to the regulatory obligations that are attached.

Join us on Tuesday, April 18, 2017 from 12:00–1:00 pm ET for a webinar that will address, among others, the following issues:

  • Overview of EU security rules
  • Securing the IoT
  • Product liability and other potential claims
  • Health care reimbursement and fraud issues
  • Cyber liability insurance

Click here to register.

1 hour CA and NY MCLE credit is pending. CLE credit for other jurisdictions is also pending.

The European Commission has published a report on the cost-effectiveness of standards-driven eHealth interoperability: the exchange of data between IT systems. This is one of a number of parallel initiatives from the Commission to advance eHealth interoperability, such as the EURO-CAS project launched in January this year, and is an essential part of the EU Digital Agenda.

The ultimate goal of the Commission’s efforts on eStandards for eHealth interoperability is to join up with healthcare stakeholders in Europe, and globally, to build consensus on eHealth standards, accelerate knowledge-sharing and promote wider adoption of standards.

The eStandards project is working to finalize a roadmap and associated evidence base, a white paper on the need for formal standards, and two guidelines addressing how to work with: (a) clinical content in profiles, and (b) competing standards in large-scale eHealth deployments. An initial roadmap has already been prepared. The final roadmap aims to describe the actions to be taken by standards development and profiling organizations (SDOs), policymakers in eHealth, and national competence centers, to ensure high availability and use of general and personal health information at the point of care, as well as for biomedical, clinical, public health, and health policy research.

The objective of this discrete cost-effectiveness study is to support the preparation of the final roadmap. The study contacted three categories of stakeholders: (i) Centers of Competence; (ii) Vendors (mostly small and medium-sized companies) on the European market; and (iii) Standards Organizations (mostly international). It has shown that stakeholders use the same tools in different projects across Europe, which should facilitate communication of best practices between them.

Its main findings are that:

  • All stakeholders consider that using standards and standards-driven tools contributes to better-quality products.
  • Vendors and Centers of Competence report similar benefits resulting from project efficiencies (e.g., the continuous improvement of specifications and their effectiveness).
  • In terms of economic results, the study shows clearly that using and reusing existing tools and content saves effort and time, as well as money. It standardizes methods of working and increases the professionalism of the project team. However, due to the complexity of the eHealth domain, training is one of the major challenges for increasing the adoption of profiles and standards.
  • The study also indicates that standards are available, but the challenge is their adoption.

The study proposes a few practical recommendations for promoting the use of the standards-driven tools:

  1. Develop a strategy to communicate and disseminate the use of standards-driven tools, showing evidence of their positive impact in the development of projects and products;
  2. Develop simple indicators and/or refine the indicators used in this study in order to quantify the progress of adoption of standards-driven tools;
  3. Identify the weaknesses and limitations associated with deploying standards and tools;
  4. Develop conformity assessments and testing platforms for better adoption of the standards.

These initiatives complement the new guidance published on 23 March by the Commission for digital public services in its new European Interoperability Framework, which is meant to help European public administrations to coordinate their digitalization efforts when delivering public services.

Last week, the New York Office of the Attorney General (“OAG”) announced settlements with three mobile health application developers to resolve allegations that the companies made misleading claims and engaged in “irresponsible privacy practices.” The three companies that entered into settlements are:

  • Cardiio, a U.S.-based company that sells Cardiio, an app that claims to measure heart rate;
  • Runtastic, an Austria-based company that sells Runtastic, an app that purports to measure heart rate and cardiovascular performance under stress (downloaded approximately 1 million times); and
  • Matis, an Israel-based company that sells My Baby’s Beat, an app which Matis previously claimed could turn any smartphone into a fetal heart monitor, without FDA approval for such use.

With respect to Cardiio (settlement) and Runtastic (settlement), OAG alleged that both companies failed to test the accuracy of their apps under the conditions for which the apps were marketed (e.g., failing to test the product on subjects who had engaged in vigorous exercise, despite marketing the app for that purpose). In addition, the OAG alleged that both companies’ apps claimed to accurately measure heart rate after vigorous exercise while using only a smartphone camera and sensors. OAG also alleged that Cardiio’s marketing practices included false endorsements. For example, Cardiio was charged with making claims that “misleadingly implied that the app was endorsed by MIT,” when Cardiio’s technology was based only on technology licensed from MIT and originally developed at the MIT Media Lab.

With respect to Matis (settlement), OAG alleged that the company deceived customers into using My Baby’s Beat instead of a fetal heart monitor or Doppler, even though the app was not FDA-approved for such use and the company had “never conducted … a comparison to a fetal heart monitor, Doppler, or any other device that had been scientifically proven to amplify the sound of a fetal heartbeat.”

In each settlement agreement, OAG cites various claims made by the companies on the App or Google Play Stores (including product reviews by consumers), company websites, and other promotional materials. The OAG asserted that the “net impression” conveyed to consumers by these claims was misleading and unsubstantiated. In addition, OAG alleged that each company failed to obtain FDA approval for their apps and noted in the settlements that FDA generally regulates cardiac monitors as Class II devices under 21 C.F.R. § 870.2300 and fetal cardiac monitors as Class II devices under 21 C.F.R. § 884.2600.

Under the settlements, Cardiio and Runtastic each paid $5,000 in civil penalties, and Matis paid $20,000. Further, each company is required to take the following corrective actions:

  1. Amend and correct the deceptive statements made about their apps to make them non-misleading;
  2. Provide additional information about the testing conducted on their apps (e.g. substantiation);
  3. Post clear and prominent disclaimers informing consumers that their apps are not medical devices, are not for medical use, and are not approved or cleared by the FDA; and
  4. Modify their privacy policies to better protect consumers.

With respect to privacy, the companies must now require affirmative consent to their privacy policies for these apps and disclose that they collect and share information that may be personally identifying. This includes users’ GPS location, unique device identifiers, and “de-identified” data that third parties may be able to use to re-identify specific users.

In addition, if the companies make any “material change” to their claims concerning the functionality of their apps, the companies must: (1) perform testing to substantiate any such claims; (2) conduct such testing using researchers qualified by training and experience to conduct such testing; and (3) secure and preserve all data, analyses, and documents regarding such testing, and make them available to the OAG upon request.

The OAG explained that the settlements follow a year-long investigation of mobile health applications, which include “more than 165,000 apps that provide general medical advice and education, allow consumers to track their fitness or symptoms based on self-reported data, and promote healthy behavior and wellness.” Of these apps, the OAG appears to be focusing its enforcement on a “narrower subset of apps [that] claim to measure vital signs and other key health indicators using only a smartphone [camera and sensors, without any external device], which can be harmful to consumers if they provide inaccurate or misleading results.”

The OAG expressed concern that such apps, referred to as “Health Measurement Apps,” could “provide false reassurance that a consumer is healthy, which might cause [them] to forgo necessary medical treatment and thereby jeopardize [their] health.” Conversely, Health Measurement Apps “can incorrectly indicate a medical issue, causing a consumer to unnecessarily seek medical treatment – sometimes from a hospital emergency room.”

The OAG’s risk-based approach appears to be consistent with FDA’s risk-based approach for regulating general wellness products, which Congress expressly excluded from the definition of medical “device” in Section 3060 of the recently enacted 21st Century Cures Act (read our Advisory here).

Ultimately, these settlements demonstrate that in addition to traditional regulators such as the FTC and FDA, which have taken a number of recent enforcement actions against mHealth app developers (as we’ve discussed here, here, and here), state consumer protection laws may also be implicated by such products. Accordingly, companies should continue to establish, implement, and execute robust quality or medical/clinical programs to support any research needed to substantiate claims made about mHealth products. And, more importantly, digital health companies should create strong promotional review committees that consist of legal, medical, and regulatory professionals who can properly vet any advertising or promotional claims to mitigate potentially false, misleading, or deceptive claims that could trigger enforcement by regulatory agencies and prosecutors.

We have previously published a post on the potential uses of mobile apps in clinical trials, and the accompanying advantages and limitations. Recent research published in The New England Journal of Medicine (NEJM) confirms the increasing number of innovative studies being conducted through the internet, and discusses the bioethical considerations and technical complexities arising from this use.

Apps used in clinical research

The vast majority of the population, including patients and healthcare professionals, have mobile phones. They are using them in a growing number of ways, and increasingly expect the organizations they interact with to do the same. Clinical research is no exception. As we discussed previously, smartphones are becoming increasingly important as a means of facilitating patient recruitment, reducing costs, disseminating and collecting a wide range of health data, and improving the informed consent process.

A major development in relation to app-based studies occurred in early 2015 with the launch of Apple’s ResearchKit, an open-source software toolkit for the iOS platform that can be used to build apps for smartphone-based medical research. Since then, similar toolkits, such as ResearchStack, have been launched to facilitate app development on the Android operating system.

Several Institutional Review Board-approved study apps were launched shortly after the creation of ResearchKit, including MyHeart Counts (cardiovascular disease), mPower (Parkinson’s disease), GlucoSuccess (type 2 diabetes), Asthma Health (asthma) and Share the Journey (breast cancer).

The NEJM publication refers to data from MyHeart Counts to emphasize particular features of app-based studies. The MyHeart Counts study enrolled more than 10,000 participants in the first 24 hours: a recruitment figure that many traditional study sponsors would regard with envy. While this figure appears, at least in part, to result from expanded access to would-be participants who are not within easy reach of a study site, it may carry with it a degree of selection bias. For example, the consenting study population in MyHeart Counts was predominantly young (median age, 36) and male (82 per cent), reflecting the uneven distribution of smartphone usage and familiarity across the general population. The MyHeart Counts completer population (i.e. those who completed a 6-minute “walk test” at the end of seven days) represented only 10 per cent of participants who provided consent. The reasons for low completer rates in app-based studies are not well understood, but may relate to participants’ reduced commitment to a study in the absence of face-to-face interactions.

Regulatory and legal challenges for digital consent

Conduct of clinical trials is guided by good clinical practice (GCP) principles, which seek to ensure that:

  • trials are ethically conducted to protect the dignity, privacy and safety of trial subjects; and
  • there exists an adequate procedure to ensure the quality and integrity of the data generated from the trial.

Informed consent is one of the most important ethical principles, and an essential condition for both therapy and research. It is a voluntary agreement to participate in research, but it is more than a form to be signed; it is a process during which the subject acquires an understanding of the research and its risks.

The challenges of conducting clinical research using digital technology are, to name a few:

  1. how to ensure that the language used in the informed consent is engaging and user-friendly to promote greater understanding of the nature of the study and the risks relating to participation in the trial;
  2. how to assess capacity and understanding of trial subjects remotely;
  3. how to assess voluntary choice without the benefit of body language and tone; and
  4. how to verify the identity of the person consenting (although this risk may be mitigated in the future through biometric or identity verification tools).

Moreover, there are practical challenges with using these technologies: for example, assessing patient eligibility, and monitoring trial subjects to ensure that clinically meaningful data of acceptable quality are collected and collated during the trial, in compliance with GCP principles and to support regulatory submissions.

Because of some of these challenges, the NEJM publication suggests that app-based research may be most suitable for low-risk studies. However, it is likely that these risks will be mitigated in the future as the technology develops and researchers and patients become more familiar with its use.

2017 has started with a bang on the data protection front. There have been several developments these past few months, ranging from updates on the new EU General Data Protection Regulation (“GDPR”), which will apply from May 2018, to the establishment of a Swiss-US Privacy Shield. In relation to mHealth specifically, the Code of Conduct for mHealth is still with the Article 29 Working Party (the EU data protection representative body, or “WP29”) – such codes of conduct have a raised status under the GDPR and are likely to play a more significant role going forward. We provide a snapshot of the latest developments below.

Firstly, there have been several steps forward in relation to the GDPR. The UK data protection regulator, the Information Commissioner’s Office (“ICO”), has been consistent in its support for preparations for the GDPR in the UK following the Brexit vote last year. In January, the ICO provided an update on the GDPR guidance that it will be publishing for organizations in 2017, and the WP29 adopted an action plan and published guidance on three key areas of the GDPR. MP Matt Hancock (Minister of State for Digital and Culture, with responsibility for data protection) also suggested in December and February that a radical departure from the GDPR provisions in the UK after Brexit is unlikely, while being careful not to give away the intentions of the UK government.

On the electronic communications front, the European Commission published a draft E-Privacy Regulation in January, which is currently being assessed by the WP29, the European Parliament and the Council. The new Regulation is designed as an update to the E-Privacy Directive, and will sit alongside the GDPR to govern the protection of personal data across the broad field of electronic communications, whether in the healthcare sector or otherwise (including communications via WhatsApp, Skype, Gmail and Facebook Messenger).

In relation to global personal data transfer mechanisms, in January the Federal Council of Switzerland announced that there would be a new framework for transferring personal data (including health data) from Switzerland to the US: the Swiss-US Privacy Shield. As with the EU-US Privacy Shield, the Swiss-US Privacy Shield has been agreed as a replacement for the Swiss-US Safe Harbor framework. The establishment of the new Swiss-US Privacy Shield means that Switzerland will apply standards for transfers of personal data to the US similar to those applied by the EU. Organizations can sign up to the Swiss-US Privacy Shield with the US Department of Commerce from 12 April 2017. Organizations that have already self-certified to the EU-US Privacy Shield will be able to add their certification to the Swiss-US Privacy Shield on the Privacy Shield website from the same date.

These developments need to be taken into consideration by organizations that are creating and implementing digital health products, such as mHealth apps, which operate in a space that can bring up several regulatory questions. Further information can be found in our recent advisory.

The National Institute for Health and Care Excellence (NICE) provides guidance to the NHS in England on the clinical and cost effectiveness of selected new and established technologies through its healthcare technology assessment (HTA) program. Using the experience it has gained from this program, NICE intends to develop a system for evaluating digital apps. The pilot phase for this project was put in place in November 2016, and, from March 2017, NICE will publish non-guidance briefings on mobile technology health apps, to be known as “Health App Briefings”. These briefings will set out the evidence for an app, but will not provide a recommendation on its use; this will remain subject to the judgment of the treating physician.

The existing HTA program consists of an initial scoping process, during which NICE defines the specific questions that the HTA will address. NICE then conducts an assessment of the technology, in which an independent academic review group conducts a review of the quality, findings and implications of the available evidence for a technology, followed by an economic evaluation. Finally, an Appraisal Committee considers the report prepared by the academic review group and decides whether to recommend the technology for use in the NHS.

The new program builds on the current Paperless 2020 simplified app assessment process, which was recommended in the Accelerated Access Review Report discussed in a previous post. It has many parallels with the HTA program. In particular, it will be a four-stage process, comprising: (1) the app developer’s self-assessment against defined criteria; (2) a community evaluation involving crowd-sourced feedback from professionals, the public and local commissioners; (3) preparation of a benefit case; and (4) an independent impact evaluation, considering both efficacy and cost-effectiveness.

NICE is currently preparing five Health App Briefings, of which NICE’s Deputy Chief Executive and Director of Health and Social Care, Professor Gillian Leng, has confirmed one will relate to Sleepio, an app shown in placebo-controlled clinical trials to improve sleep through a virtual course of cognitive behavioral therapy.

We understand that future Health App Briefings will also focus on digital tools with applications in mental health and chronic conditions, consistent with NHS England’s plans to improve its mental healthcare provision and, in particular, access to tailored care.

For apps that have evidence to support their use and the claims made about them, the new Innovation and Technology Tariff, announced by the Chief Executive of NHS England in June 2016, could provide a reimbursement route for the app. This will provide a national route to market for a small number of technologies, and will incentivize providers to use digital products with proven health outcomes and economic benefits.