Privacy and Cybersecurity

It has been almost a year since the European Commission published a final draft of a Code of Conduct on privacy for mHealth mobile applications (the “Code”). Our previous post summarizes the draft and its application to app developers. However, we noted that the Article 29 Working Party (the “WP29”), an independent advisory body comprised of representatives from all EU Data Protection Authorities, had to comment on the draft before it could be formally adopted. In a letter dated 10 April 2017, the WP29 finally set out its comments on the draft and identified areas for improvement.

Comments on the draft

The letter begins by setting out the WP29’s expectations for the Code:

  • The Code needs to be compliant with the Data Protection Directive (Directive 95/46/EC, the “Directive”) and its national implementing legislation.
  • The Code must be of adequate quality.
  • The Code must provide sufficient added value to the Directive and other applicable data protection legislation.
  • The Code should continue to be relevant following the transition to the General Data Protection Regulation (Regulation (EU) 2016/679, the “GDPR”).

The WP29 is quite critical of the draft Code, and identifies a number of ways that the draft fails to add value to existing data protection legislation. The WP29’s general comments are that:

  • The Code does not elaborate sufficiently on the relationship between the Directive and national legislation implementing the Directive in individual EU Member States.
  • While the Code’s stated aim is to facilitate data protection compliance and not to address other compliance issues, it should nonetheless take into account other legislation that bears on the primary objective of data protection compliance (e.g., the provisions on cookies in the ePrivacy Directive (Directive 2002/58/EC)).
  • The Code needs to be clearer on the roles of the parties involved in the processing of personal data (i.e., whether the app developer is a data controller, data processor or both).
  • The Code should be re-evaluated in light of the relevant provisions of the GDPR to ensure that the content of the Code is consistent with the definitions given in both the Directive and the GDPR.

Specific comments

The WP29 also sets out more specific observations on areas in which the Code requires improvement. In summary:

  • Governance and monitoring model: It was not clear whether the model detailed in the Code would be compliant with some of the new requirements of the GDPR. In addition, further information was needed on: (1) the composition of the Assembly and how membership was to be managed; (2) how the monitoring body would be accredited; and (3) the financial contributions required from different members (the WP29 was specifically concerned with ensuring that fees did not preclude wide participation).
  • Practical guidelines for data controllers: The Code should make clear that consent to personal data processing should fulfil all requirements of the GDPR and the Directive, and guidance in relation to obtaining consent to the processing of children’s data should be more thorough. At the same time, the Code should acknowledge that there are other conditions that render data processing fair and lawful, and refer explicitly to them. It should also identify safeguards to raise awareness of the possible risks associated with the use of mHealth apps.
  • Data protection principles: Whilst the “practical guidelines for data controllers” referred to the necessity of safeguards for data subjects, they did not mention that these safeguards should be “appropriate”, in line with data protection principles. Further, the Code should refer to all of the data protection principles, or explain why any omitted principles are not relevant.
  • Information, transparency and data subjects’ rights: The Code should require developers to make more information about the role of the data controller available to end users. It did not provide sufficient information on how data subjects could exercise their rights, or on how data controllers and data processors should meet their obligations. The Code should refer to the relevant provisions of the GDPR in relation to transfers of personal data to third countries. It should also set out the legal basis and requirements for processing data for marketing purposes, citing the relevant provisions of the GDPR.
  • Security: The Code should include more details and relevant examples on how app developers can integrate “privacy by design” and “privacy by default” into their development processes, as well as being attentive to legal restrictions relating to retention periods. Specific provisions in relation to data protection breaches should be included in line with the definitions of personal data contained in the Directive and the GDPR.

The draft will now need to be reconsidered by the drafting group to take these comments into account. The WP29 specifically states: “When revising the draft, please consider carefully what “added value” the code of conduct provides as a whole and, in particular, what specific examples, practical solutions or recommendations you could draw from discussions with stakeholders, ...” In the meantime, given the shortage of guidance in this area, developers may choose to follow the Code and the WP29’s recommendations in order to conform to best practice.

The U.S. Food and Drug Administration (FDA) issued a Warning Letter on April 12, 2017 requiring an explanation of how St. Jude Medical plans to correct and prevent the cybersecurity concerns identified in its Fortify, Unify and Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators, and in the Merlin@home monitor.

The Warning Letter follows a January 2017 FDA Safety Communication on St. Jude Medical’s implantable cardiac devices and the Merlin@home transmitter. The safety alert identified that such devices “contain configurable embedded computer systems that can be vulnerable to cybersecurity intrusions and exploits. As medical devices become increasingly interconnected via the Internet, hospital networks, other medical devices, and smartphones, there is an increased risk of exploitation of cybersecurity vulnerabilities, some of which could affect how a medical device operates.” FDA conducted an assessment of St. Jude Medical’s software patch for the Merlin@home Transmitter and determined that “the health benefits to patients from continued use of the device outweigh the cybersecurity risks.” Consequently, FDA’s safety alert provides recommendations to healthcare professionals, patients and caregivers to “reduce the risk of patient harm due to cybersecurity vulnerabilities.”

The following month, FDA conducted a 10-day inspection at St. Jude Medical’s Sylmar, CA facility and concluded that St. Jude Medical had not adequately addressed the cybersecurity concerns. Notably, FDA observed failures related to corrective and preventive actions (CAPA), control procedures, and design verification and validation.

CAPA

In one instance, FDA found that St. Jude Medical based its risk evaluation on “confirmed” defect cases without considering the potential for “unconfirmed” defect cases, and therefore underestimated the occurrence of a hazardous situation related to premature battery depletion. Moreover, FDA found that St. Jude Medical failed to follow its CAPA procedures when evaluating a third-party cybersecurity risk assessment report. Finally, FDA found that St. Jude Medical’s management and medical advisory boards did not receive information on the potential for “unconfirmed” defect cases and were incorrectly informed that no deaths had resulted from the premature battery depletion issue.

For all instances, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the response inadequate.

Control Procedures

On October 11, 2016, St. Jude Medical initiated a recall of Fortify, Unify and Assura (including Quadra) implantable cardioverter defibrillators and cardiac resynchronization therapy defibrillators due to premature battery depletion. Despite the recall, FDA noted that some devices were distributed and implanted. Again, FDA was unable to determine whether St. Jude Medical’s corrective actions were sufficient because the company failed to provide evidence of implementation.

Design Verification and Validation

In addition, FDA found that St. Jude Medical failed to ensure that “design verification shall confirm that the design output meets the design input requirements,” and failed to accurately incorporate the findings of a third-party assessment into updated cybersecurity risk assessments for high voltage and peripheral devices such as the Merlin@home monitor. Specifically, the Merlin@home monitor’s testing procedures did not require full verification that the network ports would not open for an unauthorized interface. Further, the cybersecurity risk assessments failed to accurately incorporate the third-party report’s findings into their security risk ratings. And even though the same reports identified the hardcoded universal unlock code as an exploitable hazard for the high voltage devices, St. Jude Medical failed to estimate and evaluate this risk.
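
On the port-verification point specifically, a design-verification procedure would typically include an automated check that the device accepts connections only on documented interfaces. The sketch below is purely illustrative: the address, port range and allowlist are our own placeholders, not details of St. Jude Medical’s devices or procedures.

```python
# Illustrative only: confirm a device under test accepts TCP connections
# solely on the ports documented in its design output. The host address
# and allowlist below are hypothetical placeholders.
import socket

DEVICE_HOST = "192.168.1.50"   # hypothetical test-bench address
ALLOWED_PORTS = {443}          # ports the design output documents

def open_ports(host: str, ports: range, timeout: float = 0.5) -> set:
    """Return the subset of `ports` that accept a TCP connection."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                found.add(port)
    return found

unexpected = open_ports(DEVICE_HOST, range(1, 1024)) - ALLOWED_PORTS
assert not unexpected, f"Undocumented open ports: {sorted(unexpected)}"
```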

As with the CAPA findings, FDA stated that while St. Jude Medical provided details on some corrective actions, it failed to provide evidence of implementation; FDA therefore deemed the responses inadequate. FDA has given St. Jude Medical 15 days to explain how the company plans to act on the premature battery depletion issue (which has been linked to injuries and one death), the improper focus on “confirmed” cases, and the distribution and implantation of recalled devices. FDA warns that St. Jude Medical could face additional regulatory action if the matters are not resolved in a timely manner.

The Warning Letter, together with the January 2017 Safety Communication and a December 2016 Guidance on Postmarket Management of Cybersecurity in Medical Devices (which we have previously summarized here and here), demonstrates FDA’s continued scrutiny of the cybersecurity of medical devices. FDA appears to be communicating the need for device manufacturers to incorporate cybersecurity checkpoints throughout a product’s lifecycle to prevent patient harm and potential regulatory action. Not a bad idea for an increasingly tech-savvy world.

The European Medicines Agency (“EMA”) recently set up a task force, along with the national competent authorities in the EEA, to analyze how medicines regulators in the EEA can use big data to better develop medicines for humans and animals. This follows a workshop in November last year to identify opportunities for big data in medicines development and regulation, and to address the challenges of exploiting such data. “Big data” in the healthcare sector is the sum of many parts, including the records of a multitude of patients, clinical trial data, adverse reaction reports, social media commentary and app records. Several projects have been set up across the EU to aggregate and analyze such data, as explored by the European Commission in its December 2016 Study on Big Data in Public Health, Telemedicine and Healthcare. The EMA has recognized that “the vast volume of data has the potential to contribute significantly to the way the benefits and risks of medicines are assessed over their entire lifecycle.”

So who will make up the new EMA task force and what are its objectives?

The task force comprises staff from several medicines regulatory agencies in the EEA and will be chaired by the Danish Medicines Agency. Its first actions will be carried out over the next 18 months and include:

  • Mapping sources and characteristics of big data.
  • Exploring the potential applicability and impact of big data on medicines regulation.
  • Developing recommendations on necessary changes to legislation, regulatory guidelines or data security provisions.
  • Creating a roadmap for the development of big data capabilities for the evaluation of applications for marketing authorizations or clinical trials in the national competent authorities.
  • Collaborating with other regulatory authorities and partners outside the EEA to consider their insights on big data initiatives.

News of the task force comes on the back of the update that the UK data protection regulator, the Information Commissioner’s Office (“ICO”), made early last month to its 2014 publication on big data, artificial intelligence, machine learning and data protection. The publication pulls out the distinctive considerations that the use of big data raises from a data protection perspective. Such considerations include whether the collection of personal data goes above and beyond what is needed for specific processing activities, whether processing activities are made clear to individuals, and how new types of data can be used. In light of the fast-approaching General Data Protection Regulation, as discussed in our previous post, the ICO includes practical guidance for organizations to process big data in a way that is compliant with the new rules. Healthcare and other organizations looking to process big data will need to ensure that they carry out suitable privacy impact assessments and implement a range of protective measures, such as auditable machine learning algorithms, anonymization and comprehensive privacy policies (one example is sketched below). Guidance on profiling is also likely to follow.
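
As a minimal sketch of one such protective measure (our own illustration, not code from the ICO’s guidance), pseudonymizing identifiers with a keyed hash means the mapping cannot be re-derived without the key, which can be stored separately from the data set or destroyed. The identifier below is invented:

```python
# Illustrative pseudonymization using a keyed hash (HMAC-SHA256).
# Unlike a bare hash, the pseudonym cannot be recomputed from the
# identifier alone: re-identification requires the secret key.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)  # keep separate from the pseudonymized data

def pseudonymize(patient_id: str) -> str:
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient": pseudonymize("patient-1234567"), "diagnosis": "..."}
```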

We’ll be keeping an eye on the work of the new task force, as well as any further practical guidance that comes from data protection regulatory agencies. It is clear that organizations will need to get the balance right between potentially hugely speeding up research and innovation by using big data, and adhering to the regulatory obligations that are attached.

2017 has started with a bang on the data protection front. There have been several developments these past few months, ranging from updates on the new EU General Data Protection Regulation (“GDPR”), which will apply from May 2018, to the establishment of a Swiss-US Privacy Shield. In relation to mHealth specifically, the Code of Conduct for mHealth is still with the Article 29 Working Party (the EU data protection representative body, or “WP29”); such codes of conduct enjoy an elevated status under the GDPR and are likely to play a more significant role going forwards. We provide a snapshot of the latest developments below.

Firstly, there have been several steps forward in relation to the GDPR. The UK data protection regulator, the “ICO”, has been consistent in its support for preparing for the GDPR in the UK following last year’s Brexit vote. In January, the ICO provided an update on the GDPR guidance it will publish for organizations in 2017, and the WP29 adopted an action plan and published guidance on three key areas of the GDPR. MP Matt Hancock (Minister of State for Digital and Culture, with responsibility for data protection) also suggested in December and February that a radical departure from the GDPR provisions in the UK after Brexit is unlikely, while being careful not to give away the intentions of the UK government.

On the electronic communications front, the European Commission published a draft E-Privacy Regulation in January, which is currently being assessed by the WP29, European Parliament and Council. The new Regulation is designed as an update to the E-Privacy Directive, and will sit alongside the GDPR to govern the protection of personal data in relation to the wide area of electronic communications, whether in the healthcare sector or otherwise (such as those via WhatsApp, Skype, Gmail and Facebook Messenger).

In relation to global personal data transfer mechanisms, in January the Federal Council of Switzerland announced that there would be a new framework for transferring personal data (including health data) from Switzerland to the US: the Swiss-US Privacy Shield. As with the EU-US Privacy Shield, the Swiss-US Privacy Shield has been agreed as a replacement for the Swiss-US Safe Harbor framework. The establishment of the new framework means that Switzerland will apply standards for transfers of personal data to the US similar to those applied in the EU. Organizations can sign up to the Swiss-US Privacy Shield with the US Department of Commerce from 12 April 2017. Organizations that have already self-certified to the EU-US Privacy Shield will be able to add a certification to the Swiss-US Privacy Shield on the Privacy Shield website from the same date.

These developments need to be taken into consideration by organizations that are creating and implementing digital health products, such as mHealth apps, which operate in a space that can bring up several regulatory questions. Further information can be found in our recent advisory.

Published in Privacy & Cybersecurity Law Report’s April 2017 issue.

In the closing days of last year, the FDA issued its final guidance on postmarket medical device cybersecurity. This guidance is a corollary to the previously issued final guidance on premarket cybersecurity issues, and the premarket and postmarket pieces should be read, and fit, together. In both cases, the FDA sets out a comprehensive, lifecycle approach to managing cyber risk. Under this guidance, the FDA is asking companies to operationalize a structured way to think through and act on these product, hardware, software, and network issues. Last year, we wrote about 5 things companies can do now to get ahead of the curve on the premarket guidance, and they still apply.

The final postmarket guidance follows much of the 2016 draft guidance, with a few important changes. We wrote a detailed piece on the 2016 draft guidance. The two big changes are: a shift in focus from the possible cyber impact on the product (what was called the “essential clinical performance” of the device) to the health impact on the patient if a vulnerability were exploited (what is now called “patient harm”); and a fleshing-out of the recommended vulnerability disclosure process and time frames. Focusing on the possible impact to the patient seems like a good change. Cyber risk is a function of threat, vulnerability and consequence, and with medical devices, the consequence surely revolves around the patient. It is the second change – around vulnerability disclosure, timing for disclosure, and required information sharing with an industry-wide Information Sharing and Analysis Organization (ISAO) – that will take real thought, work and finesse.

Under the final guidance, if there is an “Uncontrolled Risk” given the exploitability of the vulnerability and the severity of patient harm if exploited, that risk should be remediated “as quickly as possible.” As for notice to the FDA and customers, you must report these vulnerabilities to the FDA pursuant to part 806 (which requires manufacturers to report certain device corrections and removals), unless the manufacturer meets four specific requirements: (1) there are no known serious adverse events or deaths; (2) within 30 days of learning of the vulnerability, the manufacturer communicates with its customers and user community, describing at a minimum the vulnerability, an impact assessment, the efforts to address the risk of patient harm, and any compensating controls or strategies to apply, and commits to communicating the availability of a future fix; (3) within 60 days of learning of the vulnerability, the manufacturer fixes the vulnerability, validates the change, and distributes the fix such that the risk is reduced to an acceptable level; and (4) the manufacturer participates in the ISAO and provides the ISAO with any customer communications upon notification of its customers. If you meet these obligations and timelines, you do not have to report under part 806 – but if you do not, you are subject to the usual 806 reporting.
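
Read together, the four conditions reduce to simple decision logic. The sketch below is our own paraphrase with hypothetical field names; it is not FDA text, and each flag hides substantial regulatory judgment:

```python
# Our paraphrase of the part 806 exception; all field names are
# hypothetical and each flag hides substantive regulatory judgment.
from dataclasses import dataclass

@dataclass
class VulnerabilityResponse:
    serious_adverse_events_or_deaths: bool
    days_to_customer_notice: int   # from learning of the vulnerability
    days_to_validated_fix: int     # fix validated and distributed
    participates_in_isao: bool     # and shares customer communications

def exempt_from_part_806(r: VulnerabilityResponse) -> bool:
    """True only if all four conditions in the final guidance are met."""
    return (not r.serious_adverse_events_or_deaths
            and r.days_to_customer_notice <= 30
            and r.days_to_validated_fix <= 60
            and r.participates_in_isao)
```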

So, to avoid part 806 reporting, you want to meet the four conditions. But they are more complex than one might think at first glance. As a general matter, information technology companies do not like to notify users of a vulnerability until there is a fix: a known vulnerability without a fix can easily be (and often is) exploited by adversaries, leaving customers less secure. Therefore, companies generally announce vulnerabilities and fixes together, so that customers can protect themselves before the bad guys can exploit the flaw. Only on rare occasions, such as when there is a known active exploit, would a company notify customers before it has a fix. The FDA and the medical device industry seem to be searching for the appropriate approach for medical devices, where there is potential for non-trivial patient harm, and an existing regulatory structure and overall public health mission. The issue of vulnerability disclosure is complex and subject to much debate (the U.S. Commerce Department just published the results of a year-long study, concluding that there is still much work to be done to get it right). Similarly, the sharing of cyber threat and vulnerability information with others in industry, and with the government, is still an area of much discussion. A year ago, Congress passed an information-sharing bill to help reduce potential barriers to information sharing, including provisions for some amount of liability protection for sharing cyber threat and vulnerability information with others. Today, companies are still finding their way around the business and legal issues, even under the new legislation.

Therefore, to meet the 30- and 60-day notice requirements and the information sharing requirement, medical device companies will have to craft their notices carefully: specific enough to meet the requirements of the final guidance, but not so detailed that adversaries are alerted to the possibility of a vulnerability, can figure out what function, method, process or technology is implicated, and can exploit it before a fix is developed, shared, and implemented. The same considerations hold for sharing vulnerability and notice information with the ISAO, whose members will include competitors and whose information could (depending on the ISAO rules and information classification decisions) be further shared with government and security industry partners. Net-net, a clear understanding of the technical vulnerability, the possible consequences, the ability to fix, and the line between useful notification and usefulness for exploitation is required. It may also be true that no fix can be had in 60 days; if many reported vulnerabilities are backed up in the queue, a company may fix the priority tickets first, and the lowest-priority items may take longer than 60 days to address as a matter of bandwidth and expertise. Consequently, over time, companies may be faced with decisions about whether to try to meet the 806 exception conditions, or to file 806 notices with the FDA and deal with the potential implications. None of this is to say that the benefits of the 806 exception are not worth it, or are trivial; it just means that your approach has to be clueful and strategic.

One more issue, of course, continues to be quite important – the global rules must be rationalized. Medical device companies build once and sell globally, and the security, integrity, vulnerability, and disclosure rules and best practices have to work globally. As these new guidelines are rolled out, significant education globally will be critical.

Most likely, like most things in security, this ‘final guidance’ will be a work in progress, as companies, the FDA and regulators globally begin to deal with specific use cases that push the boundaries of what counts as a “Controlled Risk” or an “Uncontrolled Risk,” and of what 30- and 60-day notifications, fixes, and ISAO information are required, helpful, and not helpful. As we always say – ‘security is a journey, not a destination’ – and so too will the postmarket cyber guidance be.


On January 23, 2017, the FTC released a long-awaited report regarding the increased incidence of cross-device tracking.  The report, which follows a November 2015 FTC workshop on cross-device tracking, sheds light on the privacy concerns raised by the practice and alerts companies engaged in cross-device tracking to certain best practices for avoiding potential violations of applicable laws and regulations.

Background

Cross-device tracking is the practice of using deterministic and probabilistic techniques to associate multiple devices with the same consumer.  Deterministic techniques are used to track consumer behavior based on the affirmative use of a common identifying characteristic, such as log-in credentials.  For example, when a consumer enters his or her log-in credentials to access an online platform on a number of devices, the consumer’s behavior on one device can be used to inform targeted advertising through the same platform on the consumer’s other devices.

By contrast, probabilistic techniques are used to draw inferences about consumer behavior. As noted in the FTC report, a common probabilistic technique is IP address matching, through which devices using the same IP address at the same time—e.g., a smart television, mobile device and tablet on the same local network—are presumed to belong to the same consumer.  Because probabilistic tracking does not involve affirmative consumer action, and may not involve any direct relationship between the consumer and the company engaged in the tracking activity, the practice is less transparent for consumers than deterministic tracking.
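
To make the two techniques concrete, the toy sketch below (our own illustration; the events, field names and time window are invented, not drawn from the report) joins devices deterministically on a shared login and probabilistically on a shared IP address observed in the same hour:

```python
# Toy sketch of cross-device matching; events and fields are invented.
from collections import defaultdict

events = [
    {"device": "tv-1",     "login": None,    "ip": "203.0.113.7",  "hour": 20},
    {"device": "phone-1",  "login": "alice", "ip": "203.0.113.7",  "hour": 20},
    {"device": "tablet-1", "login": "alice", "ip": "198.51.100.2", "hour": 9},
]

# Deterministic: devices sharing a login belong to the same consumer.
by_login = defaultdict(set)
for e in events:
    if e["login"]:
        by_login[e["login"]].add(e["device"])

# Probabilistic: devices on the same IP in the same hour are *presumed*
# to belong to one consumer (an inference, not a certainty).
by_ip_hour = defaultdict(set)
for e in events:
    by_ip_hour[(e["ip"], e["hour"])].add(e["device"])

print(dict(by_login))   # {'alice': {'phone-1', 'tablet-1'}}
print({k: v for k, v in by_ip_hour.items() if len(v) > 1})
# {('203.0.113.7', 20): {'tv-1', 'phone-1'}}
```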

The FTC report is based, in part, on a prior FTC staff study of cross-device tracking trends, which involved testing 100 popular websites on two separate devices. The study found, among other things, that 96 of the 100 websites reviewed collected log-in or other authentication credentials from consumers, that the domains of 87 companies known to use cross-device tracking technologies were embedded, directly or indirectly, in those websites, and that 861 third parties were observed connecting to both devices.

Findings and Recommendations of the FTC Report

The FTC report acknowledges that cross-device tracking can produce benefits for both businesses and consumers.  These benefits include enhanced fraud detection and account security (e.g., by requiring additional authentication when a new device is used to access a consumer’s account), an improved consumer experience on online platforms, more targeted, less saturated advertising, and a more equal competitive arena for companies that do not have access to large amounts of deterministic tracking data.  Notwithstanding these benefits, however, the FTC report expresses serious concern about risks to consumer privacy associated with such activities.  For example, the FTC found that:

  • Cross-device tracking is employed by a growing number of companies (including both consumer-facing and third-party tracking and analytics companies);
  • Very few companies using such techniques have disclosed both the fact and scope of their tracking activities;
  • Many consumers may be unaware that their activities on certain platforms are being tracked, while some consumers may have knowledge of companies’ tracking practices, but little to no ability to limit or opt out of tracking and data collection;
  • Data collected through cross-device tracking may include highly private personal information which, if exposed through a security breach, could result in considerable consumer harm and could reduce the efficacy of knowledge-based authentication (e.g., answering pre-selected security questions); and
  • Self-regulatory initiatives have improved transparency and consumer choice in the cross-device tracking arena, but many existing practices are not fully disclosed to consumers and may implicate the FTC Act.

Based on these findings, the FTC report makes a number of recommendations to companies engaged in cross-device tracking, including that:

  • Consumer-facing companies should disclose to consumers, fully and truthfully, their use of cross-device tracking practices and the extent of those practices, including the nature of any data collected;
  • Third-party tracking companies should provide their tracking disclosures both to consumers and to the first-party companies with whom they transact;
  • Companies should consider providing consumers with clear and conspicuous opt-out mechanisms or other means to limit how their activities are tracked;
  • Companies should refrain from tracking sensitive information, such as financial, health, or children’s information or precise geolocation data, without first obtaining the express consent of the consumers to whom the information belongs; and
  • Companies should track and collect only the information that is necessary for their business purposes, to reduce the risk of a security breach resulting in significant consumer harm.

Considerations for Companies Engaged in or Considering Undertaking Cross-Device Tracking

Companies engaged in or considering undertaking cross-device tracking—whether consumer-facing or without a direct consumer relationship—may wish to review their tracking and information-collection activities in light of the FTC report. In particular, such entities may wish to examine their practices involving information that is viewed as “sensitive” or that can be reasonably linked to a consumer or his or her device(s), even if the information is hashed or otherwise protected. Companies should also consider reviewing their privacy policies and relevant consumer disclosures to ensure that any cross-device tracking activities, as well as any related opt-out procedures, are described accurately and conspicuously therein.  As the FTC report highlights, consumer-facing companies, such as application developers and website operators, can be exposed to liability for allowing third parties to install tracking technology in their applications and platforms without providing notice to consumers (see our previous Seller Beware post for further reading on prior FTC action in this area).  Similarly, third-party tracking companies may be held liable for misrepresenting the nature or the extent of their tracking techniques to the consumer-facing companies on whose platforms those techniques are deployed.  These reminders of potential liability should not be overlooked by consumer-facing and third-party tracking companies. It can be helpful to review existing and future agreements, as well as all representations made to consumers, to limit the potential for claims of misrepresentation regarding tracking practices, policies and procedures.
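
The caution about hashed data is worth spelling out: a bare cryptographic hash of a stable identifier is itself a stable identifier, so hashed records from different data sets still link on the same value. A minimal illustration (all values invented):

```python
# A plain SHA-256 hash is deterministic, so hashed identifiers from
# different data sets still join on the same value. Inputs are invented.
import hashlib

def h(identifier: str) -> str:
    return hashlib.sha256(identifier.encode()).hexdigest()

ad_network_record  = {"id": h("device-abc-123"), "segment": "fitness"}
data_broker_record = {"id": h("device-abc-123"), "zip": "10001"}

# The records link even though neither holds the raw device ID.
assert ad_network_record["id"] == data_broker_record["id"]
```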

On November 28, 2016, the US Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) issued a rare alert warning the public of an email scam masquerading as an official OCR audit communication. The alert addresses an emerging “phishing” scheme that targets employees of HIPAA covered entities and their business associates in an apparent attempt to market non-governmental cybersecurity services. Below we offer some brief guidance about the scheme and tips for what you can do to protect your company.

What the scheme is: “Phishing” emails are designed to steal money, information, or both from their targets. The emails usually appear to come from legitimate enterprises—in this case, OCR—but, when accessed, hyperlinks in the emails direct users to spoofed or fake websites designed to get users to divulge private information. Although the underlying goal of the scammers here is unclear, as a general matter, criminals who obtain such private information often attempt to commit identity theft or to sell obtained data to interested parties on the internet’s “black-market.”

What to watch out for: An email that appears to come from OCR, under cloned OCR letterhead, that prompts recipients to click a link regarding possible inclusion in the HIPAA Privacy, Security, and Breach Rules Audit Program. The hyperlink directs users to a non-governmental website that markets, ironically, cybersecurity services. OCR’s alert emphasizes that the cybersecurity web page and services are in no way affiliated with HHS or OCR.
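
The technical tell in such emails is a mismatch between the purported sender and where the hyperlinks actually resolve. The toy check below illustrates the idea only; the sample message, and the assumption that legitimate links would sit under hhs.gov, are ours, and real mail filtering is far more involved:

```python
# Toy check: flag messages whose links point outside the domains the
# purported sender would use. Message and domain list are invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

LEGIT_SUFFIXES = (".hhs.gov", "hhs.gov")  # assumption for illustration

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

body = '<p>Click <a href="http://hhs-audit-portal.example.com/login">here</a></p>'
parser = LinkCollector()
parser.feed(body)

for link in parser.links:
    host = urlparse(link).hostname or ""
    if not host.endswith(LEGIT_SUFFIXES):
        print("Suspicious link host:", host)  # hhs-audit-portal.example.com
```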

What to do now: Inform your employees to be on the lookout for the OCR Phishing Scam email, and remind them to remain wary of these types of emails generally. Also remind employees about any policies in place regarding sharing sensitive information. Set up a point of contact for employees to consult if they are in doubt about what to do, or what information they can share.

What to do if you receive one of these emails: Do not respond to the email. Do not download any files or images in the email. Do not click on any hyperlinks in the email. Add the sender to a blocked senders list and delete the email. If you have any questions as to whether an email is an official agency communication regarding a HIPAA audit, contact OCR at OSOCRAudit@hhs.gov.

What to do if you have responded to one of these emails or accessed hyperlinked material: If an employee has already responded to the OCR Phishing Email or accessed a hyperlink in one of these emails, we suggest you contact experienced legal counsel to aid in determining what information or systems may have been compromised, and what obligations, if any, you may have to notify impacted individuals and/or state or federal entities under various laws.

On August 25, 2016, investment firm Muddy Waters Capital issued a report claiming that St. Jude Medical’s implantable cardiac devices are susceptible to cybersecurity attacks, allegedly putting more than 260,000 individuals in the U.S. at risk.  St. Jude strongly rejected the report and disputed the alleged security risks of its devices.

The report claims that MedSec Holdings Ltd., a cybersecurity firm, was able to demonstrate two types of cyberattacks on St. Jude’s implantable cardiac devices. The first type of attack — a “crash” attack — enables a hacker to remotely disable cardiac devices and, in some cases, cause the cardiac device to pace at a dangerous rate.  The second type of attack — a battery drain attack — remotely runs cardiac device batteries down to 3% of capacity within a 24-hour period.  However, the report concludes that patients’ personal health information appears to be safe, as patient data is reportedly encrypted.

The report attributes the cybersecurity risks to security deficiencies in accessories to the implantable devices, including the devices located in physician offices that display data from the implanted devices, the network that manages and transmits data, and the at-home device that communicates with the implanted device via radio frequency within a 50-foot range.  Some of the alleged deficiencies require attackers to have access to device accessory hardware or to be within 50 feet of the target(s).


Last week, the U.S. Food and Drug Administration (FDA) released a draft guidance entitled “Dissemination of Patient-Specific Information from Devices by Device Manufacturers,” which is intended to “clarify that manufacturers may share patient-specific information recorded, stored, processed, retrieved, and/or derived from a medical device with the patient who is either treated or diagnosed with that specific device.”  Such sharing, the FDA believes, “will empower patients to be more engaged with their healthcare providers in making sound medical decisions.”

The draft guidance is timely. Individuals are increasingly using wearable mobile technologies (e.g., trackers, fitness watches, etc.), as well as mobile medical applications and related health software.  Many wearable technology manufacturers are facing increased scrutiny and litigation about the reliability of their products’ assessments (e.g., sleep or exercise trackers).  And there is considerable concern about the security of patient-specific information on such devices.

The draft guidance defines “patient-specific information” to mean “any information unique to an individual patient or unique to that patient’s treatment or diagnosis that, consistent with the intended use of a medical device, may be recorded, stored, processed, retrieved, and/or derived from that medical device.” Such information may include, but is not limited to:

  • recorded patient data;
  • device usage/output statistics;
  • healthcare provider inputs;
  • incidence of alarms; and/or
  • records of device malfunctions or failures.
