After three government agencies collectively created an online tool to help developers navigate federal regulations impacting mobile health apps, McDermott partner Jennifer Geetter was interviewed by FierceMobileHealthcare on the need for mobile health development tools.
The search by consumers, payers and providers for more efficient, effective and convenient care delivery models has led to an explosion of technological innovation in the health care sector. This explosion has supported the increased use of telemedicine by providers to reach patients who were previously out of reach, and to provide more timely and cost-effective care.
With the use of telemedicine technologies comes a responsibility on the part of providers to educate and inform patients about the benefits and, more importantly, the risks associated with receiving care via telemedicine. As in any other care setting, fulfilling this responsibility serves a dual purpose: it gives consumers the information they need to make an informed decision about their care, and it mitigates the provider’s potential liability exposure from medical malpractice claims.
This week, the Federal Trade Commission (FTC or Commission) released an interactive tool (entitled the “Mobile Health Apps Interactive Tool”) that is intended to help developers identify the federal law(s) that apply to apps that collect, create and share consumer information, including health information. The interactive series of questions and answers augments and cross-references existing guidance from the US Department of Health and Human Services (HHS) that helps individuals and entities—including app developers—understand when the Health Insurance Portability and Accountability Act (HIPAA) and its rules may apply. The tool is also intended to help developers determine whether their app is subject to regulation as a medical device by the US Food and Drug Administration (FDA), or subject to certain requirements under the Federal Trade Commission Act (FTC Act) or the FTC’s Health Breach Notification Rule. The Commission developed the tool in conjunction with HHS, FDA and the Office of the National Coordinator for Health Information Technology (ONC).
Based on the user’s responses to ten questions, the tool helps developers determine whether HIPAA, the Federal Food, Drug, and Cosmetic Act (FDCA), the FTC Act and/or the FTC’s Health Breach Notification Rule apply to their app(s). Where appropriate based on the developer’s response to a particular question, the tool provides a short synopsis of the potentially applicable law and links to additional information from the appropriate federal regulator.
The first four questions cover a developer’s potential obligations under HIPAA. The first question explores whether an app creates, receives, maintains or transmits individually identifiable health information, such as an IP address. Developers may use the tool’s second, third and fourth questions to assess whether they are a covered entity or a business associate under HIPAA. The tool’s fifth, sixth and seventh questions help developers establish whether their app may be a medical device that the FDA has chosen to regulate. The final three questions are intended to help users assess the extent to which the developer is subject to regulation by the FTC.
Although the tool provides helpful, straightforward guidance, users will likely need a working knowledge of relevant regulatory principles to successfully use the tool. For example, the tool asks the user to identify whether the app is “intended for use” for diagnosis, cure, mitigation, treatment or disease prevention, but does not provide any information regarding the types of evidence that the FDA would consider to identify a product’s intended use or the intended use of a mobile app (e.g., statements made by the developer in advertising or oral or written statements). In addition, how specifically an app will be offered to individuals to be used in coordination with their physicians can be dispositive of the HIPAA analysis in ways that are not necessarily intuitive.
The tool provides a starting point for developers to raise their awareness of potential compliance obligations. It also highlights the need to further explore the three federal laws, implementing rules and their exceptions. Developers must be aware of the tool’s limitations—it does not address state laws and is not intended to provide [...]
On January 15, 2016, the U.S. Food and Drug Administration (FDA) published a draft guidance entitled Postmarket Management of Cybersecurity in Medical Devices (Draft Guidance), which outlines FDA’s recommendations for managing postmarket cybersecurity vulnerabilities in medical devices that contain software or programmable logic, as well as software that is itself a medical device, including networked medical devices. The Draft Guidance represents FDA’s latest attempt to outline principles intended to enhance medical device cybersecurity throughout the product lifecycle.
On 11 May 2015, the UK Information Commissioner’s Office (ICO), the French data protection authority (CNIL) and the Office of the Privacy Commissioner of Canada (OPCC) announced their participation in a new Global Privacy Enforcement Network (GPEN) privacy sweep to examine the data privacy practices of websites and apps aimed at or popular among children. This closely follows the results of GPEN’s latest sweep on mobile applications (apps), which suggested that a high proportion of apps collected significant amounts of personal information but did not sufficiently explain how consumers’ personal information would be collected and used. We originally reported the sweep on mobile apps back in September 2014.
According to the CNIL and ICO, the purpose of this sweep is to determine a global picture of the privacy practices of websites and apps aimed at or frequently used by children. The sweep seeks to instigate recommendations or formal sanctions where non-compliance is identified and, more broadly, to provide valuable privacy education to the public and parents as well as promoting best privacy practice in the online space.
Background
GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development. GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to globally strengthen personal privacy. GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.
According to the ICO, GPEN has identified a growing global trend for websites and apps targeted at (or used by) children. This represents an area that requires special attention and protection. From 12 to 15 May 2015, GPEN’s “sweepers”—28 volunteering data protection authorities from across the globe, including the ICO, CNIL and the OPCC—will each review 50 websites and apps popular among children (such as online gaming sites, social networks, and sites offering educational services or tutoring). In particular, the sweepers will seek to determine, inter alia:
The types of information being collected from children;
The ways in which privacy information is explained, including whether it is adapted to a younger audience (e.g., through the use of easy to understand language, large print, audio and animations, etc.);
Whether protective controls are implemented to limit the collection of children’s personal information, such as requiring parental permission prior to use of the relevant services or collection of personal information; and
The ease with which one can request that personal information submitted by children be deleted.
Comment
We will have to wait some time for in-depth analysis of the sweep, as the results are not expected to be published until Q3 of this year. As with previous sweeps, following publication of the results we can expect data protection authorities to issue new guidance, write to those organisations identified as needing to improve, and take more formal action where appropriate.
As part of its four-part Digital Health webinar series, on April 14, 2015, McDermott Will & Emery presented “Telehealth: Implementation Challenges in an Evolving Dynamic.”
Telehealth (also known as telemedicine) generally refers to the use of technology to support the remote delivery of health care. For example:
A health care provider in one place is connected to a patient in another place by video conference
A patient uses a mobile device or wearable that enables a doctor to monitor his or her vital signs and symptoms
A specialist is able to rapidly share information with a geographically remote provider treating a patient
While the benefits of telehealth are clear (for example, making health care available to those in underserved areas and to patients who cannot regularly visit their providers but need ongoing monitoring), implementing telehealth requires providers and patients, as well as payers, to adapt to a dynamic new framework for health care delivery, data sharing and reimbursement. The webinar explored these areas and more.
Two significant decisions under the Video Privacy Protection Act (VPPA) in recent weeks have provided new defenses to companies alleged to have run afoul of the statute. Bringing the long-running litigation against Hulu to a close (at least pending appeal), the court in In re: Hulu Privacy Litigation granted summary judgment in favor of Hulu, holding that the plaintiffs could not prove that Hulu knowingly violated the VPPA. A week later in a more recently filed case, Austin-Spearman v. AMC Network Entertainment, LLC, the court dismissed the complaint on the basis that the plaintiff was not a “consumer” protected by the VPPA. Both rulings provide comfort to online content providers, while also raising new questions as to the scope of liability under the VPPA.
In re: Hulu Privacy Litigation
In a decision with wide-ranging implications, the Hulu court granted summary judgment against the plaintiffs, holding that they had not shown that Hulu knowingly shared their video selections in violation of the VPPA. The plaintiffs’ allegations were based on Hulu’s integration of a Facebook “Like” button into its website. Specifically, the plaintiffs alleged that when the “Like” button loaded on a user’s browser, Hulu would automatically send Facebook a “c_user” cookie containing the user’s Facebook user ID. At the same time, Hulu would also send Facebook a “watch page” that identified the video requested by the user. The plaintiffs argued that Hulu’s transmission of the c_user cookie and the watch page allowed Facebook to identify both the Hulu user and that user’s video selection and therefore violated the VPPA.
The plaintiffs’ case foundered, however, on their inability to demonstrate that Hulu knew that Facebook’s linking of those two pieces of information was a possibility. According to the court, “there is no evidence that Hulu knew that Facebook might combine a Facebook user’s identity (contained in the c_user cookie) with the watch page address to yield ‘personally identifiable information’ under the VPPA.” Without showing that Hulu had knowingly disclosed a connection between the user’s identity and the user’s video selection, there could be no VPPA liability.
The court’s decision, if upheld on appeal, is likely to provide a significant defense to online content providers sued under the VPPA. Under the decision, plaintiffs must now be able to show not only that the defendant company knew that the identity and video selections of the user were disclosed to a third party, but also that the company knew the information was disclosed in a manner that would allow the third party to combine those pieces of information to determine which user had watched which content. While Hulu prevailed only at the summary judgment stage after four years of litigation, other companies could likely make use of this same rationale at the pleadings stage, insisting that plaintiffs set out a plausible case in their complaint that the defendant had the requisite level of knowledge.
Austin-Spearman v. AMC Network Entertainment
The AMC decision turned on the VPPA’s definition of the term “consumer” and illustrated how that seemingly [...]
On April 1, 2015, the Office of the National Coordinator for Health Information Technology (ONC), which assists with the coordination of federal policy on data sharing objectives and standards, issued its Shared Nationwide Interoperability Roadmap and requested comments. The Roadmap seeks to lay out a framework for developing and implementing interoperable health information systems that will allow for the freer flow of health-related data by and among providers and patients. The use of technology to capture and understand health-related information, and the strategic sharing of that information among health industry stakeholders, are widely recognized as critical to supporting patient engagement, improving quality outcomes and lowering health care costs.
On April 3, 2015, the Federal Trade Commission issued coordinated comments from its Office of Policy Planning, Bureau of Competition, Bureau of Consumer Protection and Bureau of Economics. The FTC has a broad, dual mission to protect consumers and promote competition, in part, by preventing business practices that are anticompetitive, deceptive or unfair to consumers. This includes business practices that relate to consumer privacy and data security. Notably, the FTC’s comments on the Roadmap draw from both its pro-competitive experience and its privacy and security protection perspective, and therefore offer insights into the FTC’s assessment of interoperability from a variety of consumer protection vantage points.
The FTC agreed that ONC’s Roadmap has the potential to benefit both patients and providers by “facilitating innovation and fostering competition in health IT and health care services markets” – lowering health care costs, improving population health management and empowering consumers through easier access to their personal information. The concepts advanced in the Roadmap, however, if not carefully implemented, can also have a negative effect on competition for health care technology services. The FTC comments are intended to guide ONC’s implementation with respect to: (1) creating a business and regulatory environment that encourages interoperability, (2) shared governance mechanisms that enable interoperability, and (3) advancing technical standards.
Taking each of these aspects in turn: creating a business and regulatory environment that encourages interoperability is important because, left unattended, the marketplace may resist interoperability. For example, health care providers may resist interoperability because it would make switching providers easier, and IT vendors may see interoperability as a threat to customer allegiance. The FTC suggests that the federal government, as a major payer, work to align economic incentives to create greater demand among providers for interoperability.
With respect to shared governance mechanisms, the FTC notes that coordinated efforts among competitors may have the effect of suppressing competition. The FTC identifies several examples of anticompetitive conduct in standard setting efforts for ONC’s consideration as it considers how to implement the Roadmap.
Finally, in advancing core technical standards, the FTC advised ONC to consider how standardization could affect competition by (1) limiting competition between technologies, (2) facilitating customer lock-in, (3) reducing competition between standards, and (4) impacting the method for selecting standards.
As part of its mission to protect consumers, the FTC focuses its privacy and security [...]
On February 27, 2015, the Obama White House released an “Administration Discussion Draft” of its Consumer Privacy Bill of Rights Act of 2015 (Proposed Consumer Privacy Act).
As described during President Obama’s January 12 visit to the Federal Trade Commission (FTC), the Proposed Consumer Privacy Act identifies seven “basic principles to both protect personal privacy and ensure that industry can keep innovating.” These seven principles are:
Transparency (§101): Transparency is a principle frequently cited in guidance from the FTC, as well as in self-regulatory frameworks, such as the Digital Advertising Alliance’s cross-industry code for interest-based advertising. The Proposed Consumer Privacy Act describes transparency as “concise and easily understandable language, accurate, clear, timely, and conspicuous notice about privacy and security practices.” The notice required from an entity subject to the Proposed Consumer Privacy Act (defined as a “covered entity” (CE)) must describe the entity’s collection, use, disclosure, retention, destruction and security practices.
Individual Control (§102): The Individual Control principle means offering consumers a “reasonable means to control the processing (i.e., taking any action regarding) personal data about them in proportion to the privacy risk to the individual and consistent with context.” An individual must have a way either to withdraw consent related to his or her personal data that is “reasonably comparable” to the means by which consent was initially granted, or to request that the CE “de-identify” (as defined in the Proposed Consumer Privacy Act) his or her personal data.
Respect for Context (§103): Under the Respect for Context principle, a CE must process personal data reasonably “in light of context.” If the processing is not reasonable, the CE must undertake a “privacy risk analysis” to identify and take reasonable steps to mitigate privacy-related risk, including “heightened transparency and individual control,” such as just-in-time notices. Reasonableness is presumed when a CE’s personal data processing “fulfills an individual’s request.”
Focused Collection and Responsible Use (§104): The Focused Collection and Responsible Use principle requires that a CE limit its collection, retention and use of personal data to a “manner that is reasonable in light of context.” The CE also must “delete, destroy, or de-identify” personal data within a “reasonable time” after the original purpose for its collection, retention, or use has been fulfilled.
Security (§105): Under the Security principle, a CE must: identify internal and external “risks to privacy and security” of personal data; implement and maintain safeguards “reasonably designed” to secure personal data; regularly assess the efficacy of those safeguards; and adjust the safeguards to reflect material changes to business practices or “any other circumstances that create a material impact on the privacy or security” of personal data under the CE’s control. The [...]
As we reported in May 2014, the Federal Trade Commission (FTC) convened stakeholders to explore whether health-related information collected from and about consumers — known as consumer-generated health information (CHI) — through use of the internet and increasingly popular lifestyle and fitness mobile apps is more sensitive and in need of more privacy-sensitive treatment than other consumer-generated data.
One of the key questions raised during the FTC’s CHI seminar is: “What is consumer health information?” Information gathered during traditional medical encounters is clearly health-related. Information gathered from mobile apps designed as sophisticated diagnostic tools is also clearly health-related — and may even be “Protected Health Information,” as defined and regulated by the Health Insurance Portability and Accountability Act (HIPAA), depending on the interplay between the app and the health care provider or payor community. But other information, such as diet and exercise data, may be viewed by some as wellness or consumer preference data (for example, the types of foods purchased). Still other information (e.g., shopping habits) may not look like health information but, when aggregated with other information generated by and collected from consumers, may become health-related. Information, therefore, may be “health information,” and may be more sensitive as such, depending on (i) the individual from whom it is collected; (ii) the context in which it is initially collected; (iii) the other information with which it is combined; (iv) the purpose for which it was initially collected; and (v) the downstream uses of the information.
Notably, the FTC is not the only regulatory body struggling with how to define CHI. On February 5, 2015, the European Union’s Article 29 Working Party (an EU representative body tasked with advising EU Member States on data protection) published a letter in response to a request from the European Commission to clarify the definitional scope of “data concerning health in relation to lifestyle and wellbeing apps.”
The EU’s efforts to define CHI underscore how difficult this category of information is to delineate. The EU and US data privacy and security regimes differ fundamentally: the EU regime broadly protects personally identifiable information, while the US does not currently provide universal protections for such information. Instead, the US approach varies by jurisdiction and type of information, and does not uniformly regulate the mobile app industry or the CHI captured by such apps. These differences make the EU’s struggle to define the precise scope of “lifestyle and wellbeing” data (CHI) and to develop best practices going forward all the more striking because, even absent such a definition, the EU privacy regime would offer protections.
The Article 29 Working Party letter acknowledges the European Commission’s work to date, including the European Commission’s “Green Paper on Mobile Health,” which emphasized the need for strong privacy and security protections, transparency – particularly with respect to how CHI interoperates with big data – and the need for specific legislation on CHI-related apps or regulatory guidance that will promote “the safety and performance of lifestyle and wellbeing apps.” But, [...]