Consumer Protection

Developing and Implementing an Effective Telemedicine Informed Consent Form

The search by consumers, payers and providers for more efficient, effective and convenient care delivery models has led to an explosion of technological innovation in the health care sector. This explosion has supported the increased use of telemedicine by providers to reach patients who were previously out of reach, and to provide more timely and cost-effective care.

With the use of telemedicine technologies comes a responsibility on the part of providers to educate and inform patients about the benefits and, more importantly, the risks associated with receiving care via telemedicine. As in any other care setting, fulfilling this responsibility serves a dual purpose: it gives consumers the information they need to make an informed decision about their care, and it mitigates the provider’s potential liability exposure from medical malpractice claims.





Government Issues New Tool to Help Mobile App Developers Identify Applicable Federal Laws

This week, the Federal Trade Commission (FTC or Commission) released an interactive tool (entitled the “Mobile Health Apps Interactive Tool”) that is intended to help developers identify the federal law(s) that apply to apps that collect, create and share consumer information, including health information. The interactive series of questions and answers augments and cross-references existing guidance from the US Department of Health and Human Services (HHS) that helps individuals and entities—including app developers—understand when the Health Insurance Portability and Accountability Act (HIPAA) and its rules may apply. The tool is also intended to help developers determine whether their app is subject to regulation as a medical device by the US Food and Drug Administration (FDA), or subject to certain requirements under the Federal Trade Commission Act (FTC Act) or the FTC’s Health Breach Notification Rule. The Commission developed the tool in conjunction with HHS, FDA and the Office of the National Coordinator for Health Information Technology (ONC).

Based on the user’s response to ten questions, the tool helps developers determine if HIPAA, the Federal Food, Drug, and Cosmetic Act (FDCA), FTC Act and/or the FTC’s Health Breach Notification Rule apply to their app(s). Where appropriate based on the developer’s response to a particular question, the tool provides a short synopsis of the potentially applicable law and links to additional information from the appropriate federal government regulator.

The first four questions cover a developer’s potential obligations under HIPAA. The first question explores whether an app creates, receives, maintains or transmits individually identifiable health information, such as an IP address. Developers may use the tool’s second, third and fourth questions to assess whether they are a covered entity or a business associate under HIPAA. The tool’s fifth, sixth and seventh questions help developers establish whether their app may be a medical device that the FDA has chosen to regulate.  The final three questions are intended to help users assess the extent to which the developer is subject to regulation by the FTC.
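The question flow described above can be sketched, purely for illustration, as a small decision helper. This is not the FTC’s tool or its exact logic; the question groupings and the `applicable_laws` function below are hypothetical simplifications of the summary in this post.

```python
# Illustrative sketch only: a toy decision helper paraphrasing the tool's
# three question groups. The real FTC tool is an interactive web
# questionnaire with ten questions and more nuanced branching.

def applicable_laws(handles_identifiable_health_info: bool,
                    is_covered_entity_or_business_associate: bool,
                    is_fda_regulated_device: bool,
                    subject_to_ftc_requirements: bool) -> set:
    """Map yes/no answers to the federal regimes that may apply."""
    laws = set()
    # Questions 1-4: HIPAA is potentially implicated only if the app handles
    # individually identifiable health information AND the developer is a
    # covered entity or business associate.
    if handles_identifiable_health_info and is_covered_entity_or_business_associate:
        laws.add("HIPAA")
    # Questions 5-7: medical-device status under the FDCA.
    if is_fda_regulated_device:
        laws.add("FDCA")
    # Questions 8-10: FTC Act and/or the Health Breach Notification Rule.
    if subject_to_ftc_requirements:
        laws.add("FTC Act / Health Breach Notification Rule")
    return laws
```

As the post notes, answering these questions correctly is the hard part; the mapping itself is simple once the regulatory classifications are settled.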

Although the tool provides helpful, straightforward guidance, users will likely need a working knowledge of relevant regulatory principles to successfully use the tool.  For example, the tool asks the user to identify whether the app is “intended for use” for diagnosis, cure, mitigation, treatment or disease prevention, but does not provide any information regarding the types of evidence that the FDA would consider to identify a product’s intended use or the intended use of a mobile app (e.g., statements made by the developer in advertising or oral or written statements). In addition, how specifically an app will be offered to individuals to be used in coordination with their physicians can be dispositive of the HIPAA analysis in ways that are not necessarily intuitive.

The tool provides a starting point for developers to raise their awareness of potential compliance obligations. It also highlights the need to further explore the three federal laws, implementing rules and their exceptions. Developers must be aware of the tool’s limitations—it does not address state laws and is not intended to provide [...]


Farewell ‘Safe Harbor,’ Hello ‘Privacy Shield’: Europe and U.S. Agree on New Rules for Transatlantic Data Transfer

After intense negotiations, and after the official deadline had passed on Sunday, 31 January 2016, the United States and the European Union have finally agreed on a new set of rules—the “EU-U.S. Privacy Shield”—for data transfers across the Atlantic. The Privacy Shield replaces the old Safe Harbor agreement, which was struck down by the European Court of Justice (ECJ) in October 2015. Critics already comment that the Privacy Shield will share Safe Harbor’s fate and will be declared invalid by the ECJ; nevertheless, until such a decision exists, the Privacy Shield should give companies legal security when transferring data to the United States.

While the text of the new agreement has not yet been published, European Commissioner Věra Jourová stated that the Privacy Shield should be in place in the next few weeks. According to a press release from the European Commission, the new arrangement

…will provide stronger obligations on companies in the U.S. to protect the personal data of Europeans and stronger monitoring and enforcement by the U.S. Department of Commerce and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities. The new arrangement includes commitments by the U.S. that possibilities under U.S. law for public authorities to access personal data transferred under the new arrangement will be subject to clear conditions, limitations and oversight, preventing generalized access. Europeans will have the possibility to raise any enquiry or complaint in this context with a dedicated new Ombudsperson.

One of the best-known critics of U.S. data processing practices and the initiator of the ECJ Safe Harbor decision, Austrian Max Schrems, has already reacted to the news. Schrems stated on social media that the ECJ Safe Harbor decision explicitly says that “generalized access to content of communications” by intelligence agencies violates the fundamental right to respect for privacy. Commissioner Jourová, referring to the Privacy Shield, stated that “generalized access … may happen in very rare cases”—which could be viewed as contradictory to the ECJ decision. Critics also argue that an informal commitment made by the United States during negotiations with the European Union is not something on which European citizens could base lawsuits in the United States if their data is transferred or used illegally.

The European Commission will now prepare a draft text for the Privacy Shield, which still must be ratified by the Member States. The EU Parliament will also review the draft text. In the meantime, the United States will make the necessary preparations to put in place the new framework, monitoring mechanisms and new ombudsperson.

 





FTC Report Alerts Organizations about the Risks and Rewards of Big Data Analytics

On January 6, the Federal Trade Commission (FTC) released a report that it hopes will educate organizations on the important laws and research that are relevant to big data analytics. The report, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues, looks specifically at how big data is used after it is collected and analyzed and provides suggestions aimed at maximizing the benefits and minimizing the risks of using big data.

Risks and Rewards

The report argues that big data analytics can provide numerous opportunities for improvements in society. In addition to more effectively matching products and services to consumers, big data can create opportunities for low income and underserved communities. The report highlights a number of innovative uses of big data that provide benefits to underserved populations, such as increased educational attainment, access to credit through nontraditional methods, specialized health care for underserved communities, and better access to employment.

At the same time, the report shows that potential inaccuracies and biases might lead to detrimental effects for low-income and underserved populations. For example, organizations  could use big data to inadvertently exclude low-income and underserved communities from credit and employment opportunities, which may reinforce existing disparities or weaken the effectiveness of consumer choice.

Considerations for Using Big Data

The report outlines some of the consumer protection laws (in particular, the Fair Credit Reporting Act and FTC Act)  and equal opportunity laws that apply to the use of big data, especially with regard to possible issues of discrimination or exclusion. It also recommends that an organization consider the following questions to help ensure that its use of big data analytics does not lead to unlawful exclusion or discrimination:

How representative is your data set? 

If the data set is missing information from particular populations, take appropriate steps to address this problem.

Does your data model account for biases? 

Review data sets and algorithms to ensure that hidden biases do not have an unintended impact on certain populations.

How accurate are your predictions based on big data? 

Balance the risks of using correlative results, especially where the business’ policies could negatively affect certain populations.

Does your reliance on big data cause ethical or fairness concerns?

Consider whether fairness and ethical considerations advise against using big data in certain circumstances and whether the business can use big data in ways that advance opportunities for previously underrepresented populations.

Monitoring and Enforcement Ahead

The FTC stated that the collective challenge is to make sure that big data analytics continue to provide benefits and opportunities to consumers while adhering to core consumer protection values and principles. It has committed to continue monitoring areas where big data practices could violate existing laws and to bring enforcement actions where appropriate. With that in mind, organizations that already use big data, and those that have been persuaded by its reported benefits, should heed [...]


China Released the Latest Classification Catalogue of Telecommunications Services (2015 Revision)

On December 28, 2015, the Ministry of Industry and Information Technology of China released the newly revised Classification Catalogue of Telecommunications Services, which is due to take effect on March 1, 2016. This revision, the first since 2003, has been long awaited and is expected to reflect the advancement and emergence of new technologies and business models in the telecommunications field, as well as to bring new telecommunications business models within the scope of regulatory oversight.

 

Read the full China Law Alert.





States Begin 2016 with the Expansion of Telehealth Services

As we enter the new year, the health industry continues to see expanded access to telehealth services. After a whirlwind 2015, in which over 200 telehealth-related bills were introduced in 42 states, New York and Connecticut have emerged as the first states in 2016 to implement laws that expand patients’ access to telehealth services.

Effective January 1, 2016, three new laws will greatly expand telehealth services across the state of New York. The first law, A.2552-A, amends section 2999-cc of the New York Public Health Law regarding coverage of telehealth services by insurers, including Medicaid, and with respect to telehealth-related definitions. As defined in the New York Public Health Law, telehealth is “the use of electronic information and communication technologies by telehealth providers to deliver health care services, which include assessment, diagnosis, consultation, treatment, education, care management and/or self-management of a patient.” Among other things, A.2552-A provides that health care services delivered by means of telehealth will be entitled to reimbursement under New York’s Medicaid program, and private insurers may not exclude from coverage a service that is otherwise covered under a patient’s insurance policy because the service is delivered via telehealth. Under this law, reimbursement for telehealth services is contingent upon services being delivered by a telehealth provider when the patient is located at an approved originating site. The second law, A.7488, amends section 2999-cc of the Public Health Law by adding physical therapists and occupational therapists to the list of telehealth providers able to provide telehealth services. The third law, A.7369, amends section 2999-cc by including a dentist’s office as an “originating site” for the delivery of telehealth services.

Connecticut, like New York, started off 2016 with continued efforts to promote telehealth services. Connecticut’s existing telehealth law, which became effective in October 2015, broadly defines “telehealth” as “the mode of delivering health care or other health services via information and communication technologies to facilitate the diagnosis, consultation and treatment, education, care management and self-management of a patient’s physical and mental health, and includes (A) interaction between the patient at the originating site and the telehealth provider at a distant site, and (B) synchronous interactions, asynchronous store and forward transfers or remote patient monitoring.” Under the new Connecticut law, CT Public Act No. 15-88, effective January 1, 2016, commercial insurers must cover telehealth services in the same manner that they cover in-person visits and telehealth coverage must be subject to the same terms and conditions that apply to all other benefits under a patient’s insurance policy.

As improving access to care, coordinating care and identifying cost savings in the delivery of health care services grow in importance, states should continue to steadily expand efforts to allow health care services to be delivered via telehealth. While many states have made strides to expand the use of telehealth services, many others have not yet taken steps to require reimbursement by Medicaid programs or private insurers. At the same time, the multi-state licensure compact developed by [...]


FTC Sees Disconnect on Proposed Connected Cars Legislation

The Energy & Commerce Committee of the U.S. House of Representatives held a hearing on October 21st titled “Examining Ways to Improve Vehicle and Roadway Safety” to consider (among other matters) Vehicle Data Privacy legislation for internet-connected cars.

The proposed legislation includes requirements that auto manufacturers:

  • “Develop and implement” a privacy policy incorporating key elements on the collection, use and sharing of data collected through technology in vehicles. By providing the policy to the National Highway Traffic Safety Administration, a manufacturer earns certain protection against enforcement action under Section 5 of the Federal Trade Commission Act.
  • Retain data no longer than is determined necessary for “legitimate business purposes.”
  • Implement “reasonable measures” to ensure that the data is protected against theft/unauthorized access or use (hacking).

Manufacturers that fail to comply face a maximum penalty, per manufacturer, of up to $1 million. The penalty for failure to protect against hacking is up to $100,000 per “unauthorized” access.

Maneesha Mithal, Associate Director, Division of Privacy and Identity Protection, of the Federal Trade Commission (FTC), testified that the proposed legislation “could substantially weaken the security and privacy protections that consumers have today.”

The FTC’s criticism focuses on the proposed safe harbor against FTC enforcement for manufacturers. The FTC testified that a manufacturer should not earn immunity under the FTC Act if the privacy policy offers little or no privacy protection, or is not followed or enforced. The FTC expressed disapproval of provisions allowing retroactive application of a privacy policy to data previously collected. The FTC also advised against applying the proposed safe harbor to data outside of the vehicle, such as data collected from a website or mobile app.

Although the FTC applauded the goal of deterring criminal hacking of the auto systems, the FTC testified that the legislation, as drafted, may disincentivize manufacturers’ efforts in safety and privacy improvements. The testimony echoed that of other industry critics who believe that what is considered “authorized” access is too vague, which may prevent manufacturers from allowing others to access vehicle data systems, such as for repair or research on an auto’s critical systems.

Finally, the FTC criticized the provisions creating a council to develop cybersecurity best practices.  Since the council could operate by a simple majority, it could act without any government or consumer advocacy input, diluting consumer protections.

The hearing agenda, as well as the text of the draft legislation, is available here.

The FTC’s prepared statement, as well as the text of the testimony, is available here.





Safe Harbor Update: House Votes to Pass Judicial Redress Act

The Judicial Redress Act of 2015 (H.R. 1428) (Judicial Redress Act) is on its way to the U.S. Senate. On October 20th, the U.S. House of Representatives voted in favor of passage.

The Judicial Redress Act extends certain privacy rights under the Privacy Act of 1974 (Privacy Act) to citizens of the EU and other specified countries.

The preamble to the Judicial Redress Act states that:

“The Judicial Redress Act provides citizens of covered foreign countries with the ability to bring suit in Federal district court for certain Privacy Act violations by the Federal Government related to the sharing of law enforcement information between the United States and a covered foreign government. Any such lawsuit is subject to the same terms and conditions that apply to U.S. citizens and lawful permanent residents who seek redress against the Federal Government under the Privacy Act. Under current law, only U.S. citizens and lawful permanent residents may bring claims against the Federal Government pursuant to the Privacy Act despite the fact that many countries provide U.S. citizens with the ability to seek redress in their courts when their privacy rights are violated. Enactment of this legislation is necessary in order to promote and maintain law enforcement cooperation and information sharing between foreign governments and the United States and to complete negotiations of the Data Protection and Privacy Agreement with the European Union.”

The House’s passage of the Judicial Redress Act is expected to help mitigate one of the key criticisms of U.S. privacy protection from EU regulators. As discussed in our blog posts from earlier this month, in the Court of Justice of the European Union (CJEU) decision invalidating the U.S.-EU Safe Harbor Program, the CJEU noted that EU residents lack an “administrative or judicial means of redress enabling, in particular, the data relating to them to be accessed and, as the case may be, rectified or erased.”  Once passed by the Senate (as is generally expected), the Judicial Redress Act will provide that means of redress.

Check back for updates on the Senate’s consideration of the Judicial Redress Act and the ongoing EU-US negotiations about a Safe Harbor Sequel.





Employee Consent to Use of Personal Data Reliable Under German Law

The German Federal Labor Court (Bundesarbeitsgericht (BAG)) has published the reasons for its two decisions on whether an employee can revoke consent given to his or her employer for public use of the employee’s image in photos, videos or other marketing materials (BAG 19 February 2015, 8 AZR 1011/13; BAG 11 December 2014, 8 AZR 1010/13). The BAG held that (1) an employer can rely on an employee’s voluntary consent under German data privacy laws and (2) an employee must take into account the employer’s interests when justifying his or her revocation of a valid consent. The BAG’s decisions are notable because they are contrary to the widely held opinion that employee consent given in the context of the employment relationship is not completely voluntary.

German data privacy and copyright laws require an employer to obtain an employee’s consent to use the employee’s image in photos or videos developed for marketing or similar purposes.  The consent must be voluntarily given and not tied to the employee’s employment status.  Before the BAG’s decisions, some German data privacy law commentators argued that an employee’s consent is not always freely given because of the employee’s subordinate status in the employment relationship.

Now, under the BAG’s decisions, the existence of the employer-employee relationship does not cause an employee’s individual consent to be per se ineffective. The BAG determined that employees can freely choose whether or not to consent. If an employee believes that he or she is subject to discrimination for withholding consent, remedies are available under other German laws. The BAG emphasized that the consent must be in writing and include certain information to be valid, and that whether the consent is subsequently revocable depends on the facts and circumstances.

Key Takeaway:

An employer should obtain individual written consent from an employee to use the employee’s image or likeness in marketing materials. To help prevent future revocation, the written consent must state (among other specific requirements) that the employer’s rights survive termination of the employment relationship.





Court of Justice of the European Union Says Safe Harbor Is No Longer Safe

Earlier today, the Court of Justice of the European Union (CJEU) announced its determination that the U.S.-EU Safe Harbor program is no longer a “safe” (i.e., legally valid) means for transferring personal data of EU residents from the European Union to the United States.

The CJEU determined that the European Commission’s 2000 decision (Safe Harbor Decision) validating the Safe Harbor program did not and “cannot eliminate or even reduce the powers” available to the data protection authority (DPA) of each EU member country. Specifically, the CJEU opinion states that a DPA can determine for itself whether the Safe Harbor program provides an “adequate” level of personal data protection (i.e., “a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union” as required by the EU Data Protection Directive (95/46/EC)).

The CJEU based its decision invalidating the Safe Harbor program in part on its determination that the U.S. government conducts “indiscriminate surveillance and interception carried out … on a large scale.”

The plaintiff in the case that gave rise to the CJEU opinion, Maximilian Schrems (see background below), issued his first public statement praising the CJEU for a decision that “clarifies that mass surveillance violates our fundamental rights.”

Schrems also made reference to the need for “reasonable legal redress,” referring to the U.S. Congress’ Judicial Redress Act of 2015. The Judicial Redress Act, which has bipartisan support, would allow EU residents to bring civil actions in U.S. courts to address “unlawful disclosures of records maintained by an [U.S. government] agency.”

Edward Snowden also hit the Twittersphere with “Congratulations, @MaxSchrems. You’ve changed the world for the better.”

Background

Today’s CJEU opinion invalidating the Safe Harbor program follows on the September 23, 2015, opinion from the advocate general (AG) to the CJEU in connection with Maximilian Schrems vs. Data Protection Commissioner.

In June 2013, Maximilian Schrems, an Austrian student, filed a complaint with the Irish DPA. Schrems’ complaint related to the transfer of his personal data collected through his use of Facebook. Schrems’ Facebook data was transferred by Facebook Ireland to Facebook USA under the Safe Harbor program. The core claim in Schrems’ complaint is that the Safe Harbor program did not adequately protect his personal data, because Facebook USA is subject to U.S. government surveillance under the PRISM program.

The Irish DPA rejected Schrems’ complaint because Facebook was certified under the Safe Harbor Program. Schrems appealed to the High Court of Ireland, arguing that the Irish (or any other country’s) DPA has a duty to protect EU citizens against privacy violations, like access to their personal data as part of U.S. government surveillance. Since Schrems’ appeal relates to EU law (not solely Irish law), the Irish High Court referred Schrems’ appeal [...]

