Consumer Protection

The Consumer Privacy Bill of Rights Redux

On February 27, 2015, the Obama White House released an “Administration Discussion Draft” of its Consumer Privacy Bill of Rights Act of 2015 (Proposed Consumer Privacy Act).

The Proposed Consumer Privacy Act revises and builds on the “Consumer Privacy Bill of Rights” that the Obama White House released in its 2012 Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy report.

As described during President Obama’s January 12 visit to the Federal Trade Commission (FTC), the Proposed Consumer Privacy Act identifies seven “basic principles to both protect personal privacy and ensure that industry can keep innovating.”   These seven principles are:

  1. Transparency (§101): Transparency is a principle frequently cited in guidance from the FTC, as well as in self-regulatory frameworks, such as the Digital Advertising Alliance’s cross-industry code for interest-based advertising. The Proposed Consumer Privacy Act describes transparency as “concise and easily understandable language, accurate, clear, timely, and conspicuous notice about privacy and security practices.” The notice required from an entity subject to the Proposed Consumer Privacy Act (defined as a “covered entity” (CE)) must describe the entity’s collection, use, disclosure, retention, destruction and security practices.
  2. Individual Control (§102): The Individual Control principle means offering consumers a “reasonable means to control the processing (i.e., taking any action regarding) personal data about them in proportion to the privacy risk to the individual and consistent with context.” An individual must have a way to either withdraw consent related to his or her personal data that is “reasonably comparable” to the means by which the consent was initially granted, or request that the CE “de-identify” (as defined in the Proposed Consumer Privacy Act) his or her personal data.
  3. Respect for Context (§103): Under the Respect for Context principle, a CE must process personal data reasonably “in light of context.” If the processing is not reasonable, the CE must undertake a “privacy risk analysis” to identify and take reasonable steps to mitigate privacy-related risk, including “heightened transparency and individual control,” such as just-in-time notices.  Reasonableness is presumed when a CE’s personal data processing “fulfills an individual’s request.”
  4. Focused Collection and Responsible Use (§104): The Focused Collection and Responsible Use principle requires that a CE limit its collection, retention and use of personal data to a “manner that is reasonable in light of context.” The CE also must “delete, destroy, or de-identify” personal data within a “reasonable time” after the original purpose for its collection, retention, or use has been fulfilled.
  5. Security (§105): Under the Security principle, a CE must: identify internal and external “risks to privacy and security” of personal data; implement and maintain safeguards “reasonably designed” to secure personal data; regularly assess the efficacy of the safeguards; and adjust the safeguards to reflect material changes to business practices or “any other circumstances that create a material impact on the privacy or security” of personal data under the CE’s control. The [...]

Continue Reading




FTC Merger Review Likely to Incorporate Analysis of Privacy Issues

The Federal Trade Commission (FTC or the Commission), along with the U.S. Department of Justice, can challenge mergers it believes will result in a substantial lessening of competition – for example through higher prices, lower quality or reduced rates of innovation.  Although the analysis of whether a transaction may be anticompetitive typically focuses on price, privacy is increasingly regarded as a kind of non-price competition, like quality or innovation.  During a recent symposium on the parameters and enforcement reach of Section 5 of the FTC Act, Deborah Feinstein, the director of the FTC’s Bureau of Competition, noted that privacy concerns are becoming more important in the agency’s merger reviews.  Specifically she stated, “Privacy could be a form of non-price competition important to customers that could be actionable if two kinds of companies competed on privacy commitments on technologies they came up with.”

At this same symposium, Jessica Rich, director of the FTC’s Bureau of Consumer Protection, remarked on the agency’s increasing expectations that companies protect the consumer data they collect and be more transparent about what they collect, how they store and protect it, and about third parties with whom they share the data.

The FTC’s Bureaus of Competition and Consumer Protection fulfill the agency’s dual mission to promote competition and protect consumers, in part, through the enforcement of Section 5 of the FTC Act.  With two areas of expertise and a supporting Bureau of Economics under one roof, the Commission is uniquely positioned to analyze whether a potential merger may substantially lessen privacy-related competition.

The concept that privacy is a form of non-price competition is not new to the FTC.  In its 2007 statement upon closing its investigation into the merger of Google, Inc. and DoubleClick Inc., the Commission recognized that mergers can “adversely affect non-price attributes of competition, such as consumer privacy.”  Commissioner Pamela Jones Harbour’s dissent in the Google/DoubleClick matter outlined a number of forward-looking competition and privacy-related considerations for analyzing mergers of data-rich companies.  The FTC ultimately concluded that the evidence in that case “did not support the theories of potential competitive harm” and thus declined to challenge the deal.  The matter laid the groundwork, however, for the agency’s future consideration of these issues.

While the FTC has yet to challenge a transaction on the basis that privacy competition would be substantially lessened, parties can expect staff from both the Bureau of Competition and the Bureau of Consumer Protection to be working closely together to analyze a proposed transaction’s impact on privacy.  The FTC’s review of mergers between entities with large databases of consumer information may focus on: (1) whether the transaction will result in decreased privacy protections, i.e., lower quality of privacy; and (2) whether the combined parties achieve market power as a result of combining their consumer data.

This concept is not unique to the United States.  The European Commission’s 2008 decision in TomTom/Tele Atlas examined whether there would be a decrease [...]

Continue Reading





Consumer Health Information Update from Both Sides of the Atlantic

As we reported in May 2014, the Federal Trade Commission (FTC) convened stakeholders to explore whether health-related information collected from and about consumers — known as consumer-generated health information (CHI) — through use of the internet and increasingly popular lifestyle and fitness mobile apps is more sensitive and in need of more privacy-sensitive treatment than other consumer-generated data.

One of the key questions raised during the FTC’s CHI seminar is: “what is consumer health information?”  Information gathered during traditional medical encounters is clearly health-related.  Information gathered from mobile apps designed as sophisticated diagnostic tools also is clearly health-related — and may even be “Protected Health Information,” as defined and regulated by the Health Insurance Portability and Accountability Act (HIPAA), depending on the interplay of the app and the health care provider or payor community.  But, other information, such as diet and exercise, may be viewed by some as wellness or consumer preference data (for example, the types of foods purchased).  Other information (e.g., shopping habits) may not look like health information but, when aggregated with other information generated by and collected from consumers, may become health-related information.  Information, therefore, may be “health information,” and may be more sensitive as such, depending on (i) the individual from whom it is collected; (ii) the context in which it is initially collected; (iii) the other information with which it is combined; (iv) the purpose for which the information was initially collected; and (v) the downstream uses of the information.

Notably, the FTC is not the only regulatory body struggling with how to define CHI.  On February 5, 2015, the European Union’s Article 29 Working Party (an EU representative body tasked with advising EU Member States on data protection) published a letter in response to a request from the European Commission to clarify the definitional scope of “data concerning health in relation to lifestyle and wellbeing apps.”

The EU’s efforts to define CHI underscore the importance of understanding CHI.  The EU and the U.S. data privacy and security regimes differ fundamentally in that the EU regime broadly protects personally identifiable information.  The U.S. does not currently provide universal protections for personally identifiable information.  The U.S. approach varies by jurisdiction and type of information and does not uniformly regulate the mobile app industry or the CHI captured by such apps.  These different regulatory regimes make the EU’s struggle to define the precise scope and definition of “lifestyle and wellbeing” data (CHI) and develop best practices going forward all the more striking because, even absent such a definition, the EU privacy regime would offer protections.

The Article 29 Working Party letter acknowledges the European Commission’s work to date, including the European Commission’s “Green Paper on Mobile Health,” which emphasized the need for strong privacy and security protections, transparency – particularly with respect to how CHI interoperates with big data – and the need for specific legislation on CHI-related apps or regulatory guidance that will promote “the safety and performance of lifestyle and wellbeing apps.”  But, [...]

Continue Reading





States Respond to Recent Breaches with Encryption Legislation

In the wake of recent breaches of personally identifiable information (PII) suffered by health insurance companies located in their states, the New Jersey Legislature passed, and the Connecticut General Assembly will consider, legislation that requires health insurance companies offering health benefits within these states to encrypt certain types of PII, including social security numbers, addresses and health information.  New Jersey joins a growing number of states (including California (e.g., 1798.81.5), Massachusetts (e.g., 17.03) and Nevada (e.g., 603A.215)) that require organizations that store and transmit PII to implement data security safeguards.  Massachusetts’ data security law, for example, requires any person or entity that owns or licenses certain PII about a resident of the Commonwealth to, if “technically feasible” (i.e., a reasonable technological means is available), encrypt information stored on laptops and other portable devices and encrypt transmitted records and files that will travel over public networks.  Unlike Massachusetts’ law, New Jersey’s new encryption law applies only to health insurance carriers that are authorized to issue health benefits in New Jersey (N.J. Stat. Ann. § 56:8-196), but it requires health insurance carriers to encrypt records containing the PII protected by the statute when stored on any end-user systems and devices, and when transmitted electronically over public networks (e.g., N.J. Stat. Ann. § 56:8-197).

At the federal level, the Health Insurance Portability and Accountability Act (HIPAA) already requires health plans, as well as other “covered entities” (e.g., health care providers) and their “business associates” (i.e., service providers who need access to a covered entity’s health information to perform their services), to encrypt stored health information or health information transmitted electronically if “reasonable and appropriate” for them to do so (45 C.F.R. §§ 164.306; 164.312).  According to the U.S. Department of Health and Human Services, health plans and other covered entities and their business associates should consider a variety of factors to determine whether a security safeguard is reasonable and appropriate, including: (1) the covered entity or business associate’s risk analysis; (2) the security measures the covered entity or business associate already has in place; and (3) the costs of implementation (68 Fed. Reg. 8336).  If the covered entity or business associate determines that encryption of stored health information or transmitted information is not reasonable and appropriate, however, the covered entity or business associate may instead elect to document its determination and implement an equivalent safeguard.

The New Jersey law and the Connecticut proposal appear to reflect a legislative determination that encryption of stored or transmitted health information is always reasonable and appropriate for health plans to implement, regardless of the other safeguards that the health plan may already have in place.  As hackers become more sophisticated and breaches more prevalent in the health care industry, other states may follow New Jersey and Connecticut by expressly requiring health plans and other holders of health care information to implement encryption and other security safeguards, such as multifactor authentication or minimum password complexity requirements.  In fact, Connecticut’s Senate [...]

Continue Reading





Employers with Group Health Plans: Have You Notified State Regulators of the Breach?

Data security breaches affecting large segments of the U.S. population continue to dominate the news. Over the past few years, there has been considerable confusion among employers with group health plans regarding the extent of their responsibility to notify state agencies of security breaches when a vendor or other third party with access to participant information suffers a breach. This On the Subject provides answers to several frequently asked questions to help employers with group health plans navigate the challenging regulatory maze.

Read the full article.





Secure Sockets Layer (SSL) 3.0 Encryption Declared “No Longer Acceptable” to Protect Data

On Friday, February 13, 2015, the Payment Cards Industry (PCI) Security Standards Council (Council) posted a bulletin to its website, becoming the first regulatory body to publicly pronounce that Secure Sockets Layer (SSL) version 3.0 (and by inference, any earlier version) is “no longer… acceptable for protection of data due to inherent weaknesses within the protocol” and, because of the weaknesses, “no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’”  The bulletin does not offer an alternative means that would be acceptable, but rather “urges organizations to work with [their] IT departments and/or partners to understand if [they] are using SSL and determine available options for upgrading to a strong cryptographic protocol as soon as possible.”  The Council reports that it intends to publish soon an updated version of PCI-DSS and the related PA-DSS that will address this issue.  These developments follow news of the Heartbleed and POODLE attacks from 2014 that exposed SSL vulnerabilities.

Although the PCI standards only apply to merchants and other companies involved in the payment processing ecosystem, the Council’s public pronouncement that SSL is vulnerable and weak is a wakeup call to any organization that still uses an older version of SSL to encrypt its data, regardless of whether these standards apply.

As a result, every company should consider taking the following immediate action:

  1. Work with your IT stakeholders and those responsible for website operation to determine if your organization or a vendor for your organization uses SSL v. 3.0 (or any earlier version);
  2. If it does, evaluate with those stakeholders how to best disable these older versions, while immediately upgrading to an acceptable strong cryptographic protocol as needed;
  3. Review vendor obligations to ensure that compliance with a stronger encryption protocol is mandated, and audit vendors to confirm that they are implementing the greater protection;
  4. If needed, consider retaining a reputable security firm to audit or evaluate your and your vendors’ encryption protocols and ensure vulnerabilities are properly remediated; and
  5. Ensure proper testing prior to rollout of any new protocol.
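For organizations running Python-based services, steps 1 and 2 above can be sketched with the standard library’s `ssl` module. This is a minimal illustration under stated assumptions, not a complete hardening checklist; the certificate file names in the comment are placeholders:

```python
import ssl

def hardened_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that refuses SSL 3.0 and other
    legacy protocols. Recent Python/OpenSSL builds already reject SSLv3
    by default, but pinning an explicit floor makes the policy auditable."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Accept TLS 1.2 and newer only; SSL 3.0, TLS 1.0 and TLS 1.1 are refused.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # In a real deployment, the certificate chain and key would be loaded here:
    # ctx.load_cert_chain("server.crt", "server.key")
    return ctx
```

Equivalent settings exist in most web servers and load balancers (e.g., an `ssl_protocols` directive); the point is to disable the legacy versions explicitly rather than rely on library defaults.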

Additional resources and materials:

  • NIST SP 800-57: Recommendation for Key Management – Part 1: General (Revision 3)
  • NIST SP 800-52: Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations (Revision 1)




Consumer Privacy Rights – Germany To Enable Consumer Protection Organisations To Bring Actions For Privacy Violations

The German federal government has recently approved a bill that might substantially change the way consumer privacy rights are enforced throughout the country.

The bill aims to give consumer protection and similar organizations standing to bring an action for injunctive relief against commercial suppliers of goods or services that unlawfully collect or process personal consumer data for certain purposes such as advertisement, market or opinion research, and personality or user profiling.

Even though these uses entail a high risk of privacy violations, consumers frequently refrain from enforcing their related rights as they are unaware of the unlawful practice or deterred by the prospective costs of litigation or the market power of the suppliers. The bill is intended to alleviate this deficit by allowing public interest organizations to invoke privacy rights on behalf of consumers as a whole.

The German position is in-line with general considerations by the European Union to provide for legal mechanisms of collective redress where limited individual damage prevents potential claimants from pursuing an individual claim. At least to some extent, this is also reflected in the most recent draft version of the General Data Protection Regulation currently being debated by the European Commission, Parliament and Council.

In the past, similar collective redress systems have been instituted very successfully in Germany regarding consumer rights in other areas including, for example, the sale of consumer goods and general terms and conditions. If the concept can effectively be transferred to consumer privacy, suppliers of consumer goods and services will have to expect much closer scrutiny of their privacy practices in the future.





The FTC Did Some Kid-ding Around in 2014

2014 was a busy year for the Federal Trade Commission (FTC) with the Children’s Online Privacy Protection Act (COPPA).  The FTC announced something new under COPPA nearly every month, including:

  • In January, the FTC issued an updated version of the free consumer guide, “Net Cetera:  Chatting with Kids About Being Online.”  Updates to the guide include advice on mobile apps, using public WiFi securely, and how to recognize text message spam, as well as details about recent changes to COPPA.
  • In February, the FTC approved the kidSAFE Safe Harbor Program.  The kidSAFE certification and seal of approval program helps children-friendly digital services comply with COPPA.  To qualify for a kidSAFE seal, digital operators must build safety protections and controls into any interactive community features; post rules and educational information about online safety; have procedures for handling safety issues and complaints; give parents basic safety controls over their child’s activities; and ensure all content, advertising and marketing is age-appropriate.
  • In March, the FTC filed an amicus brief in the 9th U.S. Circuit Court of Appeals, arguing that the ruling of the U.S. District Court for the Northern District of California in Batman v. Facebook that COPPA preempts state law protections for the online activities of teenagers (children outside of COPPA’s coverage) is “patently wrong.”
  • In April, the FTC updated its “Complying with COPPA:  Frequently Asked Questions” (aka the COPPA FAQs) to address how COPPA applies in the school setting.  In FAQ M.2, the FTC discussed whether a school can provide the COPPA-required consent on behalf of parents, stating that “Where a school has contracted with an operator to collect personal information from students for the use and benefit of the school, and for no other commercial purpose, the operator is not required to obtain consent directly from parents, and can presume that the school’s authorization for the collection of students’ personal information is based upon the school having obtained the parents’ consent.”  But, the FTC also recommends as “best practice” that schools provide parents with information about the operators to which it has consented on behalf of the parents.  The FTC requires that the school investigate the collection, use, sharing, retention, security and disposal practices with respect to personal information collected from its students.
  • In July, COPPA FAQ H.5, FAQ H.10 and FAQ H.16, which address parental consent verification, also were updated.  In FAQ H.5, the FTC indicates that “collecting a 16-digit credit or debit card number alone” is not sufficient as a parental consent mechanism, but that, in some circumstances, “collection of the card number – in conjunction with implementing other safeguards – would suffice.”  Revised FAQ H.10 indicates that a developer of a child-directed app may use a third party for parental verification “as long as [developers] ensure that COPPA requirements are being met,” including the requirement to “provide parents with a direct notice outlining [the developer’s] information collection practices before the parent provides his or her consent.” In revised FAQ H.16, the FTC [...]

Continue Reading




Pressure Points: OCR Enforcement Activity in 2014

During 2014, the Office for Civil Rights (OCR) of the U.S. Department of Health & Human Services initiated six enforcement actions in response to security breaches reported by entities covered by the Health Insurance Portability and Accountability Act (HIPAA) (covered entities), five of which involved electronic protected health information (EPHI).  The resolution agreements and corrective action plans resolving the enforcement actions highlight key areas of concern for OCR and provide the following important reminders to covered entities and business associates regarding effective data protection programs.

  1. Security risk assessment is key.

OCR noted in the resolution agreements related to three of the five security incidents, involving QCA Health Plan, Inc., New York and Presbyterian Hospital (NYP) and Columbia University (Columbia), and Anchorage Community Mental Health Services (Anchorage), that each entity failed to conduct an accurate and thorough assessment of the risks and vulnerabilities to the entity’s EPHI and to implement security measures sufficient to reduce the risks and vulnerabilities to a reasonable and appropriate level.  In each case, the final corrective action plan required submission of a recent risk assessment and corresponding risk management plan to OCR within a relatively short period after the effective date of the resolution agreement.

  2. A risk assessment is not enough – entities must follow through with remediation of identified threats and vulnerabilities.

In the resolution agreement related to Concentra Health Services (CHS), OCR noted that although CHS had conducted multiple risk assessments that recognized a lack of encryption on its devices containing EPHI, CHS failed to thoroughly implement remediation of the issue for over 3-1/2 years.

  3. System changes and data relocation can lead to unintended consequences.

In two of the cases, the underlying cause of the security breach was a technological change that led to the public availability of EPHI.  A press release on the Skagit County incident notes that Skagit County inadvertently moved EPHI related to 1,581 individuals to a publicly accessible server and initially reported a security breach with respect to only seven individuals, evidently failing at first to identify the larger security breach.  According to a press release related to the NYP/Columbia security breach, the breach was caused when a Columbia physician attempted to deactivate a personally owned computer server on the network, which, due to a lack of technological safeguards, led to the public availability of certain of NYP’s EPHI on internet search engines.

  4. Patch management and software upgrades are basic, but essential, defenses against system intrusion.

OCR noted in its December 2014 bulletin on the Anchorage security breach (2014 Bulletin) that the breach was a direct result of Anchorage’s failure to identify and address basic security risks. For example, OCR noted that Anchorage did not regularly update IT resources with available patches [...]

Continue Reading





C-Suite – Changing Tack on the Sea of Data Breach?

The country awoke to what seems to be a common occurrence now: another corporation struck by a massive data breach.  This time it was Anthem, the country’s second largest health insurer, in a breach initially estimated to involve eighty million individuals.  Both customers’ and employees’ personal information is at issue in the breach, which was instigated by hackers.

Early reports, however, indicated that this breach might be subtly different from those faced by other corporations in recent years.  The difference isn’t in the breach itself, but in the immediate, transparent and proactive actions that the C-Suite took.

Unlike many breaches in recent history, this attack was discovered internally through corporate investigative and management processes already in place.  Further, the C-Suite took an immediate, proactive and transparent stance: just as the investigative process was launching in earnest within the corporation, the C-Suite took steps to fully advise its customers, its regulators and the public at large of the breach.

Anthem’s chief executive officer, Joseph Swedish, sent a personal, detailed e-mail to all customers. An identical message appeared in a widely broadcast press statement.  Swedish outlined the magnitude of the breach, and that the Federal Bureau of Investigation and other investigative and regulatory bodies had already been advised and were working in earnest to stem the breach and its fallout.  He advised that each customer or employee with data at risk was being personally and individually notified.  In a humanizing touch, he admitted that the breach involved his own personal data.

What some data privacy and information security advocates noted was different: the proactive internal measures that discovered the breach before outsiders did; the early decision to cooperate with authorities and the press; and the involvement of the corporate C-Suite in notifying the individuals at risk and the public at large.

The rapid and detailed disclosure could indicate a changing attitude among the American corporate leadership.  Regulators have encouraged transparency and cooperation among Corporate America, the public and regulators as part of an effort to stem the tide of cyber-attacks.  As some regulators and information security experts reason, the criminals are cooperating, so we should as well – we are all in this together.

Will the proactive, transparent and cooperative stance make a difference in the aftermath of such a breach?  Only time will tell, but we will be certain to watch with interest.




