Consumer Protection

France About to Embark on a Cookies Sweep Day

Impending sweep day to verify compliance with guidelines on cookies

During the week of September 15–19, 2014, France’s privacy regulator, the Commission Nationale de l’Informatique et des Libertés (CNIL), is organizing a “cookies sweep day” to examine compliance with its guidelines on cookies and other online trackers.

Starting in October 2014, the CNIL will also be conducting onsite and remote inspections to verify compliance with its guidelines on cookies.

Depending on the findings of the sweep and inspections, the CNIL may issue warnings or financial sanctions to non-compliant websites and applications.

Investigations gaining momentum

France is not the only country stepping up its data privacy efforts.  Data protection authorities across the European Union will undertake parallel sweeps at the same time as the CNIL’s September 2014 sweep.  The purpose of the coordinated action is to compare how websites inform internet users about cookies and how they obtain users’ consent.

Nor is this the first time such a sweep has been organized in France.  In May 2013, the CNIL joined 19 counterparts worldwide in an audit of the 2,180 most visited websites and applications.  In that operation, known as “Internet Sweep Day”, the CNIL examined the compliance of 250 frequently visited websites and found that 99 percent of the websites visited by French internet users collect personal information.  Of those that provided information on their data privacy policies, a considerable number did not make them easily accessible, clearly articulated or even written in French.

Compliance made simpler through CNIL guidelines

EU Directive 2002/58 on Privacy and Electronic Communications imposes an obligation to obtain prior consent before placing or accessing cookies and similar technologies on web users’ devices, an obligation incorporated into French law by Article 32-II of the French Data Protection Act.

Not all cookies require prior consent by internet users.  Exempt are cookies used “for the sole purpose of carrying out the transmission of a communication over an electronic communications network” and those that are “strictly necessary for the provision of an information service explicitly requested by the subscriber or user.”

For those cookies that require prior consent, the CNIL will verify how consent is obtained.  Under the CNIL guidelines, consent may be obtained either through an affirmative click or through the user’s continued navigation within the site while a banner informing him or her of the website’s use of cookies remains displayed.
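
To make the mechanics concrete, here is a minimal sketch, in browser-side TypeScript, of the click-or-continue pattern described above.  This is not CNIL-issued code: the cookie name, lifetime, banner wording and the placeholder for loading trackers are all illustrative assumptions, and the guidelines themselves should be consulted for the permitted duration of consent.

```typescript
// Minimal sketch of the "click or continued navigation" consent pattern.
// Not CNIL-issued code: cookie name, lifetime and wording are assumptions.

const CONSENT_COOKIE = "cookieConsent"; // hypothetical cookie name

function hasConsent(): boolean {
  return document.cookie
    .split("; ")
    .some((entry) => entry === `${CONSENT_COOKIE}=yes`);
}

function loadNonEssentialCookies(): void {
  // Placeholder: inject analytics/advertising scripts here, only after
  // consent has been recorded, never before.
}

function recordConsent(): void {
  // Illustrative expiry; the CNIL guidelines limit how long consent
  // remains valid, so check them before choosing a real value.
  const oneYearInSeconds = 365 * 24 * 60 * 60;
  document.cookie = `${CONSENT_COOKIE}=yes; max-age=${oneYearInSeconds}; path=/`;
  loadNonEssentialCookies();
}

function showBanner(): void {
  const banner = document.createElement("div");
  banner.textContent =
    "By continuing to browse, you accept the use of cookies for " +
    "audience measurement and advertising. ";

  // Route 1: consent through an actual click.
  const accept = document.createElement("button");
  accept.textContent = "Accept";
  accept.addEventListener("click", () => {
    recordConsent();
    banner.remove();
    document.removeEventListener("click", onOutsideClick);
  });
  banner.appendChild(accept);
  document.body.appendChild(banner);

  // Route 2: consent through further navigation while the banner remains
  // displayed - approximated here by any click outside the banner.
  function onOutsideClick(event: MouseEvent): void {
    if (!banner.contains(event.target as Node)) {
      recordConsent();
      banner.remove();
      document.removeEventListener("click", onOutsideClick);
    }
  }
  document.addEventListener("click", onOutsideClick);
}

if (!hasConsent()) {
  showBanner();
}
```

A compliant implementation would also need a refusal path: as noted below, users must be able to block advertising cookies and still use the service.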

Website owners can rely on tools made available by the CNIL to ensure their compliance with the cookie requirements.  In particular, a set of guidelines released by the CNIL in December 2013 explains how to obtain consent for the use of cookies and other online trackers in compliance with EU and French data protection requirements.

Under the CNIL guidelines, owners of websites may not force internet users to accept cookies.  Instead, the users must be able to block advertising cookies and still use the relevant service.  Internet users can withdraw their consent at any time, and cookies have a [...]


Processing Personal Data in Russia? Consider These Changes to Russian Law and How They May Impact Your Business

Changes Impacting Businesses that Process Personal Data in Russia

On July 21, 2014, a new law, Federal Law № 242-FZ (the Database Law), was adopted in Russia, introducing amendments to the existing Federal Law “On personal data” and to the existing Federal Law “On information, information technologies and protection of information.”  The Database Law requires companies to store and process the personal data of Russian nationals in databases located in Russia.  At a minimum, its practical effect is that companies operating in Russia that collect, receive, store or transmit (“process”) personal data of natural persons in Russia will be required to place servers in Russia if they plan to continue doing business in that market.  This would include, for example, retailers, restaurants, cloud service providers, social networks and companies operating in the transportation, banking and health care spheres.  Importantly, while the Database Law is not scheduled to come into force until September 1, 2016, a new bill was introduced on September 1, 2014 to move that date up to January 1, 2015.  The transition period is designed to give companies time to adjust to the new Database Law and decide whether to build up local infrastructure in Russia, find a partner that has such infrastructure in Russia, or cease processing the information of Russian nationals.  If the bill filed on September 1 becomes law, however, that transition period will be substantially shortened and businesses operating in Russia will need to act fast to comply by January 1.

Some mass media outlets in Russia have interpreted the provisions of the Database Law as banning the processing of Russian nationals’ personal data abroad.  However, this is not written explicitly into the law, and until such an interpretation is confirmed by the competent Russian authorities, it will remain an open question.  There is hope that the lawmakers’ intent was to give a much-needed boost to the Russian IT and telecom industry, rather than to prohibit the processing of personal data abroad.  If that proves correct, then so long as companies operating in Russia ensure that they process the personal data of Russian nationals in databases physically located in Russia, they should also be able to process this information abroad, subject to compliance with cross-border transfer requirements.

The other novelty of the Database Law is that it grants the Russian data protection authority (DPA) the power to block access to information resources that process information in breach of Russian laws.  Importantly, the Database Law provides that this blocking authority applies irrespective of where the offending company is located or whether it is registered in Russia.  However, the DPA can initiate the blocking procedure only on the basis of a court judgment.  Armed with such a judgment, the DPA can require a hosting provider to take steps to eliminate the infringement.  For example, the hosting provider must inform the owner of the information resource that it must eliminate the infringement, or the hosting [...]


New Data Disposal Law in Delaware Requires Action by Impacted Businesses

While the federal government continues its inaction on data security bills pending in Congress, some U.S. states have been busy at work on this issue over the summer.  A new Delaware law, H.B. 295, signed into law on July 1, 2014 and effective January 1, 2015, provides for a private right of action in which a court may order up to triple damages when a business improperly disposes of personal identifying information at the end of its life cycle.  In addition to this private right of action, the Delaware Attorney General may file suit or bring an administrative enforcement proceeding against the offending business if doing so is in the public interest.

Under the law, personal identifying information is defined as:

A consumer’s first name or first initial and last name in combination with any one of the following data elements that relate to the consumer, when either the name or the data elements are not encrypted:

  • his or her signature,
  • full date of birth,
  • social security number,
  • passport number, driver’s license or state identification card number,
  • insurance policy number,
  • financial services account number, bank account number,
  • credit card number, debit card number,
  • any other financial information or
  • confidential health care information, including all information relating to a patient’s health care history, diagnosis, condition, treatment or evaluation obtained from a health care provider who has treated the patient, which explicitly or by implication identifies a particular patient.

Interestingly, this new law exempts from its coverage banks and financial institutions that are merely subject to the Gramm-Leach-Bliley Act, but it exempts health insurers and health care facilities only if they are subject to and in compliance with the Health Insurance Portability and Accountability Act (HIPAA), and credit reporting agencies only if they are subject to and in compliance with the Fair Credit Reporting Act (FCRA).

Given how the HIPAA and FCRA exemptions are drafted, we expect plaintiffs’ attorneys to seek the private right of action and triple damages in every case where a HIPAA- or FCRA-covered entity fails to properly dispose of personal identifying information, arguing that such failure evidences noncompliance with HIPAA or FCRA and thus cancels the exemption.  Note, however, that some courts have refused to allow state law claims of improper data disposal to proceed where they were preempted by federal law.  See, e.g., Willey v. JP Morgan Chase, Case No. 09-1397, 2009 U.S. Dist. LEXIS 57826 (S.D.N.Y. July 7, 2009) (dismissing individual and class claims alleging improper data disposal based on state law, finding they were preempted by the FCRA).

The takeaway?  Companies that collect, receive, store or transmit personal identifying information of residents of the state of Delaware (or any of the 30+ states in the U.S. that now have data disposal laws on the books) should examine their data disposal policies and practices to ensure compliance with these legal requirements.  In the event a business is alleged to have violated one of [...]


The California AG’s New Guide on CalOPPA – A Summary for Privacy Pros

Last week, the California Attorney General’s Office (AGO) released a series of recommendations entitled Making Your Privacy Practices Public (Guide) designed to help companies meet the requirements of California’s Online Privacy Protection Act (CalOPPA) and “provide privacy policy statements that are meaningful to consumers.”

As we have previously discussed, CalOPPA requires website operators to disclose (1) how they respond to Do Not Track (DNT) signals from browsers and other mechanisms that express the DNT preference, and (2) whether third parties use or may use the site to track (i.e., collect personally identifiable information about) individual California residents “over time and across third party websites.”  Since the disclosure requirements became law, however, there has been considerable confusion among companies about how exactly to comply, and some maintain that, despite W3C efforts, there is still no industry-wide accepted definition of what it means to “respond” to DNT signals.  As a result, the AGO engaged in an outreach process, bringing stakeholders together to comment on draft recommendations over a period of several months, culminating in the publication of the final Guide earlier this week.

The Guide is just that – a guide – rather than a set of binding requirements.  However, the recommendations in the Guide do seem to present a road map for how companies might steer clear of an AGO enforcement action in this area.  As a result, privacy professionals may want to consider matching up the following key recommendations from the Guide with existing privacy policies, to confirm that they align or to consider whether it is necessary and appropriate to make adjustments:

  • Scope of the Policy:  Explain the scope of the policy, such as whether it covers online or offline content, as well as other entities such as subsidiaries.
  • Availability:  Make the policy “conspicuous,” which means:
    • for websites, put a link on every page that collects personally identifiable information (PII).
    • for mobile apps that collect PII, put a link at the point of download and make the policy accessible from within the app – for example, via a link on the “about,” “information” or “settings” page.
  • Do Not Track:
    • Prominently label the section of your policy regarding online tracking, for example: “California Do Not Track Disclosures”.
    • Describe within your privacy policy how you respond to a browser’s Do Not Track signal or to similar mechanisms, rather than merely linking to another website; when evaluating how to “describe” your response (a minimal sketch of reading the signal appears after this list), consider:
      • Do you treat users whose browsers express the DNT signal differently from those without one?
      • Do you collect PII about browsing activities over time and across third-party sites if you receive the DNT signal?  If so, describe your uses of the PII.
    • If you choose to link to an online program rather than describe your own response, provide the link with a general description of what the program does.
  • Third Party Tracking:
    • Disclose whether third parties are or may be collecting PII.
    • When drafting the disclosure [...]
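
For teams wiring these disclosures to actual site behavior, the sketch below shows one way a server might read the DNT signal so that whatever the policy “describes” matches what the code does.  It assumes an Express-style Node server; the route, port and response text are illustrative and not taken from the Guide.

```typescript
// Sketch of reading the DNT signal server-side. Assumes an Express-style
// Node server; route, port and responses are illustrative.

import express from "express";

const app = express();

app.use((req, res, next) => {
  // Browsers expressing the Do Not Track preference send the header "DNT: 1".
  const dntRequested = req.header("DNT") === "1";
  // Record the decision so downstream handlers (analytics, ad tags) can
  // honor it; CalOPPA requires describing whatever this response really is.
  res.locals.trackingAllowed = !dntRequested;
  next();
});

app.get("/", (_req, res) => {
  res.send(
    res.locals.trackingAllowed
      ? "Tracking cookies may be set on this visit."
      : "DNT signal received; tracking is disabled for this visit."
  );
});

app.listen(3000);
```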


Incorporating Risk Analysis Into Your HIPAA Strategy

Proactive healthcare organizations that build a robust privacy and security compliance program, one that would stand up well to federal HIPAA audits, are generally rewarded when it comes to data breach avoidance and remediation.  An important piece of that equation is performing consistent risk analyses.

McDermott partner Edward Zacharias was interviewed by HealthITSecurity to discuss these topics and more.

Read the full interview.





The New Normal: Big Data Comes of Age

On May 1, 2014, the White House released two reports addressing the public policy implications of the proliferation of big data. Rather than trying to slow the accumulation of data or place barriers on its use in analytic endeavors, the reports assert that big data is the “new normal” and encourage the development of policy initiatives and legal frameworks that foster innovation, promote the exchange of information and support public policy goals, while at the same time limiting harm to individuals and society. This Special Report provides an overview of the two reports, puts their conclusions and recommendations into context, and extracts key takeaways for businesses grappling with what these reports—and this “new normal”—mean for them.

Read the full article.





Thinking Outside the HIPAA Box

On Wednesday, May 7, the Federal Trade Commission (FTC) held the third of its Spring Seminars on emerging consumer privacy issues.  This session focused on consumer-generated health information (CHI), data generated by consumers’ use of the internet and mobile apps that relates to an individual’s health.  The “H” in CHI defies easy definition but likely includes, at a minimum, data generated from internet or mobile app activity related to seeking information about specific conditions, disease/medical condition management tools, support and shared experiences through online communities, and tools for tracking diet, exercise or other lifestyle data.

In the United States, many consumers (mistakenly) believe that all of their health-related information is protected, at the federal level, by the Health Insurance Portability and Accountability Act (HIPAA).  HIPAA does offer broad privacy protections to health-related information, but only to identifiable health information (referred to as “Protected Health Information” or “PHI”) received by or on behalf of a “covered entity” or a third party working for a covered entity.  Covered entities are, essentially, health plans and health care providers who engage in reimbursement transactions with health plans.  When HIPAA was enacted in 1996, PHI was the primary type of health information, but CHI, which is generally not also PHI, has changed that.  As FTC Commissioner Julie Brill noted in her opening remarks, CHI is “health data stored outside the HIPAA silo.”

Without the limitations imposed by HIPAA, online service providers and mobile apps generally can (except where state law requires otherwise) treat CHI like the other digital non-health data they collect from consumers.  As a result, the FTC expressed concern that CHI may be aggregated, shared and linked in ways that consumers did not foresee and may not understand.

The panelists at the FTC discussed the difficulty of defining CHI, and whether and how it differs from other kinds of data collected from consumers.  One panelist noted that whether a consumer considers his or her CHI sensitive is highly individualized.  For example, are the heart rate and exercise data collected by mobile fitness apps sensitive?  Would the answer change if these data points were linked with other data points that began to suggest other health or wellness indicators, such as weight?  Would it change again if the linked data was used to predict socioeconomic status, which is often correlated with certain health, wellness and lifestyle indicators, or to inform risk rating or direct-to-consumer targeted advertising?

Panelists also discussed the larger and more general question of how to define privacy in a digital economy and how to balance privacy with the recognized benefits of data aggregation and data sharing.  These questions are compounded by the difficulty of describing data as anonymized or de-identified – foundational principles in most privacy frameworks – because whether data is “identifiable” in the digital economy may depend on its proximity to other pieces of data.
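
A toy illustration of that proximity problem, using entirely fabricated records and hypothetical field names: each dataset below is arguably de-identified on its own, yet joining them on shared quasi-identifiers singles out a named individual.

```typescript
// Toy example (fabricated records, illustrative field names): two datasets,
// each arguably "de-identified" alone, re-identify a person when joined on
// quasi-identifiers such as ZIP code and birth date.

interface FitnessRecord {
  zip: string;
  birthDate: string;
  restingHeartRate: number;
}

interface VoterRecord {
  zip: string;
  birthDate: string;
  name: string;
}

const fitnessData: FitnessRecord[] = [
  { zip: "19901", birthDate: "1980-02-14", restingHeartRate: 48 },
];

const publicRecords: VoterRecord[] = [
  { zip: "19901", birthDate: "1980-02-14", name: "Jane Doe" },
];

// Joining on the shared quasi-identifiers links the "anonymous" health
// measurement to a named individual.
for (const f of fitnessData) {
  const match = publicRecords.find(
    (v) => v.zip === f.zip && v.birthDate === f.birthDate
  );
  if (match) {
    console.log(`${match.name}: resting heart rate ${f.restingHeartRate}`);
  }
}
```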

Though the “how” and “what” of additional [...]


Disclosures Need Not Contain Customers’ Actual Names to Violate the Video Privacy Protection Act, Rules Hulu Court

In the latest of a string of victories for the plaintiffs in the Video Privacy Protection Act (VPPA) class action litigation against Hulu, LLC, the U.S. District Court for the Northern District of California ruled that Hulu’s sharing of certain customer information with Facebook, Inc. may have violated the VPPA, even though Hulu did not disclose the actual names of its customers.  The ruling leaves Hulu potentially liable for the disclosures under the VPPA and opens the door to similar claims against other providers of online content.

The decision by U.S. Magistrate Judge Laurel Beeler addressed Hulu’s argument on summary judgment that it could not have violated the VPPA because Hulu “disclosed only anonymous user IDs and never linked the user IDs to identifying data such as a person’s name or address.”  The court rejected that argument, stating that “[Hulu’s] position paints too bright a line.”  Noting that the purpose of the VPPA was to prevent the disclosure of information “that identifies a specific person and ties that person to particular videos that the person watched,” the court held that liability turned on whether Hulu’s disclosures were “merely an anonymized ID” or were “closer to linking identified persons to the videos they watched.”

Under this principle, the court held that Hulu’s disclosures to comScore, a metrics company that Hulu employed to analyze its viewership for programming and advertising purposes, did not violate the VPPA.  According to the court, Hulu’s disclosure to comScore included anonymized user IDs and other information that could theoretically have been used to identify particular individuals and their viewing choices.  But the plaintiffs had no evidence that comScore had actually used the information in that way.  As the evidence did not “suggest any linking of a specific, identified person and his video habits,” the court held that the disclosures to comScore did not support a claim under the VPPA.

But the court held that Hulu’s disclosures to Facebook had potentially violated the VPPA.  Those disclosures included cookies that Hulu sent to Facebook in order to load a Facebook “Like” button on users’ web browsers.  The court held that the cookies sent to accomplish this task “together reveal information about what the Hulu user watched and who the Hulu user is on Facebook.”  The court noted that this disclosure was “not merely the transmission of a unique, anonymous ID”; rather, it was “information that identifies the Hulu user’s actual identity on Facebook” as well as the video that the user was watching.  Thus, the court held, Hulu’s disclosures to Facebook potentially violated the VPPA.
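
The mechanism the court described is easier to see in miniature.  The sketch below is illustrative only, not Hulu’s actual code, and the plugin URL is an assumption: the point is that the browser’s request for an embedded “Like” button automatically carries the user’s cookies for the third party (who the user is there) together with the URL of the embedding page (what is being watched).

```typescript
// Illustrative sketch only - not Hulu's implementation. Embedding a
// third-party "Like" button makes the browser request it from the third
// party's servers; that request automatically includes (1) the user's
// cookies for that third party, identifying the user there, and (2) the
// embedding page's URL, identifying the video being watched.

function embedLikeButton(videoPageUrl: string): void {
  const iframe = document.createElement("iframe");
  // "like.php?href=" is a hypothetical plugin endpoint used for
  // illustration; the key point is that videoPageUrl travels with the
  // request alongside the user's third-party cookies.
  iframe.src =
    "https://www.facebook.com/plugins/like.php?href=" +
    encodeURIComponent(videoPageUrl);
  document.body.appendChild(iframe);
}

embedLikeButton("https://example-video-site.com/watch/some-title");
```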

The court’s ruling that disclosing seemingly anonymous IDs can lead to liability under the VPPA should cause companies potentially covered by the law to reexamine the ways in which they provide data to third parties.  Such companies should carefully consider not only what information is disclosed but also how the recipients of that data can reasonably be expected [...]


FTC Enforces Facebook Policies to Stop Jerk

The Federal Trade Commission (FTC) recently accused the operator of www.Jerk.com (Jerk) of misrepresenting to users both the source of the personal content used on its purported social networking website and the benefits of purchasing a Jerk membership.  According to the FTC, Jerk improperly accessed personal information about consumers from Facebook, used the information to create millions of unique profiles labeling subjects as either “Jerk” or “Not a Jerk,” and falsely represented that a user could dispute the Jerk/Not a Jerk label and alter the information posted on the website by paying a $30 subscription fee.  The interesting issue in this case is not the name of the defendant or its unsavory business model; rather, it is the FTC’s tacit enforcement of Facebook’s privacy policies governing the personal information of Facebook’s own users.

Misrepresenting the Source of Personal Information

Although Jerk represented that its profile information was created by its users and reflected those users’ views of the profiled individuals, Jerk in fact obtained the profile information from Facebook.  In its complaint, the FTC alleges that Jerk accessed Facebook’s data through Facebook’s application programming interfaces (APIs), which are tools developers can use to interact with Facebook, and downloaded the names and photographs of millions of Facebook users without consent.  The FTC used Facebook’s various policies to support its allegation that Jerk improperly obtained the personal information of Facebook’s users and, in turn, misrepresented the source of the information.  The FTC noted that developers accessing the Facebook platform must agree to Facebook’s policies, which include (1) obtaining users’ explicit consent to share certain Facebook data; (2) deleting information obtained through Facebook once Facebook disables the developer’s Facebook access; (3) providing an easily accessible mechanism for consumers to request the deletion of their Facebook data; and (4) deleting information obtained from Facebook upon a consumer’s request.  Jerk used the data it collected from Facebook not to interact with Facebook but to create unique Jerk profiles for its own commercial advantage.  Data misappropriated from Facebook, rather than content submitted by Jerk’s users, was thus the actual source of the profiles, contrary to Jerk’s representations.

Misrepresenting the Benefit of the Bargain

According to the FTC, Jerk represented that purchase of a $30 subscription would enable users to obtain “premium features,” including the ability to dispute the information posted on Jerk and to alter or delete their Jerk profile.  Users who paid the subscription often received none of the promised benefits.  The FTC noted that even contacting Jerk with complaints was difficult for consumers:  Jerk charged $25 for users to email the customer service department.

A hearing is scheduled for January 2015. Notably, the FTC’s proposed Order, among other prohibitions, enjoins Jerk from using in any way the personal information that Jerk obtained prior to the FTC’s action – meaning the personal information that was obtained illegally from Facebook.





The Highest Court in the European Union Strikes Down the Data Retention Directive as Invalid

In a significant move, the Court of Justice of the European Union (CJEU) has ruled that the Data Retention Directive 2006/24/EC (Directive) is invalid. This decision is expected to have wide-reaching implications for privacy laws across the European Union.

On 8 April 2014, the CJEU held that the requirement imposed on internet service providers (ISPs) and telecom companies to retain data for up to two years “entails a wide-ranging and particularly serious interference with [the] fundamental rights [to respect for private life and communications and to the protection of personal data] in the legal order of the EU, without such an interference being precisely circumscribed by provisions to ensure that it is actually limited to what is strictly necessary.”

The Directive

The Directive is a product of heightened security concerns in the aftermath of terrorist attacks around the world. It facilitated almost unqualified access by national authorities to the data collected by communications providers for the purposes of preventing, investigating, detecting and prosecuting organised crime and terrorism. To enable this access, communications providers were obliged to retain certain data for between six months and two years.

The Ruling

Specifically, the Directive required communications providers to retain traffic and location data, as well as data necessary to identify users; it did not, however, permit the retention of the content of communications or of the information consulted by the user.

The CJEU found that the retained data revealed a phenomenal amount of information about individuals and their private lives. The data enabled the identification of the persons with whom a user has communicated and by what means; the time and place of the communication; and the frequency of communications with certain persons during a given period. From this data, a very clear picture could be formed of the private lives of users, including their daily habits, permanent or temporary places of residence, daily or other movements, activities carried out, social relationships and the social environments frequented.

The CJEU accepted that the retention of data for use by national authorities pursues the legitimate objective of national security, but held that the Directive went further than necessary to fulfil that objective, violating the proportionality principle.

It delineated five main concerns:

  1. Generality – The Directive applied to all individuals and electronic communications without exception.
  2. No Objective Criteria – The Directive did not stipulate any objective criteria or procedures with which national authorities must comply in order to access the data.
  3. No Proportionality of Retention Period – The minimum retention period of six months did not distinguish between categories of data or account for the possible utility of the data vis-à-vis the objectives pursued. Further, the Directive did not provide any objective criteria for determining a retention period strictly necessary in the circumstances.
  4. Insufficient Safeguards – The Directive failed to provide sufficient safeguards against abuse and against unlawful access to and use of the data.
  5. Data May Leave the EU – There was no requirement to retain the data in the EU [...]

