Big Data

Farewell ‘Safe Harbor,’ Hello ‘Privacy Shield’: Europe and U.S. Agree on New Rules for Transatlantic Data Transfer

After intense negotiations, and after the official deadline had passed on Sunday, 31 January 2016, the United States and the European Union have finally agreed on a new set of rules—the “EU-U.S. Privacy Shield”—for data transfers across the Atlantic. The Privacy Shield replaces the old Safe Harbor agreement, which was struck down by the European Court of Justice (ECJ) in October 2015. Critics have already suggested that the Privacy Shield will share Safe Harbor’s fate and be declared invalid by the ECJ; nevertheless, until such a decision exists, the Privacy Shield should give companies legal certainty when transferring data to the United States.

While the text of the new agreement has not yet been published, European Commissioner Věra Jourová stated that the Privacy Shield should be in place within the next few weeks. According to a press release from the European Commission, the new arrangement

…will provide stronger obligations on companies in the U.S. to protect the personal data of Europeans and stronger monitoring and enforcement by the U.S. Department of Commerce and Federal Trade Commission (FTC), including through increased cooperation with European Data Protection Authorities. The new arrangement includes commitments by the U.S. that possibilities under U.S. law for public authorities to access personal data transferred under the new arrangement will be subject to clear conditions, limitations and oversight, preventing generalized access. Europeans will have the possibility to raise any enquiry or complaint in this context with a dedicated new Ombudsperson.

One of the best-known critics of U.S. data processing practices and the initiator of the ECJ Safe Harbor decision, the Austrian Max Schrems, has already reacted to the news. Schrems stated on social media that the ECJ Safe Harbor decision explicitly says that “generalized access to content of communications” by intelligence agencies violates the fundamental right to respect for privacy. Commissioner Jourová, referring to the Privacy Shield, stated that “generalized access … may happen in very rare cases”—which could be viewed as contradicting the ECJ decision. Critics also argue that an informal commitment made by the United States during negotiations with the European Union is not something on which European citizens could base lawsuits in the United States if their data is transferred or used illegally.

The European Commission will now prepare a draft text for the Privacy Shield, which still must be ratified by the Member States. The EU Parliament will also review the draft text. In the meantime, the United States will make the necessary preparations to put in place the new framework, monitoring mechanisms and new ombudsperson.


FTC Report Alerts Organizations about the Risks and Rewards of Big Data Analytics

On January 6, 2016, the Federal Trade Commission (FTC) released a report that it hopes will educate organizations on the important laws and research relevant to big data analytics. The report, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues, looks specifically at how big data is used after it is collected and analyzed, and provides suggestions aimed at maximizing the benefits and minimizing the risks of using big data.

Risks and Rewards

The report argues that big data analytics can provide numerous opportunities for improvements in society. In addition to more effectively matching products and services to consumers, big data can create opportunities for low-income and underserved communities. The report highlights a number of innovative uses of big data that provide benefits to underserved populations, such as increased educational attainment, access to credit through nontraditional methods, specialized health care for underserved communities, and better access to employment.

At the same time, the report shows that potential inaccuracies and biases might lead to detrimental effects for low-income and underserved populations. For example, organizations could inadvertently use big data to exclude low-income and underserved communities from credit and employment opportunities, which may reinforce existing disparities or weaken the effectiveness of consumer choice.

Considerations for Using Big Data

The report outlines some of the consumer protection laws (in particular, the Fair Credit Reporting Act and the FTC Act) and equal opportunity laws that apply to the use of big data, especially with regard to possible issues of discrimination or exclusion. It also recommends that an organization consider the following questions to help ensure that its use of big data analytics does not lead to unlawful exclusion or discrimination (an illustrative sketch follows the list):

How representative is your data set? 

If the data set is missing information from particular populations, take appropriate steps to address this problem.

Does your data model account for biases? 

Review data sets and algorithms to ensure that hidden biases do not have an unintended impact on certain populations.

How accurate are your predictions based on big data? 

Balance the risks of using correlative results, especially where the business’ policies could negatively affect certain populations.

Does your reliance on big data cause ethical or fairness concerns?

Consider whether fairness and ethical considerations advise against using big data in certain circumstances and whether the business can use big data in ways that advance opportunities for previously underrepresented populations.
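
Purely by way of illustration, the first two questions lend themselves to simple programmatic checks: comparing a data set’s demographic mix against a reference population, and comparing outcome rates across groups. The Python sketch below is hypothetical; the column names, reference shares and the four-fifths threshold (a rule of thumb borrowed from U.S. employment law, not from the FTC report) are all assumptions.

```python
# A minimal sketch, assuming a data set with hypothetical "group"
# (demographic segment) and "approved" (model outcome) columns.
import pandas as pd

# Toy applicant data; in practice this would be the analytics data set.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "A", "B", "A", "A", "B"],
    "approved": [1,   1,   0,   0,   1,   1,   0,   1,   0,   0],
})

# Hypothetical reference shares, e.g., census figures for the market served.
reference_share = {"A": 0.5, "B": 0.5}

# Question 1: how representative is the data set?
observed_share = df["group"].value_counts(normalize=True)
for group, expected in reference_share.items():
    observed = observed_share.get(group, 0.0)
    print(f"{group}: observed {observed:.0%}, reference {expected:.0%}, "
          f"gap {observed - expected:+.0%}")

# Question 2: do outcome rates differ markedly by group?
rates = df.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(f"approval-rate ratio (min/max): {ratio:.2f}")
if ratio < 0.8:  # four-fifths heuristic; an assumption, not FTC guidance
    print("Warning: outcome rates differ by group; review for hidden bias.")
```

In practice, the reference shares would come from census or market data, and any flagged gap would prompt the kind of review steps the report describes.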

Monitoring and Enforcement Ahead

The FTC stated that its collective challenge is to make sure that big data analytics continue to provide benefits and opportunities to consumers while adhering to core consumer protection values and principles. It has committed to continue monitoring areas where big data practices could violate existing laws and to bring enforcement actions where appropriate. With that in mind, organizations that already use big data, and those that have been persuaded by its reported benefits, should heed [...]


Court of Justice of the European Union Says Safe Harbor Is No Longer Safe

Earlier today, the Court of Justice of the European Union (CJEU) announced its determination that the U.S.-EU Safe Harbor program is no longer a “safe” (i.e., legally valid) means for transferring personal data of EU residents from the European Union to the United States.

The CJEU determined that the European Commission’s 2000 decision (Safe Harbor Decision) validating the Safe Harbor program did not and “cannot eliminate or even reduce the powers” available to the data protection authority (DPA) of each EU member country. Specifically, the CJEU opinion states that a DPA can determine for itself whether the Safe Harbor program provides an “adequate” level of personal data protection (i.e., “a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union” as required by the EU Data Protection Directive (95/46/EC)).

The CJEU based its decision invalidating the Safe Harbor Decision in part on the determination that the U.S. government conducts “indiscriminate surveillance and interception carried out … on a large scale.”

The plaintiff in the case that gave rise to the CJEU opinion, Maximilian Schrems (see background below), issued his first public statement praising the CJEU for a decision that “clarifies that mass surveillance violates our fundamental rights.”

Schrems also made reference to the need for “reasonable legal redress,” referring to the U.S. Congress’s Judicial Redress Act of 2015. The Judicial Redress Act, which has bipartisan support, would allow EU residents to bring civil actions in U.S. courts to address “unlawful disclosures of records maintained by a [U.S. government] agency.”

Edward Snowden also hit the Twittersphere with “Congratulations, @MaxSchrems. You’ve changed the world for the better.”

Background

Today’s CJEU opinion invalidating the Safe Harbor program follows on the September 23, 2015, opinion from the advocate general (AG) to the CJEU in connection with Maximilian Schrems vs. Data Protection Commissioner.

In June 2013, Maximilian Schrems, an Austrian student, filed a complaint with the Irish DPA. Schrems’ complaint related to the transfer of his personal data collected through his use of Facebook. Schrems’ Facebook data was transferred by Facebook Ireland to Facebook USA under the Safe Harbor program. The core claim in Schrems’ complaint is that the Safe Harbor program did not adequately protect his personal data, because Facebook USA is subject to U.S. government surveillance under the PRISM program.

The Irish DPA rejected Schrems’ complaint because Facebook was certified under the Safe Harbor program. Schrems appealed to the High Court of Ireland, arguing that the Irish (or any other country’s) DPA has a duty to protect EU citizens against privacy violations, such as access to their personal data as part of U.S. government surveillance. Since Schrems’ appeal relates to EU law (not solely Irish law), the Irish High Court referred Schrems’ appeal [...]


The Connected Car and Keeping YOU in the Driver’s Seat

Remember KITT? KITT (the Knight Industries Two Thousand) was the self-directed, self-driving, supercomputer hero of the popular 1980s television show Knight Rider. Knight Rider was a science fiction fantasy profiling the “car of the future.” The self-directed car is science fiction no more. The future is now and, in fact, we’ve seen a lot of press this year about self-driving or driverless cars.

Driverless cars, equipped with a wide variety of connected systems including cameras, radar, sonar and LiDAR (light detection and ranging), are expected on the road within the next few years. They can sense road conditions, identify hazards and negotiate traffic, all from a remote command center. Just as with most connected devices in the age of the Internet of Things (IoT), these ultra-connected vehicles promise to improve efficiency and performance, and to enhance safety.

Though not quite driverless yet, connected vehicles are already on the market and on the road. Like many IoT “things,” ultra-connected vehicle systems may be vulnerable to hacker attacks.

Christopher Valasek and Charlie Miller, two computer security industry leaders, have presented on this topic at various events, including the 2014 Black Hat USA security conference. They analyzed the information security vulnerabilities of various car makes and models, rating the vehicles on three specific criteria: (1) the size of their wireless “attack surface” (i.e., how many data-incorporating features, such as Bluetooth, Wi-Fi, keyless entry and automated tire-monitoring systems, the vehicle includes); (2) access to the vehicle’s network through those data points; and (3) the vehicle’s “cyberphysical” features (i.e., connected features such as parking assist, automated braking and other technological driving aids). This last category of features, combined with access through the data points outlined in items (1) and (2), produced a composite risk profile of each vehicle’s hackability. Their conclusions were startling: radios, brakes and steering systems were all found to be accessible.
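
To make the three-part rating concrete, here is a minimal, hypothetical Python sketch of how such sub-scores might be combined into a composite risk profile. The weights and example figures are invented for illustration and are not taken from Miller and Valasek’s analysis.

```python
# A hypothetical composite "hackability" score built from three 0-10
# sub-scores mirroring the criteria above; weights are assumptions.
from dataclasses import dataclass

@dataclass
class VehicleRisk:
    model: str
    attack_surface: int   # criterion 1: wireless features exposed
    network_access: int   # criterion 2: reachability of the internal network
    cyberphysical: int    # criterion 3: connected driving features

    def composite(self) -> float:
        # Weight network access and cyberphysical features slightly higher,
        # since physical control matters most when attackers can reach it.
        return (0.30 * self.attack_surface
                + 0.35 * self.network_access
                + 0.35 * self.cyberphysical)

# Invented example vehicles, not real makes or ratings.
vehicles = [
    VehicleRisk("Sedan X", attack_surface=8, network_access=7, cyberphysical=9),
    VehicleRisk("Truck Y", attack_surface=4, network_access=3, cyberphysical=5),
]
for v in sorted(vehicles, key=lambda v: v.composite(), reverse=True):
    print(f"{v.model}: composite risk {v.composite():.1f}/10")
```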

Miller and Valasek claim that their intent was to encourage car manufacturers to consider security in vehicle system connectivity and cyberphysical attributes. They approached vehicle manufacturers and shared their report with the Department of Transportation and the Society of Automotive Engineers. Some manufacturers promised to investigate their vehicle systems and correct the deficiencies. Some seemingly ignored the report altogether. They did, however, catch the attention of Senators Ed Markey (D-MA) and Richard Blumenthal (D-CT). On July 21, 2015, Senators Markey and Blumenthal introduced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure vehicles and protect drivers’ privacy. The Security and Privacy in Your Car Act, aptly coined “the SPY Car Act,” would also require manufacturers to establish a “cyber dashboard” that rates vehicle security, informing consumers of the security performance of their vehicles.

As proposed, the SPY Car Act would require that all motor vehicles manufactured in the U.S. be “equipped with reasonable measures to protect against hacking attacks.” All “entry points” are to be protected through “reasonable” measures against hacking. Internal networks are to [...]


‘Right to Be Forgotten’ in Russian Data Protection Law Has Passed All Stages of Approval

On July 14, 2015, Vladimir Putin, the president of the Russian Federation, signed the law implementing the “right to be forgotten” (the Law). The Law comes into force on January 1, 2016.

1. New obligations imposed on Internet search engines

The right to be forgotten applies to search engine operators that distribute advertisements on the Internet aimed at attracting the attention of Russian consumers, and covers the following categories of information:

  • Information disseminated in violation of legislative requirements;
  • Information that is inaccurate;
  • Information that is accurate but no longer relevant due to subsequent developments or actions of the data subject (with some exceptions).

2. How will the right to be forgotten be exercised?

The delisting request submitted by a data subject (the applicant) must contain certain information prescribed by the Law (e.g., the applicant’s full name, passport data and contact information; the specific information that should be forgotten; the reasons for delisting; a reference to the website containing the information to be delisted; and consent to the processing of the applicant’s personal data).

It is important to note that the right to be forgotten may be exercised only by individuals, and not legal entities.

Within 10 business days of receiving a delisting request, the search engine must take one of the following actions:

  • Delist the search results relating to the applicant’s personal information, where those results are returned by search queries that include the applicant’s first name and/or surname; or
  • Provide the applicant with a substantiated written refusal to delist those results.

If the applicant does not agree with the search engine’s decision, he or she is entitled to file a claim with the competent court.

The search engine must keep confidential the fact that the applicant has filed a delisting request.
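
As a purely illustrative aside, the 10-business-day response window is straightforward to compute. The Python sketch below assumes a Monday-to-Friday work week and ignores Russian public holidays, which a real compliance calendar would need to take into account.

```python
# A minimal sketch for tracking the Law's 10-business-day response deadline.
# Assumes a Monday-Friday work week; Russian public holidays are ignored.
from datetime import date, timedelta

def response_deadline(received: date, business_days: int = 10) -> date:
    day = received
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return day

# Example: a request received on Friday, 15 January 2016.
print(response_deadline(date(2016, 1, 15)))  # -> 2016-01-29
```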

3. Liability for non-compliance

Along with the finally approved Law, another initiative was submitted to the State Duma on May 29, 2015, and may be considered in the autumn session this year. If passed, the new initiative would institute an administrative fine of RUR 100,000 (approximately EUR 1,580) for a search engine’s unlawful failure to delist links related to a data subject’s personal information upon his or her request, and a fine of RUR 3 million (approximately EUR 47,619) for a search engine’s failure to comply with a court decision requiring the delisting of such links.


Don’t Miss the Upcoming Privacy + Security Forum

McDermott partners Heather Egan Sussman and Jennifer Geetter are scheduled to speak at the upcoming Privacy + Security Forum in Washington, D.C., on October 21–23, 2015. The Forum is an exciting new annual event, organized by Professors Daniel Solove and Paul Schwartz, that will bring together many of the biggest names in privacy and security to (1) break down the silos between privacy and security, and (2) bring more rigor to conferences so that participants gain useful practical knowledge. Ms. Sussman and Ms. Geetter have been invited to share their knowledge and experience in helping multinational companies build highly successful and functional privacy and security programs.

Held in Washington, D.C., the Forum’s pre-conference workshops are on Wednesday, October 21, and the conference is on Thursday, October 22–Friday, October 23. There are now 100+ confirmed speakers with more to be announced soon. Click here for more information on speakers and sessions.

Want to attend? Contact Ms. Sussman or Ms. Geetter to receive the McDermott discount: 25 percent off the registration fee.


Data Breach Insurance: Does Your Policy Have You Covered?

Recent developments in two closely watched cases suggest that companies that experience data breaches may not be able to get insurance coverage under standard commercial general liability (CGL) policies. CGLs typically provide defense and indemnity coverage for the insured against third-party claims for personal injury, bodily injury or property damage. In the emerging area of insurance coverage for data breaches, court decisions about whether insureds can force their insurance companies to cover costs for data breaches under the broad language of CGLs have been mixed, and little appellate-level authority exists.

On May 18, 2015, the Connecticut Supreme Court unanimously affirmed a state appellate court decision that an IBM contractor was not insured under its CGL for the $6 million in losses it suffered as the result of a data breach of personal identifying information (PII) for over 500,000 IBM employees. The contractor lost computer backup tapes containing the employees’ PII in transit when the tapes fell off a truck onto the side of the road. After the tapes fell out of the truck, an unknown party took them. There was no evidence that anyone ever accessed the data on the tapes or that the loss of the tapes caused injury to any IBM employee. Nevertheless, IBM took steps to protect its employees from potential identity theft, providing a year of credit monitoring services to the affected employees. IBM sought to recover the more than $6 million in costs it incurred for the identity protection services from the contractor, and negotiated a settlement with the contractor for that amount.

The contractor filed a claim under its CGL policy for the $6 million in costs it had reimbursed to IBM. The insurer refused to pay. In subsequent litigation with the contractor, the insurer made two main arguments. First, it argued that it only had the duty to defend against a “suit,” and that the negotiations between the contractor and IBM were not a “suit.” Second, the insurer argued that the loss of the tapes was not an “injury” covered by the policy.

The Connecticut Supreme Court adopted both of the insurer’s arguments, and the decision highlights two key areas for any company considering whether it needs additional insurance coverage for data breaches: what constitutes an “injury” under a CGL, and when an insurer is required to reimburse a company for costs associated with an injury. First, the court held that the loss of the computer tapes was not a “personal injury” under the CGL, because there had been no “publication” of the information stored on the tapes. In other words, because there was no evidence that anyone accessed or used the stolen PII, the court found that the data breach did not constitute a “personal injury” under the policy—even though the contractor spent millions of dollars reimbursing IBM for costs associated with the data breach.

Second, the court found that the CGL policy only required the insurer to reimburse [...]


Data Broker’s Appeal to U.S. Supreme Court Could Reshape Future of Data Privacy Litigation

In a case that could shape the future of data privacy litigation, the Supreme Court recently agreed to review the decision by the U.S. Court of Appeals for the Ninth Circuit under the Fair Credit Reporting Act (FCRA) in Robins v. Spokeo, Inc. At issue is the extent to which Congress may create statutory rights that, when violated, are actionable in court, even if the plaintiff has not otherwise suffered a legally redressable injury.

Spokeo is a data broker that provides online “people search capabilities” and “business information search” (i.e., business contacts, emails, titles, etc.). Thomas Robins (Robins) sued Spokeo in federal district court for publishing data about Robins that incorrectly represented him as married and as having a graduate degree, more professional experience and more money than he actually had. Robins alleged that Spokeo’s inaccurate data caused him actual harm by (among other alleged harms) damaging his employment prospects.

After some initial indecision, the district court dismissed the case in 2011 on the grounds that Robins had not sufficiently alleged any actual or imminent harm traceable to Spokeo’s data.  Without evidence of actual or imminent harm, Robins did not have standing to bring suit under Article III of the U.S. Constitution.  Robins appealed.

On February 4, 2014, the Court of Appeals for the Ninth Circuit announced its decision to reverse the district court, holding that the FCRA allowed Robins to sue for a statutory violation: “When, as here, the statutory cause of action does not require proof of actual damages, a plaintiff can suffer a violation of the statutory right without suffering actual damages.” The Court of Appeals acknowledged limits on Congress’ ability to create redressable statutory causes of action but held that Congress did not exceed those limits in this case.  The court held that “the interests protected” by the FCRA were “sufficiently concrete and particularized” such that Congress could create a statutory cause of action, even for individuals who could not show actual damages.

Why Spokeo Matters

If the Supreme Court reverses the Ninth Circuit’s decision, the decision could dramatically redraw the landscape of data privacy protection litigation in favor of businesses by requiring plaintiffs to allege and eventually prove actual damages. Such a ruling could severely limit lawsuits brought under several privacy-related statutes, in which plaintiffs typically seek statutory damages on behalf of a class without needing to show actual damages suffered by the class members. Litigation under the FCRA, the Telephone Consumer Protection Act and the Video Privacy Protection Act (among other statutes) all could be affected.


GPEN Children’s Privacy Sweep Announced

On 11 May 2015, the UK Information Commissioner’s Office (ICO), the French data protection authority (CNIL) and the Office of the Privacy Commissioner of Canada (OPCC) announced their participation in a new Global Privacy Enforcement Network (GPEN) privacy sweep to examine the data privacy practices of websites and apps aimed at or popular among children. This closely follows the results of GPEN’s latest sweep on mobile applications (apps), which suggested that a high proportion of apps collected significant amounts of personal information but did not sufficiently explain how consumers’ personal information would be collected and used. We originally reported on the mobile apps sweep back in September 2014.

According to the CNIL and the ICO, the purpose of this sweep is to build a global picture of the privacy practices of websites and apps aimed at or frequently used by children. The sweep seeks to instigate recommendations or formal sanctions where non-compliance is identified and, more broadly, to provide valuable privacy education to the public and parents, and to promote best privacy practice in the online space.

Background

GPEN was established in 2010 on the recommendation of the Organisation for Economic Co-operation and Development. GPEN aims to create cooperation between data protection regulators and authorities throughout the world in order to globally strengthen personal privacy. GPEN is currently made up of 51 data protection authorities across some 39 jurisdictions.

According to the ICO, GPEN has identified a growing global trend for websites and apps targeted at (or used by) children. This represents an area that requires special attention and protection. From 12 to 15 May 2015, GPEN’s “sweepers”—made up of 28 volunteering data protection authorities across the globe, including the ICO, CNIL and the OPCC—will each review 50 websites and apps popular among children (such as online gaming sites, social networks, and sites offering educational services or tutoring). In particular, the sweepers will seek to determine, inter alia:

  • The types of information being collected from children;
  • The ways in which privacy information is explained, including whether it is adapted to a younger audience (e.g., through the use of easy-to-understand language, large print, audio and animations);
  • Whether protective controls are implemented to limit the collection of children’s personal information, such as requiring parental permission prior to use of the relevant services or collection of personal information; and
  • The ease with which one can request that personal information submitted by children be deleted.

Comment

We will have to wait some time for in-depth analysis of the sweep, as the results are not expected to be published until Q3 of this year. As with previous sweeps, following publication of the results we can expect data protection authorities to issue new guidance, write to organisations identified as needing to improve, or take more formal action where appropriate.


Telehealth: Implementation Challenges in an Evolving Dynamic

As part of its four-part Digital Health webinar series, on April 14, 2015, McDermott Will & Emery presented “Telehealth: Implementation Challenges in an Evolving Dynamic.”

Telehealth (also known as telemedicine) generally refers to the use of technology to support the remote delivery of health care.  For example:

  • A health care provider in one place is connected to a patient in another place by video conference
  • A patient uses a mobile device or wearable that enables a doctor to monitor his or her vital signs and symptoms
  • A specialist is able to rapidly share information with a geographically remote provider treating a patient

While the benefits of telehealth are clear (for example, making health care available to those in underserved areas and to patients who cannot regularly visit their providers but need ongoing monitoring), implementing telehealth requires providers and patients, as well as payers, to adapt to a dynamic new framework for health care delivery, data sharing and reimbursement. The webinar explored these areas and more.

We are pleased to offer our readers access to the archived webinar and the slide presentation.  If you have questions or would like to learn more, please contact Dale Van Demark.

