Foreword from the Editors

In our digital age, technology has brought attention to “privacy” in unprecedented ways.  And privacy, although a term frequently used, has no set meaning.  Ready electronic access to information, for instance, raises questions about the misuse of public records (and, more broadly, whether one can or should ever be forgotten), while lack of access raises concerns about government abuse going unobserved.  Near-daily stories of data breaches now have a connection to matters that are surprisingly close to home, as the “internet of things” means that even our home appliances are collecting and sharing our goings-on.  Businesses need to consider not only whether they are adequately protected against cyberattacks, but also whether they have adequate cyber insurance in place. In this special issue, we reflect on these timely and compelling questions.

– The Boston Bar Journal Board of Editors


Cyberattack Risk: Not Just For Personal Data


by Mark Szpak, Seth Harrington and Lindsey Sullivan

Practice Tips

In August, the United States Department of Justice (“DOJ”) and the Securities and Exchange Commission (“SEC”) unsealed complaints alleging a scheme to hack into the computer systems of newswire services in order to steal material nonpublic information, which the hackers then allegedly used to place trades.

This case is strikingly different from many other recently reported data-breach cases.  Typically such cases have involved an attacker breaking into a company’s network to access personal nonpublic information (e.g., credit card numbers, medical history, social security numbers) that potentially could be sold to other criminals who would use it to attempt identity theft or fraud.  This hack involved information concerning publicly traded companies, obtained not from the companies themselves, but from third-party newswire services.  These complaints highlight that cyberattack risk is not limited to the theft of personal information but extends to any confidential information that hackers may seek to exploit for financial gain – trade secrets, insider information, customer prospects, bid packages, marketing data, business plans, etc. Companies need to understand this risk, how to prevent it, and how to manage it if it occurs.

The Alleged Hacking and “Insider” Trading Scheme

The criminal complaints filed by the DOJ allege that nine individuals hacked into the computer systems of the newswire services Marketwired, PR Newswire, and Business Wire, accessed nonpublic information, and used it to generate $30 million in illegal profits.  The civil complaint, brought by the SEC against 32 individuals, alleges that the defendants generated more than $100 million in illegal profits by trading on the stolen nonpublic information in violation of federal antifraud laws and related SEC rules.

These newswire services were engaged by major publicly traded companies to publish corporate releases and, as a result, received confidential information hours and even days before the information was publicly released.  By infiltrating the computer systems of these newswire services, the criminals were able to access – and act upon – the releases ahead of the market.

Few are surprised that the newswire services were targeted, but the extent of the scheme is drawing attention.  The hacking allegedly lasted five years, during which the criminal attackers allegedly accessed over 150,000 press releases.  In one instance, according to the SEC complaint, the hackers and traders were able to act within the 36-minute window between when the press release was provided to the newswire service and when it was publicly disclosed, executing trades that resulted in $511,000 in profit.

Potential Exposure

Compared to other cybercases, these complaints represent the relatively rare occurrence in which claims are brought against the perpetrators of the data breach and the individuals who sought to use and profit from the stolen information.  As this article goes to press, no litigation is known to have been initiated against either the newswire services or the companies whose information is alleged to have been stolen in this attack.  Yet, based on trends in litigation and regulatory enforcement efforts in matters involving data breaches of personal information, one can expect that claims against hacked entities or their clients may also begin to arise even where only nonpersonal information is involved.

With respect to private litigation, potential claims could face a number of hurdles.  Any potential plaintiff would have to allege a cognizable injury as well as the breach of a duty owed by the defendant to that particular plaintiff.  Many courts in breach cases have dismissed claims (under both tort and contract theories) based on the attenuated relationship between the plaintiff and the defendant regarding an alleged duty to safeguard information for the benefit of the plaintiff.  As we move beyond personal information, each new digital information context will raise questions about whether a duty to anticipate and protect against criminal cybertheft can fairly be imposed, in what circumstances, pursuant to what standards, and, if so, to whom it is owed.

With respect to regulators, the SEC has made clear its position regarding the importance of cybersecurity.  In March 2014, Chair Mary Jo White explained that the SEC has “been focused on cybersecurity-related issues for some time” because “[c]yber threats . . . pose non-discriminating risks across our economy to all of our critical infrastructures, our financial markets, banks, intellectual property, and, as recent events have emphasized, the private data of the American consumer.”  Other regulators (most notably the FTC) have also staked out a position of overlapping jurisdiction.

Best Practices for Companies

In a world where the electronic landscape and the sophistication of cyberhackers are both moving at high speed, here are nonetheless a few best practices that companies facing an actual or potential data security incident (i.e., all companies) can follow to mitigate potential risk:

  • Think carefully about third-party vendors— Companies rely on numerous third parties for everything from corporate disclosures to marketing advice. Thoughtful contracting and training can go a long way to reducing the risk of loss or misuse.
  • Supplement perimeter detection systems— According to the indictments in the newswire case, the criminal hackers were resident in the victims’ systems for years. The case illustrates the potential significance of taking a “defense-in-depth” approach to security and system monitoring.
  • Be realistic about law enforcement and regulators— Notifying and cooperating with law enforcement can be important for many reasons, and the same is true for governmental regulators.  But law enforcement usually focuses on pursuing the criminal attacker, while regulators (by comparison) often focus instead on examining what role, if any, the company itself played in having been criminally attacked.  Keeping that difference in mind can be significant when dealing simultaneously with these respective governmental actors.
  • Involve outside experts (both legal and forensic) at the earliest sign of a possible problem— Never guess or assume what may have taken place. Forensic experts can help your team assess whether an attack or breach has occurred, the actual scope of the breach, and how to contain it, while legal experts (both internal and outside counsel) can direct that forensic review and assess potential legal obligations involving notification, public statements, remediation, responding to law enforcement, dealing with regulators, preparing for litigation, and protecting the record.
  • Carefully draft external statements— When an incident occurs, all outward facing statements should be carefully crafted to say only what is necessary, and to avoid committing to specifics until facts are definitely known. Before an incident occurs, promising any level of protection is risky because, if a hacker makes it into the system, the company’s statements will inevitably be second-guessed.
  • Check your insurance— For the sake of planning, assume that would-be attackers will be able to access any system in your network. Consider, then, what kind of attack or what kind of data loss could cause the most exposure or disruption.  Then make sure your insurance will actually cover those costs and that any related exposure to liability is indeed included.  Evaluate your incident response preparedness through “tabletop exercises” to confirm that you have identified the potential risks and expenses.
  • Avoid creating a bad record— Preservation of evidence after discovering a data breach often involves much more than just the usual email and paper files. In a network attack, the relevant evidence may include large groups of servers, firewall configuration records, network access logs, security management databases, vulnerability scan results, software hotfix schedules, or any number of other forensic or technical data sources that in most litigation rarely come into play.  Identifying that relevant forensic and technical evidence and then maintaining it, while preserving applicable privileges and minimizing the interruption of critical ongoing company operations, can in many cases pose enormous challenges.

The panoply of costs that a cyberhack can impose makes it clear that a well-developed program to secure all types of business information, not just personal information, can provide a competitive advantage.  And when data thieves strike, regardless of the type of data they target, following a prompt and careful response protocol can pay significant legal dividends.

Mark Szpak is a partner in Ropes & Gray’s privacy & data security practice. He focuses on the wide range of challenges that arise after a computer network intrusion, including defending against multidistrict class actions in the U.S. and Canada, handling forensic investigations and responding to regulators.

Seth Harrington, also a partner in Ropes & Gray’s privacy & data security practice, represents clients in all aspects of the response to a privacy or data security incident, and he regularly advises clients on indemnification and insurance matters, including cyber risk insurance.

Lindsey Sullivan is an associate in Ropes & Gray’s business & securities litigation practice, where she focuses on assisting clients through forensic investigations and preservation efforts around privacy and data security breaches.

 


Assessing the Right to be Forgotten

by Daniel Lyons

Heads Up 

From its inception, the Internet has been disrupting business models, as once-ubiquitous brands like Blockbuster, Borders, and Encyclopedia Britannica can attest. But as more of our activities move online, society is beginning to realize how it can disrupt individual lives as well. In 2013, the tech world watched in real time as an ill-advised tweet to 170 followers began trending worldwide and cost 30-year-old PR director Justine Sacco her job while she flew from London to Cape Town, oblivious to the firestorm she had ignited below. More recently, the hack of the adultery-facilitating website Ashley Madison has revealed financial information, names, and intimate details about millions of users online. Our lives increasingly leave digital fingerprints that can prove embarrassing or damaging when revealed on the network.

The “Right to be Forgotten” is the European Union’s attempt to smooth these rough edges of cyberspace. The term originated with Mario Costeja Gonzalez of Spain, who defaulted on a mortgage in 1998. To foreclose on the property, the bank dutifully published a notice of default in Costeja Gonzalez’s local newspaper and its online companion. Because Google indexed the site, the notice featured prominently in search results for Costeja Gonzalez’s name, even years afterward. Embarrassed that his default was among the first facts the Internet recited about him, Costeja Gonzalez sued both the paper and Google under the EU Data Protection Directive, which governs the transnational flow of personal information in EU countries. He alleged that the notice infringed on his right to privacy and requested that the companies delete it.

The European Court of Justice (“ECJ”) largely agreed, at least as to Google. Deciding the case under laws governing privacy and the protection of personal data, the court explained, in a decision dated May 13, 2014, that an individual should have the right to request that a search engine remove links to information about that individual that are “inadequate, irrelevant or no longer relevant, or excessive.” Importantly, the individual need not show that the revelation of the information is prejudicial, because one’s right to privacy should override a search engine’s economic interests in listing search results. But the court was careful to note that there could be an exception if the individual’s right to privacy was outweighed by the public’s interest in having access to the information in question.

The Costeja Gonzalez opinion addresses an important digital-age problem. It is exceptionally easy to post false, misleading, or simply embarrassing personal information online, and once that information is posted, it is exceptionally difficult for the subject to remedy the situation. Costeja Gonzalez’s embarrassment at a decades-old foreclosure may seem trivial. But the same dynamics plague countless others like Ms. Sacco who are forever tarred by a momentary lapse in judgment. They also affect wholly innocent victims whose private details are posted online, such as the subjects of so-called “revenge porn” sites.

Such incidents illustrate the dark side of the information revolution. The genius of the Internet is its ability to reduce information costs. Any information can be reduced to a series of 1s and 0s, replicated, and transmitted anywhere around the world, instantaneously and virtually without cost. This makes it an exceptional tool for communication and learning. But it can hurt those whose self-interest depends upon controlling the flow of information. Dictators have been hobbled by the Internet’s ability to perpetuate ideas and information while connecting underground resistance groups. More benignly, record labels and movie studios have fought a decade-long war against online piracy. What copyright is to Universal, privacy is to the individual: a right to determine if and when certain information becomes public. The Right to be Forgotten is an attempt to force the Internet to respect these rights, by regulating one of the few bottlenecks in the Internet ecosystem: search engines that guide users to information online.

But the ECJ decision is an unworkable solution that risks doing more harm than good. First, the decision applies only to search engines, meaning the information in question is never actually “forgotten.” Google must suppress links to Costeja Gonzalez’s foreclosure notice, but the newspaper itself remains free to leave the notice available online. Second, the court’s standard is astonishingly vague. The decision relies upon Google and other search engines to determine whether a particular link is “inadequate, irrelevant…or excessive,” and if so, whether the “public interest” nonetheless requires the link to remain posted. The court envisions Google analysts assessing the harm that each item causes to the claimant, and carefully balancing that harm against the public’s right to know a particular fact. In reality, Google faces liability for denying legitimate takedown requests but not for granting frivolous ones. This means that the company is likely to err on the side of granting most requests rather than evaluating each request individually—especially when one considers the cost of evaluating potentially millions of such requests each year. Numerous commentators have criticized the similar selection bias evident in the Digital Millennium Copyright Act takedown regime under US law, which has led to the removal of a significant amount of non-infringing material.

More generally, the Right to be Forgotten decision raises broader questions about an Orwellian power to distort history. Unsurprisingly, media organizations are some of the decision’s biggest critics, as they fear individuals will misuse the process to sanitize their pasts. There is some evidence to support this concern: among the first claimants were a British politician seeking to hide his voting record from the public and a convicted sex offender who wanted his status kept hidden.  In Massachusetts, such a right would run counter to the current push for broader public access to court proceedings, particularly in cases involving police officers and other public officials charged with criminal offenses.  In this sense, the EU decision is only part of a broader social conversation about selective disclosure, which also includes the ethics of photoshopping models, contracts prohibiting users from posting negative reviews online, and the use of social media to present idealized images of ourselves online. As the merits of the “Right to be Forgotten” are debated in the United States, it is important that any dialogue, as well as any proposed solutions, carefully balance the rights of both the individual and society to open, accurate, and fair historical information.

Daniel Lyons is an Associate Professor (with tenure) at Boston College Law School, where he specializes in telecommunications, Internet law, administrative law, and property.


Don’t Click This Article!

By Richard J. Yurko

Vantage Point

Each of us lives in a digital soup where, every day, we leave an online record of our activities.  For the convenience of an ATM card, we leave traces of our banking transactions.  For the social benefit of “connecting” with acquaintances, our Facebook, Twitter, LinkedIn, email, and other accounts record what we look at and digitally touch.  For the sake of a few cents off at the store, our loyalty cards compile a rich history of our shopping habits.  For the sake of our iPhones, we let Apple know our location virtually every moment of the day.  This digital soup not only has practical implications for everyday life, but also potentially changes the landscape of two core legal doctrines: the constitutional right to be secure in our private affairs from government intrusion and the common law right to be let alone from private actors.  These issues recently surfaced within a divided United States Supreme Court.

Thousands of digital data points can be and are being aggregated, cross-referenced, and enriched with still other data, like public records, our credit scores, and political donations.  See, e.g., Sullivan, “Data Snatchers! The Booming Market for Your Online Identity”, PCWorld.com (June 26, 2012); Sengupta, “Should Personal Data Be Personal?”, New York Times (February 24, 2012).  This enriched data is, in many respects, more thorough, more accurate, and more detailed than any file ever compiled by J. Edgar Hoover.  It is possible that we can be known better by these data aggregators than by our own friends and kin.

I am annoyed when data aggregations are used to try to sell me a particular product that just happens to be on sale at a store on my walk to work.  Individually, I am not much troubled by the use of this data by the company that first collected it, which may track what brand of over-the-counter headache medicine I buy so that it can offer me an appealing coupon.  I am much more troubled if the first party that collected the information then sells it to third parties with unknown motivations – – commercial, political or nefarious.

Annoyance and displeasure give way to apprehension when purchased data can be enriched and cross-indexed with other information and then used by powerful corporate interests without my knowledge or anticipation.  Moreover, what is to prevent the government from routinely accessing or purchasing such detailed, enriched data aggregations for any purpose?  And if the government could buy such data aggregations, what is to stop it from simply requesting and obtaining the same material from private aggregators, without any subpoena, warrant or judicial oversight?

Indeed, the availability of this detailed information can be used to undermine the underpinnings of essential constitutional safeguards or the common law right to privacy.  Although, certainly, the constitutional right to privacy is substantially different from the common law right to be let alone, they share one common foundation.  Often, both common law and constitutional principles are grounded on the “reasonable expectations” of the parties and, with respect to privacy, those expectations may be less reasonable if intensely personal data is freely available to anyone who wants to buy it.

That issue was recently raised in United States v. Jones, 132 S. Ct. 945 (2012).  In Jones, the majority opinion, authored by Justice Scalia and joined by Chief Justice Roberts and Justices Kennedy, Thomas, and Sotomayor, avoided complex issues arising from the warrantless attachment of a GPS tracking device to a suspect’s automobile by resorting to the 18th Century common law of trespass. The majority concluded that, because the installation necessarily involved a trespass to the suspect’s property right in his vehicle, the resultant search and seizure required a warrant.  A four-justice concurrence would have found the search and seizure impermissible without a warrant on a different ground: that it violated the suspect’s “reasonable expectation of privacy,” relying on Katz v. United States, 389 U.S. 347 (1967).  The concurrence, authored by Justice Alito and joined by Justices Ginsburg, Breyer, and Kagan, rejected the majority’s resort to trespass law as too narrow a basis for principled application going forward.

By far, however, the most provocative question in Jones was raised by Justice Sotomayor in her lone separate concurrence.  Justice Sotomayor joined the majority but wrote separately, I believe, to raise a question.  She was apparently unwilling to join the four-justice concurrence applying the “reasonable expectation of privacy” test because she suggested that our notion of privacy may have to undergo reevaluation in a world in which, with varying degrees of inattention and consciousness, we tolerate third parties collecting a wealth of personal data about us.

Questions about the collection, retention, supplementation, use, misuse, sale, dissemination, and extensive re-use of detailed personal data could be thrashed out in Washington, in fifty state legislatures across the country, or through regulations promulgated elsewhere in the world.  Indeed, there are conversations on these subjects at the Federal Trade Commission, in some state legislatures, and in the European Union.  There is an outside chance that, just as child labor laws, workers’ rights, consumer rights, and notions of economic justice were debated and decided in the state legislatures and then again in Congress, the same will happen with questions of privacy in the digital age. The FTC has issued papers in this area and may well act. See Federal Trade Commission, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers (FTC Report, March 2012); see also Consumer Data Privacy in a Networked World, The White House (February 2012) (recommending legislative and regulatory action).

But I am not optimistic that these issues will be decided quickly or at all by legislative or regulatory means.  The corporations that collect, dissect, enrich, and/or package your personal data for resale are some of the most powerful companies in the world.  Rashid, “Google, Microsoft Survival Conflicts With Internet Data Privacy,” eWeek.com, February 7, 2012.  Quite possibly, in their own enlightened self-interest, they may block legislative or regulatory action.  Moreover, one can question, in this rapidly evolving digital world, whether any law or regulation can sufficiently address the myriad ways in which data can be collected, aggregated and re-used.  Any regulation on, say, the use of “cookies,” could be outmoded even before being promulgated or implemented.  Courts, by contrast, exist to decide questions that arise in disputes between contending parties and decisions on principles in those cases can extend across technological platforms.  That is how the common law developed and, to some extent, how constitutional law has progressed as well.

Well over a century ago, Louis Brandeis and Samuel Warren wrote their seminal piece articulating a right to privacy in the Harvard Law Review.  At that time, the danger seemed to come from yellow journalists writing about and photographing private persons to satisfy what was characterized as a public lust for gossip.  Brandeis and Warren wove together hitherto unconnected strands of cases to fashion an argument for a common law right to privacy.  By giving such a name to the “right to be let alone,” they gave lawyers and judges a means to articulate the right to control the intimate details of one’s own life.  The premise of Warren and Brandeis, however, was that privacy was like the water from a spigot with the individual controlling the spigot.  Samuel Warren & Louis Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 198 (1890). They said, “The common law secures to each individual the right of determining, ordinarily, to what extent his thoughts, sentiments, and emotions shall be communicated to others.”

In the last two decades, rapid technological change and remarkable inattention by the public at large have seemed to cede control of that spigot to Facebook, Apple, and hundreds of other less-well-known companies.  If these corporations now control the spigots of our personal details shared online, can the government’s hand be far away?  If the government is buying and using the data, will we ever know?  If the government is buying the data, should there be some control on that?  Conversely, if we see the greater danger as coming from misuse by private parties of digital data aggregations, is government actually the solution, not the problem, by regulating how and when such information can be collected and shared?

Whether in the role of common law jurists or constitutional arbiters, it may rest with judges to take the first stab at re-examining the right to privacy, or the “reasonable expectation of privacy,” in a digital world.  The right to be let alone from government interference has, obviously, a constitutional dimension.  The right to be let alone from private interference, as a common law principle, applies to private as well as governmental actors.

In conversations in judges’ chambers across the country, the judicial branch may be asked by litigants to return some measure of control of the spigot of private data to the individual.  It should be a lively discussion between judge and law clerk.  Judges, generally a generation older than their clerks, will remember a time when the public reacted with shock to governmental dossiers and enemies’ lists.  Law clerks, some of whom may have grown up in the digital soup and the stunning trade-off between privacy and convenience, may have an entirely different view.  Together, they may be able to fashion a new understanding of privacy in which incidental disclosure to third-party service providers simply through the use of everyday electronic gadgets does not eliminate the broader right to be “let alone.”  That, at least, is my hope, so that we can move towards a new understanding of privacy rights in a digital era of pervasive commercial tracking.

Rich Yurko is the founder of the Boston business litigation boutique, Yurko, Salvesen & Remz, P.C., which publishes a weekly Boston business litigation update.