by Mark Szpak, Seth Harrington and Lindsey Sullivan
In August, the United States Department of Justice (“DOJ”) and the Securities and Exchange Commission (“SEC”) unsealed complaints alleging a scheme to hack into computer systems of newswire services in order to steal material nonpublic information, which the hackers then allegedly used to place trades.
This case is strikingly different from many other recently reported data-breach cases. Typically such cases have involved an attacker breaking into a company’s network to access personal nonpublic information (e.g., credit card numbers, medical history, social security numbers) that potentially could be sold to other criminals who would use it to attempt to commit identity theft or fraud. This hack involved information concerning publicly traded companies, obtained not from the companies themselves, but from third-party newswire services. These complaints highlight that cyberattack risk is not limited to the theft of personal information but extends to any confidential information that hackers may seek to exploit for financial gain – trade secrets, insider information, customer prospects, bid packages, marketing data, business plans, etc. Companies need to understand this risk as well as how to prevent it and manage it if it occurs.
The Alleged Hacking and “Insider” Trading Scheme
The criminal complaints filed by the DOJ allege that nine individuals hacked into the computer systems of newswire services Marketwired, PR Newswire, and Business Wire, accessed nonpublic information, and used it to generate $30 million in illegal profits. The civil complaint, brought by the SEC against 32 individuals, alleges that the defendants generated more than $100 million in illegal profits by trading on the stolen nonpublic information in violation of federal antifraud laws and related SEC rules.
These newswire services were engaged by major publicly traded companies to publish corporate releases and, as a result, received confidential information hours and even days before the information was publicly released. By infiltrating the computer systems of these newswire services, the criminals were able to access – and act upon – the releases ahead of the market.
Few are surprised that the newswire services were targeted, but the extent of the scheme is drawing attention. The hacking allegedly lasted five years, during which the criminal attackers allegedly accessed over 150,000 press releases. In one instance, according to the SEC complaint, the hackers and traders were able to act within the 36-minute period between when the press release was provided to the newswire service and public disclosure of the release, executing trades that resulted in $511,000 in profit.
Compared to other cybercases, these complaints represent the relatively rare occurrence in which claims are brought against the perpetrators of the data breach and the individuals who seek to use and profit from the stolen information. As this article goes to press, no litigation is known to have been initiated against either the newswire services or the companies whose information is alleged to have been stolen in this attack. Yet, based on trends in litigation and regulatory enforcement efforts in matters involving data breaches of personal information, one can expect that claims against hacked entities or their clients may also begin to arise even where only nonpersonal information is involved.
With respect to private litigation, potential claims could face a number of hurdles. Any potential plaintiff would have to allege a cognizable injury as well as the breach of a duty owed by the defendant to the particular plaintiff. Many courts in breach cases have dismissed claims (under both tort and contract theories) based on the attenuated relationship between the plaintiff and defendant regarding an alleged duty to safeguard information for the benefit of the plaintiff. As we move beyond personal information, each new digital information context will raise questions regarding whether a duty to anticipate and protect against criminal cybertheft can be fairly imposed, in what circumstances, pursuant to what standards, and, if so, to whom it is owed.
With respect to regulators, the SEC has made clear its position regarding the importance of cybersecurity. In March 2014, Chair Mary Jo White explained that the SEC “ha[s] been focused on cybersecurity-related issues for some time” because “[c]yber threats pose non-discriminating risks across our economy to all of our critical infrastructures, our financial markets, banks, intellectual property, and, as recent events have emphasized, the private data of the American consumer.” Other regulators (most notably the FTC) have also staked out a position of overlapping jurisdiction.
Best Practices for Companies
In a world where both the electronic landscape and the sophistication of cyberhackers are moving at high speed, there are nonetheless a few best practices that companies facing an actual or potential data security incident (i.e., all companies) can follow to mitigate potential risk:
- Think carefully about third-party vendors— Companies rely on numerous third parties for everything from corporate disclosures to marketing advice. Thoughtful contracting and training can go a long way toward reducing the risk of loss or misuse.
- Supplement perimeter detection systems— According to the indictments in the newswire case, the criminal hackers were resident in the victims’ systems for years. The case illustrates the potential significance of taking a “defense-in-depth” approach to security and system monitoring.
- Be realistic about law enforcement and regulators— Notifying and cooperating with law enforcement can be important for many reasons, and the same is true for governmental regulators. But law enforcement usually focuses on getting the criminal attacker, while regulators (by comparison) often focus instead on examining any role the company had in having been criminally attacked. Keeping that difference in mind can be significant in dealing simultaneously with these respective governmental actors.
- Involve outside experts (both legal and forensic) at the earliest sign of a possible problem— Never guess or assume what may have taken place. Forensic experts can help your team assess whether an attack or breach has occurred, the actual scope of the breach, and how to contain it, while legal experts (both internal and outside counsel) can direct that forensic review and assess potential legal obligations involving notification, public statements, remediation, responding to law enforcement, dealing with regulators, preparing for litigation, and protecting the record.
- Carefully draft external statements— When an incident occurs, all outward facing statements should be carefully crafted to say only what is necessary, and to avoid committing to specifics until facts are definitely known. Before an incident occurs, promising any level of protection is risky because, if a hacker makes it into the system, the company’s statements will inevitably be second-guessed.
- Check your insurance— For the sake of planning, assume that would-be attackers will be able to access any system in your network. Consider, then, what kind of attack or what kind of data loss could cause the most exposure or disruption. Then make sure your insurance will actually cover those costs and that any related exposure to liability is indeed included. Evaluate your incident response preparedness through “tabletop exercises” to confirm that you have identified the potential risks and expenses.
- Avoid creating a bad record— Preservation of evidence after discovering a data breach often involves much more than just the usual email and paper files. In a network attack, the relevant evidence may include large groups of servers, firewall configuration records, network access logs, security management databases, vulnerability scan results, software hotfix schedules, or any number of other forensic or technical data sources that in most litigation rarely come into play. Identifying that relevant forensic and technical evidence and then maintaining it, while preserving applicable privileges and minimizing the interruption of critical ongoing company operations, can in many cases pose enormous challenges.
The panoply of costs that a cyberhack can impose makes it clear that a well-developed program to secure all types of business information, not just personal information, can provide a competitive advantage. And when data thieves strike, regardless of the type of data they target, following a prompt and careful response protocol can pay significant legal dividends.
Mark Szpak is a partner in Ropes & Gray’s privacy & data security practice. He focuses on the wide range of challenges that arise after a computer network intrusion, including defending against multidistrict class actions in the U.S. and Canada, handling forensic investigations and responding to regulators.
Seth Harrington, also a partner in Ropes & Gray’s privacy & data security practice, represents clients in all aspects of the response to a privacy or data security incident, and he regularly advises clients on indemnification and insurance matters, including cyber risk insurance.
Lindsey Sullivan is an associate in Ropes & Gray’s business & securities litigation practice, where she focuses on assisting clients through forensic investigations and preservation efforts around privacy and data security breaches.
“If you don’t know where you are going, you might wind up someplace else.”
— Attributed to Yogi Berra
Massachusetts has one of the country’s most stringent statutory and regulatory schemes relating to data privacy and security. The complexity and scope of available insurance products dealing with “cyber” exposures, in Massachusetts and throughout the business world, have dramatically increased over the past several years and are now as fractured and complicated as is the law, which differs from state to state and from country to country. Insurance underwriters, insurance brokers, technologists, security professionals, pundits and others offer conflicting advice about how to best move through this maze of insurance policies, technology, and the many potentially applicable state and federal regulations that often conflict. Imagine that there is growing apprehension that a company is at risk. At some point, a lawyer is called to advise on insurance protection. What is that lawyer to do?
The first step is to establish a team of professionals and client representatives who will, together, work through the issues that will allow the development of a meaningful strategy. The team should include the lawyer, an insurance professional, a technology resource (internal to the client’s business operations or external), and a representative of the client with sufficient authority to facilitate access to required information. Once the team is in place, the following should happen, in more or less this sequence:
1. The team should develop a realistic understanding of the client’s cyber/privacy and data risk profile. It is important to analyze not just electronic exposures, but traditional paper-based exposures as well. Among the many factors to consider are the following:
A. The type and location of protected information that is procured, handled, managed and stored by the client. Protected information includes, but is not limited to, private personal information (which is defined differently in various jurisdictions and under different regulatory schemes but often consists of an individual’s first name, last name, and either a social security number, bank account number or other similar data point), and confidential business information.
B. The federal, state, and local statutory and regulatory schemes that impact the client’s obligations with respect to protected information. Most states have adopted data privacy regimes that are grounded in statutes (in Massachusetts the applicable statute is Mass. Gen. Laws ch. 93H) and implemented through a series of regulations. Several federal agencies, including the FTC and the SEC, are focused in meaningful ways on the security of personal and other confidential information that is handled by businesses. Courts are, in most instances, finding statutory and regulatory support for robust enforcement actions by these agencies. It is important to keep in mind that many states, Massachusetts among them, have taken the position that their privacy schemes are meant to be protective of their citizens wherever those citizens conduct commerce.
C. The commercial obligations that have been assumed by the client by contract or otherwise in connection with data security and privacy. These should be charted, and compliance measured.
D. The security of non-electronic records that contain protected information.
E. The client’s network and electronic information storage infrastructure. As with non-electronic records, this infrastructure should be assessed by qualified professionals, and a plan should be established for correction of deficiencies.
2. Next, insurance coverage that is already in place should be reviewed. Among the policies to be reviewed are:
A. General Liability policies
B. Directors and Officers Liability policies
C. Errors and Omissions policies
D. Fiduciary policies
E. Crime policies
F. Professional Liability policies
G. Commercial Property policies
The risk profile that has been developed should be reviewed in the context of the insurance coverage that is present in these policies (there are no true “standard forms” and careful, term-specific analysis is required). The insurance professional who is part of the team should assist in identifying potential exposures that are not within the scope of the existing coverage.
3. Having established a risk profile, assessed the protection afforded by the insurance coverage in place and begun the process of correcting deficiencies, the team should next consider whether existing coverage should be supplemented, including whether stand-alone cyber/privacy coverage should be procured. The policy wordings that might be employed to supplement existing policies, and the policy forms that are available as stand-alone products, are not standard forms of insurance. Nearly all wordings can and should be specially negotiated.
As the stand-alone cyber/privacy insurance market has evolved, these general coverage types have become “standard” in most offerings (with the caveat that while the coverage “type” may be standard, the implementation varies from insurer to insurer, and from product to product, in meaningful ways):
A. Third party coverage against claims asserting a “data privacy wrongful act,” a “network security wrongful act,” or other similar coverage grant. This coverage affords the cyber/privacy equivalent of general liability coverage. A client purchases this coverage to protect against third party claims alleging damages due to the client’s handling of protected information.
B. Third party coverage for claims relating to violation of intellectual property rights or copyright.
C. Various types of first party coverages (coverage that will pay an insured for loss that the insured suffers itself, rather than indemnifying an insured for claims asserted by others), such as:
1. Notification and related expense coverage;
2. Coverage for regulatory fines and penalties;
3. Coverage for the expense of recreating information that is damaged, compromised or destroyed as the result of a data security incident, or other covered occurrence;
4. Coverage for the expense resulting from the inability to use a network or other asset as the result of a covered event; and
5. Coverage for fines and penalties payable as the result of a failure to maintain appropriate levels of Payment Card Industry compliance in connection with credit or payment card exposures (this coverage is less generally available).
There are, of course, additional issues that will arise in the course of developing an appropriate mitigation strategy and insurance structure. For example, it may be necessary to allow an insurer, or several insurers, to independently audit a client’s infrastructure. It may be that an insurer adds exclusions to a policy that render otherwise appropriate coverage difficult to accept – for example, adding an exclusion that would allow an insurer to avoid payment obligations in the event that there is a change in network structure, levels of security protection, or the like. These types of potentially devastating exclusions, sometimes based on ambiguous terms that are difficult to either understand in an operational sense or manage, can make otherwise meaningful protection unacceptable.
So, dealing with the structure of an effective cyber/privacy insurance program requires knowing what you’ve got, knowing what’s lacking, and filling gaps in a targeted way. Know where you’re starting, understand the potential end points, and you’ll get where you’re going and not someplace unexpected.
Alan M. Reisch is a Director in the Litigation Group at Goulston & Storrs, as well as a Founder of the firm’s risk management affiliate Fort Hill Risk Management, and counsels clients in connection with insurance coverage and portfolio analysis, risk assessment and management, fraud, data privacy and other related issues.
In February 2015, the Supreme Judicial Court authorized Massachusetts trial and appellate courts to conduct pilot projects on electronic filing and service.[i] The Court also issued Interim Electronic Filing Rules for the pilot projects.[ii] Before the Interim Rules were issued, Trial Court Chief Justice Paula Carey appointed a 24-member Trial Court Public Access to Court Records Committee to develop a uniform policy for court records in written and electronic form.[iii] The Committee will publish proposed rules for public comment and, after considering the public comments, present the proposed rules to the Trial Court and the Supreme Judicial Court for their consideration.
Although electronic filing and service are familiar to federal practitioners who use the PACER system, Massachusetts state courts have far more expansive jurisdiction than the federal courts, and additional study was therefore required to determine how best to administer a state court electronic court record system. This article considers three of the many issues raised by the Commonwealth’s transition from paper to electronic court records and offers the following conclusions:
1. The public has a constitutional and common law right of access to electronic court records;
2. The public has a commensurate right of access to electronically maintained alphabetical indices of criminal cases; and
3. Permitting remote access to criminal case files would not violate the Criminal Offender Record Information Act (“CORI”), G.L. c. 6, § 167, et seq.
Public Access to Electronic Court Records
The shift to electronic court files is not likely to alter the public’s well-recognized constitutional and common law rights to on-site access to court records. The Supreme Court repeatedly has held that the First Amendment grants the public a right of access to criminal proceedings. See Globe Newspaper Co. v. Superior Court, 457 U.S. 596, 600, 610-11 (1982) (striking down Massachusetts statute imposing mandatory closure of sex-offense trials during the testimony of minor victims).[iv] Several Circuit Courts of Appeal (including the First Circuit) have held that the public’s First Amendment access rights extend to judicial documents filed in criminal cases. See, e.g., In re Globe Newspaper Co., 729 F.2d 47, 52 (1st Cir. 1984); Globe Newspaper Co. v. Pokaski, 868 F.2d 497, 505 (1st Cir. 1989). The Supreme Judicial Court similarly has stated that “the public has a First Amendment right of access to court records such as the transcripts of judicial proceedings and the briefs and evidence submitted by the parties.” The Republican Co. v. Appeals Court, 442 Mass. 218, 223 n.8 (2004).[v]
The SJC also has recognized a common law right of access to judicial records in both criminal and civil cases. See, e.g., Boston Herald, Inc. v. Sharpe, 432 Mass. 593, 604 (2000); Republican, 442 Mass. at 222. Cf. Commonwealth v. Winfield, 464 Mass. 672, 672-73, 679 (2013) (no constitutional or common law right to court reporter’s “room recording” that is not the official record of the trial, is not filed with the court, and is not referenced in the court file). Many of these principles are incorporated into the recently amended Uniform Rules on Impoundment Procedure, which now apply to criminal and civil case records. See C.J. Paula M. Carey and Joseph Stanton, Amendments to the Uniform Rules of Impoundment Procedure, BBA Journal Summer 2015 Vol. 59.
None of these cases or rules establishes an absolute right of access to judicial records. Judges are authorized to impound court records on a case-by-case basis or if required by statute or court rule, provided that the governing constitutional or common law standards are met.[vi] “Under the First Amendment to the United States Constitution, ‘[t]he burden falls on the party seeking closure to demonstrate that (1) there exists a substantial probability that permitting access to court records will prejudice his fair trial rights; (2) closure will be effective in protecting those rights, and that the order of closure is narrowly tailored to prevent potential prejudice; and (3) there are no reasonable alternatives to closure.’” Republican, 442 Mass. at 223 n.8 (quoting Sharpe, 432 Mass. at 605 n.24). The common law right of access similarly permits impoundment of judicial records upon a showing of “good cause,” a standard which the Supreme Judicial Court has said “take[s] into account essentially the same factors as required by the First Amendment: ‘the competing rights of the parties and alternatives to impoundment.’” Id.[vii]
Given these well-established constitutional and common law principles, there should be little doubt that the public’s right of access to electronically maintained court files is comparable to its historical right to inspect conventional court records. In both situations, “[i]t is desirable that [judicial proceedings] should take place under the public eye . . . because it is of the highest moment that those who administer justice should always act under the sense of public responsibility, and that every citizen should be able to satisfy himself with his own eyes as to the mode in which a public duty is performed.” Republican, 442 Mass. at 222 (quoting Cowley v. Pulsifer, 137 Mass. 392, 394 (1884) (Holmes, J.)). As discussed below, however, other questions remain about the manner in which the public will be allowed to access electronic records.
Public Access to Electronically Maintained Alphabetical Indices of Criminal Cases
Alphabetical indices of criminal case files have been available to the public since at least the 18th century. See Globe Newspaper Co. v. Fenton, 819 F. Supp. 89, 91-93 (D. Mass. 1993). See also Massachusetts Body of Liberties, art. 48 (1641) (“Every inhabitant of the Country shall have free liberty to search and review any rolls, records or registers of any Court or office….”). Although converting to electronic records will make it unnecessary for clerks to continue to keep conventional, hard-copy alphabetical indices, clerks still will be required under Massachusetts law to maintain alphabetical indices (even if in electronic form). See G.L. c. 221, § 23 (“Each clerk shall keep an alphabetical list of the names of all parties to every action or judgment recorded in the records and a reference to the book and page thereof….”). As a practical matter, moreover, some form of an alphabetical index or search function will be needed to efficiently organize and retrieve case information.
Will the public have a right to use the newly created electronic indices of criminal cases? Case law concerning public access to conventional alphabetical indices and docket sheets suggests that the answer is yes. See Fenton, 819 F. Supp. at 90-91 (public has First Amendment right of access to alphabetical indices of criminal cases); Hartford Courant Co. v. Pellegrino, 380 F.3d 83, 86 (2d Cir. 2004) (First Amendment right of access to docket sheets).
The Fenton court struck down on First Amendment grounds a provision of CORI (since repealed) that prohibited public access to the alphabetical indices of criminal cases in order to promote privacy and rehabilitation interests. Fenton, 819 F. Supp. at 93.[viii] See generally New Bedford Std.-Times Pub. v. Clerk, Third Dist. Ct., 377 Mass. 404, 412, 413 (1979); J. Brant, et al., Public records, FIPA and CORI: how Massachusetts balances privacy and the right to know, 15 Suffolk U. L. Rev. 23, 59-60 (1981).
Fenton held that the historical tradition of access to alphabetical indices, combined with the positive role access has on public oversight and understanding of the courts, required that alphabetical indices to criminal cases be publicly available absent case-specific findings that a restriction on access was narrowly tailored and effectively served a compelling state interest. 819 F. Supp. at 91-99.
Throughout the courts a sprawling amalgam of papers reflects action in connection with judicial proceedings. It is not misleading to think of courthouse papers as comprising a vast library of volumes for which docket sheets are the tables of contents. Without the card catalogue provided by alphabetical indices, a reader is left without a meaningful mechanism by which to find the documents necessary to learn what actually transpired in the courts. The indices thus are a key to effective public access to court activity. And the importance of public access to the proper functioning of our judicial system cannot be overstated.
Id. at 94. Fenton has been cited approvingly by the SJC. See Globe Newspaper Co. v. Dist. Attorney for Middle Dist., 439 Mass. 374, 382 n.12 (2003) (“[a]s a result of [Fenton], the public has access to court clerks’ alphabetical indices of defendants’ names and may thereby obtain access to court records concerning an individual defendant”); Roe v. Attorney General, 434 Mass. 418, 435-436 (2001) (citing Fenton for the proposition that the “denial of public access to court alphabetical indices of criminal defendants violated First Amendment to the United States Constitution”).
These decisions provide strong support for the proposition that the public should have a commensurate right of access to electronically maintained alphabetical indices (or “card catalogues”) of criminal cases. Absent recognition of such a right, the modernization of court files would have the unintended consequence of reducing public oversight of the courts.
Remote Electronic Access to Criminal Case Records and CORI
Recognizing a public right of access to electronic court records ensures that the computerization of judicial records will not diminish the public’s longstanding right to obtain information about the functioning of the judicial system. Other questions remain, such as whether the public should have remote access to case files over the World Wide Web. PACER, for example, permits registered users to remotely search court records by a party’s name in individual federal district courts, courts of appeal and bankruptcy courts to obtain both civil and criminal case records.[ix] Federal Rule of Civil Procedure 5.2 addresses some privacy concerns raised by online access by requiring PACER users to redact certain personal identifying information from their electronic filings. Similar requirements are contained in the Supreme Judicial Court’s proposed new Rule 1:24, Personal Identifying Information. Despite such safeguards, privacy advocates have concerns about the difference between, on the one hand, requiring members of the public to travel to individual courthouses to examine a court record and, on the other hand, permitting the public to access records online either on a court-by-court, county-by-county, or state-wide basis. In addition to these public policy issues, online access to court records also raises a legal issue unique to Massachusetts law: would permitting remote web access to electronic court records in criminal cases violate CORI?
As initially enacted, CORI restricted public access to certain criminal record information held by the executive and judicial branches. See generally New Bedford Std.-Times, 377 Mass. at 412, 413; Public records, FIPA and CORI, supra, 15 Suffolk U. L. Rev. at 58-60. The statutory provision that prohibited public access to alphabetical indices of criminal cases was struck down by Fenton and, thereafter, repealed as part of the 2010 amendments to the statute. Compare St. 1977, c. 841 and G.L. c. 6, § 172(m). See generally G. Massing, CORI Reform–Providing Ex-Offenders with Increased Opportunities without Compromising Employers’ Needs, 55 Boston Bar Journal 21, 22 (Winter 2011).
The combination of Fenton and the 2010 amendments to CORI has led some to conclude that CORI no longer has any application to court records. See Guide to Public Access, Sealing & Expungement, Administrative Office of the District Court Department of the Trial Court (Rev. Ed. 2013) at 8 (“The CORI Law Does Not Limit Access to Clerk’s Records”); see also id. at 8 n.27, 11 & n.34. This conclusion is supported by G.L. c. 6, § 172(m)(2), which provides in relevant part: “[n]otwithstanding this section . . ., the following shall be public records: . . . chronologically maintained court records of public judicial proceedings.” Id. See also Middle Dist., 439 Mass. at 382 (“[d]ocket numbers are assigned chronologically and maintained by courts as part of their court records, criminal proceedings against adult defendants are public proceedings, and docket number information thus falls squarely within the second listed exception to the CORI statute.”); id. at 385 (“There is no violation of the CORI statute when the search specifications consist of information that would also be revealed on the court’s records accessible to the public.”).
Privacy advocates argue that remote access to electronic court records would provide the public with the type of aggregated criminal history information still protected by CORI. The 2010 amendments to CORI authorized the Department of Criminal Justice Information Services (“DCJIS”) to create an electronic database of criminal offender record information and strictly limited access to that database to enumerated persons and entities. See G.L. c. 6, § 172(a). See also G.L. c. 6, §§ 167(e), 168A, 168C, 172(29), (30). The statute also makes it a crime to knowingly obtain or attempt to obtain criminal offender record information under false pretenses or to knowingly communicate such information “except in accordance with [CORI].” G.L. c. 6, § 178.
A discussion of the public policy arguments for and against online access to court records of criminal cases is beyond the scope of this article.[x] As a matter of statutory construction, however, it is difficult to argue that CORI forbids remote access to court records (whether on a state-wide, county-wide, or court-by-court basis), particularly given the unintended consequences of such an interpretation. For example, the statute draws no distinction between electronic and conventional court records. If CORI applies to accessing electronic court records remotely, then it also would apply to accessing conventional records in a courthouse, a conclusion that would upend centuries of tradition and raise significant constitutional issues of free speech and separation of powers. See generally Fenton, 819 F. Supp. at 98-99; Opinion of the Justices, 365 Mass. 639, 645-647 (1974) (executive branch agency that controlled electronic data processing in the judicial branch would violate art. 30 of the Declaration of Rights). Nor does the statute distinguish between remote and on-site access, or between state and federal courts. Broadly interpreting CORI as applying to court records thus would implicate PACER users as well. Under these circumstances, the criminal penalties imposed for obtaining criminal offender record information under false pretenses or communicating such information except in accordance with the statute seem best understood as protecting the DCJIS database, not court files. See G.L. c. 6, § 178.
Electronic court records represent a great technological advance for the delivery of legal services and justice. But that advance should not render obsolete a far greater innovation ― the Founders’ vision of a presumptively public judicial system. There may be many issues to consider before permitting remote Web access to court records, but violating CORI most likely is not one of them.
Jonathan M. Albano is a partner at Morgan Lewis & Bockius LLP in Boston. He represents the press in courtroom access and privacy matters and was counsel on behalf of media interests in some of the cases cited in this article.
[iii] Committee members include representatives of the Trial Court, the Superior Court, the Boston Municipal Court, the Housing Court, the Land Court, the Probate and Family Court, the Appeals Court, and the Supreme Judicial Court. A transcript of a June 15, 2015 public hearing held by the Committee, as well as written comments received from 36 persons and organizations, is available at http://www.mass.gov/courts/court-info/commissions-and-committees/tc-access-records.html.
[iv] See also Richmond Newspapers v. Virginia, 448 U.S. 555, 580 (1980); Press-Enterprise Co. v. Superior Court, 464 U.S. 501, 513 (1984); Press-Enterprise Co. v. Superior Court, 478 U.S. 1, 10 (1986).
[v] Other courts also have recognized a First Amendment right of access to civil proceedings and records. See generally Publicker Ind., Inc. v. Cohen, 733 F.2d 1059, 1070 (3d Cir. 1984). The Supreme Judicial Court has not yet addressed whether Article 16 of the Declaration of Rights of the Massachusetts Constitution grants the public a comparable right of access to court records.
[vi] The Massachusetts Appeals Court maintains a list of materials that are not available for public inspection. See http://www.mass.gov/courts/docs/appeals-court/impoundment-sources.pdf. But see Commonwealth v. Jones, 472 Mass. 707, 731 (2015) (despite statutory requirement of G. L. c. 233, § 21B that rape shield hearings must be held in camera, Constitution requires trial court to make case-specific findings before closing hearing).
[vii] “The exercise of the power to restrict access, however, must recognize that impoundment is always the exception to the rule, and the power to deny public access to judicial records is to be strictly construed in favor of the general principle of publicity.” Republican, 442 Mass. at 223 (quotation and citation omitted).
[viii] See St. 1977, c. 841 (“the following shall be public records: … (2) chronologically maintained court records of public judicial proceedings, provided that no alphabetical or similar index of criminal defendants is available to the public, directly or indirectly”) (emphasis added).
[ix] The more than one million users of PACER, which is an acronym for Public Access to Court Electronic Records, include attorneys, pro se filers, government agencies, trustees, data collectors, researchers, educational and financial institutions, commercial enterprises, the media, and the general public. See https://www.pacer.gov/.
[x] See, e.g., N. Gomez-Velez, Internet Access to Court Records – Balancing Public Access and Privacy, 51 Loy. L. Rev. 365 (2005); P. Martin, Online Access to Court Records – From Documents to Data, Particulars to Patterns, 53 Villanova L. Rev. 855 (2008). See generally U.S. Dep’t of Justice v. Rep. Comm. for Freedom of the Press, 489 U.S. 749, 764 (1989).
From its inception the Internet has been disrupting business models, as once-ubiquitous brands like Blockbuster, Borders, and Encyclopedia Britannica can attest. But as more of our activities move online, society is beginning to realize how it can disrupt individual lives as well. In 2013, the tech world watched in real time as an ill-advised tweet to 170 followers began trending worldwide and cost 30-year-old PR director Justine Sacco her job while she flew from London to Cape Town, oblivious to the firestorm she had ignited below. More recently, the hack of the adultery-facilitating website Ashley Madison revealed financial information, names, and intimate details about millions of users online. Our lives increasingly leave digital fingerprints that can prove embarrassing or damaging once revealed.
The “Right to be Forgotten” is the European Union’s attempt to smooth these rough edges of cyberspace. The term originated with Mario Costeja Gonzalez of Spain, who defaulted on a mortgage in 1998. To foreclose on the property, the bank dutifully published a notice of default in Costeja Gonzalez’s local newspaper and its online companion. Because Google indexed the site, the notice featured prominently in search results for Costeja Gonzalez’s name, even years afterward. Embarrassed that his default was among the first facts the Internet recited about him, Costeja Gonzalez sued both the paper and Google under the EU Data Protection Directive, which governs the transnational flow of personal information in EU countries. He alleged that the notice infringed on his right to privacy and requested that the companies remove it.
The European Court of Justice (“ECJ”) largely agreed, at least as to Google. Deciding the case under laws governing privacy and the protection of personal data, the court explained, in a decision dated May 13, 2014, that an individual should have the right to request that a search engine remove links to information about him or her that is “inadequate, irrelevant or no longer relevant, or excessive.” Importantly, the individual need not show that the revelation of the information is prejudicial, because one’s right to privacy should override a search engine’s economic interests in listing search results. But the court was careful to note that there could be an exception if the individual’s right to privacy was outweighed by the public’s interest in having access to the information in question.
The Costeja Gonzalez opinion addresses an important digital-age problem. It is exceptionally easy to post false, misleading, or simply embarrassing personal information online, and once that information is posted, it is exceptionally difficult for the subject to remedy the situation. Costeja Gonzalez’s embarrassment at a decades-old foreclosure may seem trivial. But the same dynamics plague countless others like Ms. Sacco who are forever tarred by a momentary lapse in judgment. It also affects wholly innocent victims whose private details are posted online, such as the subjects of so-called “revenge porn” sites.
Such incidents illustrate the dark side of the information revolution. The genius of the Internet is its ability to reduce information costs. Any information can be reduced to a series of 1s and 0s, replicated, and transmitted anywhere around the world, instantaneously and virtually without cost. This makes it an exceptional tool for communication and learning. But it can hurt those whose self-interest depends upon controlling the flow of information. Dictators have been hobbled by the Internet’s ability to perpetuate ideas and information while connecting underground resistance groups. More benignly, record labels and movie studios have fought a decade-long war against online piracy. What copyright is to Universal, privacy is to the individual: a right to determine if and when certain information becomes public. The Right to be Forgotten is an attempt to force the Internet to respect these rights, by regulating one of the few bottlenecks in the Internet ecosystem: search engines that guide users to information online.
But the ECJ decision is an unworkable solution that risks doing more harm than good. First, the decision applies only to search engines, meaning the information in question is never actually “forgotten.” Google must suppress links to Costeja Gonzalez’s foreclosure notice, but the newspaper itself remains free to leave the notice available online. Second, the court’s standard is astonishingly vague. The decision relies upon Google and other search engines to determine whether a particular link is “inadequate, irrelevant…or excessive,” and if so, whether the “public interest” nonetheless requires the link to remain posted. The court envisions Google analysts assessing the harm that each item causes to the claimant, and carefully balancing that harm against the public’s right to know a particular fact. In reality, Google faces liability for denying legitimate takedown requests but not for granting frivolous ones. This means that the company is likely to err on the side of granting most requests rather than evaluating each request individually—especially when one considers the cost of evaluating potentially millions of such requests each year. Numerous commentators have criticized a similar selection bias in the takedown regime of the U.S. Digital Millennium Copyright Act, which has led to the removal of a significant amount of non-infringing material.
More generally, the Right to be Forgotten decision raises broader questions about an Orwellian power to distort history. Unsurprisingly, media organizations are some of the decision’s biggest critics, as they fear individuals will misuse the process to sanitize their pasts. There is some evidence to support this concern: among the first claimants were a British politician seeking to hide his voting record from the public and a convicted sex offender who wanted his status kept hidden. In Massachusetts, the decision runs counter to the current push for broader public access to court proceedings, particularly in cases involving police officers and other public officials charged with criminal offenses. In this sense, the EU decision is only part of a broader social conversation about selective disclosure, which also includes the ethics of photoshopping models, contracts prohibiting users from posting negative reviews online, and the use of social media to present idealized images of ourselves online. As the merits of the “Right to be Forgotten” are debated in the United States, it is important that any dialogue, as well as any proposed solutions, carefully balance the rights of both the individual and society to open, accurate, and fair historical information.
Daniel Lyons is an Associate Professor (with tenure) at Boston College Law School, where he specializes in telecommunications, Internet law, administrative law, and property.
Any Calls, Texts, or Photos May Be Used Against You: Warrantless Cell Phone Searches and Personal Privacy
Posted: April 1, 2014
The world envisioned by the Supreme Court in Chimel v. California, 395 U.S. 752 (1969) – one where physical objects such as spare handcuff keys, drugs, gambling ledgers, and weapons could be found on the person of any arrestee – is now a much different place. Historically, searches incident to arrest have been justified by the need to prevent escape, to prevent the destruction of evidence, and to protect the arresting officers from dangerous weapons. Smartphone technology has changed the landscape and offered new challenges for our courts. In the vast majority of arrests these days, the police locate a cell phone on or near an arrestee, seize it, and seek to search the device pursuant to the search incident to arrest exception to the warrant requirement. This situation obviously implicates incrimination issues, as well as privacy concerns, because one handheld device can contain enormous amounts of personal information collected over lengthy periods of time, and much or even all of this data might be arguably inadmissible or irrelevant to an individual’s conduct or intent at the time of arrest. For this reason, courts applying the search incident to arrest doctrine must carefully balance the government’s ability to seize and use an arrestee’s personal data to incriminate him or her against the risk of allowing an unreasonable intrusion into our personal lives.
This article will provide an overview of the two most recent Massachusetts Supreme Judicial Court (“SJC”) decisions on the issue, and will highlight two cases currently pending before the Supreme Court of the United States.
The SJC has ruled that police can conduct a limited cell-phone search without a warrant pursuant to the search incident to arrest exception. In both Commonwealth v. Phifer, 463 Mass. 790 (2012) and Commonwealth v. Berry, 463 Mass. 800 (2012), the SJC held that checking the arrestee’s cell phone call history in order to discover evidence of the crime of arrest was acceptable under the search incident to arrest exception to the warrant requirement. In Phifer, officers viewed the defendant speaking on his cell phone shortly before engaging in a drug transaction. After police arrested the defendant and a codefendant, the codefendant provided police with his phone number. The subsequent search of the defendant’s cell phone involved a “few ‘simple manipulations’” to display the recent call logs where police matched several recent calls to the codefendant’s phone number. In upholding the search, the Phifer court limited its ruling to the facts of that case, holding that when police had probable cause to believe the search of the cell phone would reveal evidence of crime, the search was constitutional.
But Berry presented a different situation. The police witnessed the defendant selling heroin to a customer from within a vehicle. Officers arrested the defendant and the customer, and seized their cell phones incident to arrest. Unlike Phifer, neither officer witnessed either arrestee use his cell phone before or during the illegal transaction. Still, police reviewed Mr. Berry’s recent call history and dialed the most recent number, correctly presuming that it belonged to the customer. The SJC stated that this “very limited search” was reasonable due to the police officer’s knowledge that cell phones are used in drug transactions, even if police had no particularized suspicion that either the defendant or the customer had used a cell phone to conduct this transaction.
While the Berry court sought to limit its decision to the facts of the case, the effect is likely to be far reaching, extending to many other scenarios. Indeed, the facts present in Berry include 1) experienced officers with knowledge and training in drug transactions; 2) a high crime area; and 3) general knowledge that cell phones are often used in drug transactions. Such general facts will be present in virtually every drug arrest, and thus every arrestee’s cell phone will seemingly be subject to a “limited” search incident to arrest. The Berry court did not require any particularized nexus between the officers’ witnessing the use of a cell phone and a target drug transaction, despite a clear opportunity to do so, given the important factual differences in cell phone usage between the Phifer and Berry offenses.
In April 2014, the United States Supreme Court will revisit these issues. In People v. Riley, No. D059840, 2013 WL 475242 (Cal. Ct. App. Oct. 16, 2013), cert. granted sub nom. Riley v. California, No. 13-132, 2013 WL 3938997 (U.S. Jan. 17, 2014), the Court will consider whether a post-arrest search of the petitioner’s cell phone violates his Fourth Amendment rights. There, police stopped Mr. Riley for having expired vehicle tags. During the stop, the police learned that he was driving with a suspended license and arrested him. Pursuant to policy, the officers conducted an “inventory search” of his vehicle and, in the process, found guns hidden underneath the vehicle’s hood. Officers placed the defendant under arrest and seized his cell phone. Officers then conducted two warrantless searches of the cell phone’s content—one at the scene during which the officer scrolled through the defendant’s contact list, and one at the police station during which a different officer searched photographs and video clips contained therein. The cell phone was a “smartphone that was capable of accessing the Internet, capturing photos and videos, and storing both voice and text messages, among other functions,” according to Mr. Riley’s certiorari petition. Mr. Riley was charged with attempted murder and assault with a semiautomatic weapon, based in part on the contents seized from his cell phone—including infamous gang members’ names and incriminating photographs—that proved critical to the government’s investigation and charging decision.
Mr. Riley argues in his Petition that “Federal courts of appeals and state courts of last resort are openly and intractably divided over whether the Fourth Amendment permits the police to search the digital contents of an arrestee’s cell phone incident to arrest. This issue is manifestly significant.” While the State, in its opposition brief, “acknowledges that there is a growing conflict concerning whether the Fourth Amendment permits law enforcement officers to search the contents of a cell phone incident to arrest,” it argues that the police officers’ search of Mr. Riley’s cell phone did not constitute a Fourth Amendment violation. In support of its position, the State argues that courts “categorically allow the police to search any item of personal property on an arrestee’s person at the time of his lawful arrest,” if the search was reasonable.
A second case accepted by the United States Supreme Court, United States v. Wurie, 728 F.3d 1 (1st Cir. 2013), cert. granted, No. 13-212, 2013 WL 4402108 (U.S. Jan. 17, 2014), addresses whether the Fourth Amendment permits the government to conduct a post-arrest warrantless search of an arrestee’s cell phone call log. There, the police witnessed what they believed to be a drug transaction within a vehicle. Police arrested the defendant for distributing crack cocaine and removed him to the police station. The officer seized two cell phones from Mr. Wurie and eventually used the personal contacts and telephone numbers to determine his home address. Officers then obtained a search warrant for Mr. Wurie’s home where they discovered a firearm, ammunition and drug paraphernalia. He was convicted of numerous drug crimes and of being a felon in possession. On appeal, the First Circuit overturned his conviction, holding that the search incident to arrest exception “does not authorize the warrantless search of data within a cell phone that is seized from an arrestee’s person” unless another exception to the warrant requirement applies.
The Solicitor General filed a petition for a writ of certiorari arguing that it is well-settled that “a custodial arrest based on probable cause justifies a full search of an arrestee and any items found on an arrestee, including items such as wallets, calendars, address books, pagers and pocket diaries.” He further argued that “the cell phone at issue was a comparatively unsophisticated flip phone” and, as a result, this particular case is not suitable for determining the scope of Fourth Amendment rights pertaining to cell phone searches.
The State advanced similar arguments below, and the First Circuit considered and disagreed with each. As to the argument that police may search any item on the arrestee, the First Circuit held that Chimel does not authorize even a limited warrantless search of a cell phone because such a search is not necessary to preserve destructible evidence or promote officer safety. The First Circuit also rejected the idea that the particular phone’s storage capacity should be a factor, quoting the Seventh Circuit’s reasoning that “[e]ven the dumbest of modern cell phones gives the user access to large stores of information.”
It would seem that, even if the Supreme Court holds that searches of cell phones incident to arrest are constitutional, there must be a reasonableness standard applied to limit and condition the nature, scope and extent of such searches. The implications of the upcoming decisions may be far reaching. As the First Circuit in Wurie recognized, the evolution of technology makes the government’s reach into private data ever more problematic. Today, individual cell phones act as bank cards, home security surveillance portals, and repositories for intimate details such as personal health information and social security numbers. Tomorrow, technology will turn another corner, allowing more information to be immediately available to whoever may access a personal cell phone. As technology evolves, and personal e-data continues to be inextricably intertwined with our everyday lives, the law as it applies to devices that possess such personal information will be critical to the debate over personal privacy and governmental intrusion.
Gerry Leone is a former Middlesex County District Attorney. He is a partner with Nixon Peabody LLP and conducts internal and governmental investigations for public and private clients. Gerry also represents individuals and organizations facing complex civil and criminal matters.
Linn Foster Freedman is a partner with Nixon Peabody LLP and is leader of the firm’s Privacy & Data Protection group. Linn practices in data privacy and security law, and complex litigation.
Kathryn M. Sylvia is an associate with the firm and member of the firm’s Privacy & Data Protection team. She concentrates her practice on privacy and security compliance under both state and federal regulations.