Gelfgatt, Jones, and the Future of Compelled Decryption

by Eric A. Haskell

Legal Analysis

As great quantities of data have come to repose in electronic devices, access to the content of those devices has become critically important to law enforcement in many criminal investigations.  It sometimes happens that law enforcement has a right—typically pursuant to a search warrant—to search for data on a particular device, but is prevented from doing so by the presence of a password or other “key” that makes the data inaccessible or unreadable.  Law enforcement sometimes can bypass the password on its own.  See generally O.S. Kerr & B. Schneier, Encryption Workarounds, 106 Geo. L.J. 989 (2018).  But, other times, the only practical way law enforcement can execute the search is with the help of a person who knows the password.  Because the person who knows the password often is the suspect, that help generally is available only if compelled by court order.  Such “compelled decryption” implicates not only the constitutional requirement that the search of the device be “reasonable,” but also the suspect’s constitutional privilege against compelled self-incrimination.

Basic Principles of the Privilege Against Self-Incrimination

The Fifth Amendment to the federal Constitution provides that “[n]o person . . . shall be compelled . . . to be a witness against himself . . . .”  Article 12 of the Massachusetts Declaration of Rights similarly provides that “[n]o subject shall . . . be compelled to accuse, or furnish evidence against himself.”  Decisional law has interpreted these privileges to bar the government from: (1) compelling a person; (2) to make a testimonial communication; (3) that is incriminating.  Fisher v. United States, 425 U.S. 391, 408 (1976); Commonwealth v. Burgess, 426 Mass. 206, 218 (1997).

The privilege does not protect against compelled provision of a physical identifier such as fingerprints, a blood sample, or a handwriting exemplar.  See generally Schmerber v. California, 384 U.S. 757, 767 (1966); Commonwealth v. Brennan, 386 Mass. 772, 776-83 (1982).  This is because such identifiers do not “extort[] . . . information from the accused” or “attempt to force him to disclose the contents of his own mind,” and thus are not viewed as sufficiently “testimonial” for the privilege to attach.  Doe v. United States, 487 U.S. 201, 210-11 (1988).  Nor does the privilege shield documents from being disclosed pursuant to compulsion, even if their contents are incriminating.  United States v. Hubbell, 530 U.S. 27, 35-36 (2000).  This is because “the creation of those documents was not ‘compelled’ within the meaning of the privilege.”  Id.

The privilege may apply where the mere act of producing a document or a thing is “testimonial” in that it implies an incriminating assertion of fact, such as: that the demanded object exists; that the object produced is authentic; or that the suspect possesses or controls the object.  Fisher, 425 U.S. at 410; Commonwealth v. Hughes, 380 Mass. 583, 588-93 (1980).  But this “act of production” doctrine does not apply where law enforcement already has independent evidence of the incriminating assertions that the act of production would imply.  In other words, if the act of production “adds little or nothing to the sum total of [law enforcement’s] information,” then any facts implied by the act of production are “foregone conclusions” and the privilege does not apply.  Fisher, 425 U.S. at 411; Hughes, 380 Mass. at 592.

Commonwealth v. Gelfgatt

In Gelfgatt, the defendant was arrested in connection with a complex fraud scheme that involved the creation and recording of forged mortgage assignments.  Commonwealth v. Gelfgatt, 468 Mass. 512, 514-15 (2014).  On the day of his arrest, investigators seized several encrypted devices from his home and also interviewed the defendant, who asserted that he was capable of decrypting them.  Id. at 516-17.  After the defendant was charged with forgery, uttering, and attempted larceny, the Commonwealth filed a motion seeking to compel him to enter the passwords into the encrypted devices.  Id. at 517-18 & n.10.  The Superior Court denied the motion and reported the case to the Supreme Judicial Court (“SJC”).

The SJC determined that the contents of the devices were not privileged on self-incrimination grounds because they had been “voluntarily created by the defendant in the course of his real estate dealings.”  Id. at 522 n.13.  The SJC then held that the defendant’s act of entering the passwords would be a testimonial act of production, because it would implicitly acknowledge his “ownership and control of the computers and their contents.”  Id. at 522.  But, the SJC continued, the defendant had already acknowledged as much in his statement to the police; thus, any facts implied by his entering the passwords were foregone conclusions.  Id. at 523-24.  In doing so, the SJC commented that the “foregone conclusion” exception would apply where law enforcement already was aware of “(1) the existence of the evidence demanded; (2) the possession or control of that evidence by the defendant; and (3) the authenticity of the evidence.”  Id. at 522 (citing Fisher, 425 U.S. at 410-13).

Commonwealth v. Jones

In Jones, the defendant was arrested and later charged with sex trafficking and deriving support from prostitution.  Commonwealth v. Jones, 481 Mass. 540, 543-44 (2019).  At the time of his arrest, he possessed a cellular telephone that, the police learned from other sources, he had used to facilitate prostitution transactions.  Id.  The Commonwealth filed a motion seeking to compel him to decrypt the telephone (although, as discussed below, the motion imprecisely described what it sought to compel him to do).  The motion judge denied the motion, interpreting Gelfgatt to require the Commonwealth to establish “(1) the existence of the evidence demanded; (2) the possession or control of that evidence by the defendant; and (3) the authenticity of the evidence,” and concluding that the Commonwealth had failed to demonstrate those propositions with “reasonable particularity.”  Id. at 545, 548, 553 n.14.  The Commonwealth subsequently filed a renewed motion, furnishing additional evidence that, it argued, showed that the defendant’s knowledge of the telephone’s password was a foregone conclusion.  Id.  But the motion judge declined to consider the newly-furnished evidence without a showing that it had been unknown or unavailable to the Commonwealth at the time of the initial motion.  Id. at 545, 558-59.  The Commonwealth then sought relief before a single justice of the SJC, who reserved and reported the case to the full Court on three questions: (1) what burden of proof the Commonwealth must bear to establish the “foregone conclusion” exception to the privilege under Gelfgatt; (2) whether the Commonwealth had met that burden; and (3) whether the Commonwealth was required, in a renewed Gelfgatt motion, to show that any newly-furnished evidence had been unknown or unavailable at the time of the initial motion.

Before answering those questions, the SJC addressed a threshold issue:  What factual assertions must the Commonwealth demonstrate are “foregone conclusions” in order to obtain a Gelfgatt order?  The SJC answered that, when the Commonwealth seeks to compel a defendant to enter a password into a device, “the only fact conveyed . . . is that the defendant knows the password, and can therefore access the device.”  Id. at 547-48.  The Court rejected the proposition that the compelled entry of a password also asserts the defendant’s ownership and control of the device, observing that “individuals may very well know the password to an electronic device that is owned and controlled by another person.”  Id. at 547 n.8.  Accordingly, the SJC concluded, the Commonwealth may invoke the “foregone conclusion” exception simply by showing that the defendant knows the password.  Id.

Turning to the reported questions and relying on article 12, the SJC held that the Commonwealth must make that showing beyond a reasonable doubt.  Id. at 551-55.  Applying that standard, the Court found that the Commonwealth had shown the defendant’s knowledge of the password beyond a reasonable doubt, where: (1) the defendant possessed the telephone at the time of his arrest; (2) one month before his arrest, when asked by the police for his number, the defendant had provided the telephone’s number; (3) a woman told the police that the defendant used the telephone to facilitate prostitution transactions; (4) the telephone’s subscriber records were associated with a second number that was associated with the defendant; and (5) the telephone’s cell site location information (CSLI) placed it in the same locations at the same times as another telephone that was confirmed to belong to the defendant.  Id. at 555-58 (“[S]hort of a direct admission, or an observation of the defendant entering the password himself and seeing the phone unlock, it is hard to imagine more conclusive evidence of the defendant’s knowledge of the [telephone’s] password.”).  Finally, the Court found that the motion judge abused his discretion by declining to consider evidence presented in the Commonwealth’s renewed motion that was not shown to have been unknown or unavailable at the time of the initial motion.  Id. at 558-61.  The Court observed that a Gelfgatt motion, “[m]uch like a search warrant application,” is an “investigatory tool,” the factual support for which may evolve over the course of an investigation.  Id. at 559-60.

The Future of Compelled Decryption

Although Gelfgatt and Jones mark the SJC as a national leader on compelled decryption issues, important questions remain to be answered.

  • Non-Gelfgatt Decryption Procedures

The order in Gelfgatt required the defendant to appear at a digital forensic lab, to enter the password into each device, and “immediately [to] move on . . . .”  468 Mass. at 517 n.10.  It also forbade the Commonwealth from viewing or recording the password entered by the defendant.  Id.  In contrast, the order sought in Jones was “not perfectly clear” as to what it would require the defendant to do, but “suggested that it sought to require the defendant to make a written disclosure of the actual password.”  481 Mass. at 546 n.9.  Acknowledging the possible infirmity with such a procedure, the SJC construed the order sought in Jones as tracking the one sought in Gelfgatt, and approved its issuance on that basis.  Id.

The SJC was correct to hesitate when faced with a request to compel the defendant to disclose his password to law enforcement.  This is because the compelled disclosure of a password is not a testimonial act of production to which the “foregone conclusion” exception might apply: Rather, it is a “pure” testimonial statement to which the “foregone conclusion” exception cannot apply.  See id. (acknowledging as much in dicta); see also United States v. Oloyede, Nos. 17-4102, 17-4186, 17-4191, & 17-4207, — F.3d —, 2019 WL 3432459 (4th Cir. Jul. 31, 2019) (distinguishing between suspect’s typing password into device and giving password to law enforcement).

Furthermore, in both Gelfgatt and Jones, the suspect was not compelled to produce any particular files from the device after decrypting it; that was left to the analyst executing the warrant.  In Jones, the SJC highlighted this aspect of the decryption procedure, observing that “the analysis would have been different” if the suspect had been compelled to produce particular files, because doing so “would implicitly testify to the existence of the files, [the suspect’s] control over them, and their authenticity.”  481 Mass. at 548 n.10.  In that situation, the Commonwealth would have been obligated to prove, beyond a reasonable doubt, that those additional assertions were foregone conclusions before it could obtain a corresponding Gelfgatt order.  Cf. Hubbell, 530 U.S. at 44-45 (act of production is privileged where grand jury subpoena would require recipient to produce documents whose existence and location were previously unknown to government).

  • Cloud-Based Storage

Gelfgatt and Jones both involved a tangible device that was in the physical possession of law enforcement.  But their holdings as to the privilege against compelled self-incrimination can also be applied to a request to compel decryption of a cloud-based digital space.  Such a request would follow the same analysis, with law enforcement required to: (1) have a right to search the cloud location; (2) show beyond a reasonable doubt that the suspect knows the password to access the cloud location, thereby availing itself of the “foregone conclusion” exception; and (3) allow the suspect to input the password in a way that law enforcement does not see or record.

  • Biometric Keys

In both Gelfgatt and Jones, the sought-after “key” was an alphanumeric password.  But a key can also take the form of a biometric such as a facial scan, retinal scan, or fingerprint.  Biometric keys introduce two novel questions:  (1) is compelled biometric decryption properly viewed as a testimonial act of production, and thus within the scope of the privilege against compelled self-incrimination; and, if so, (2) what must law enforcement show is a “foregone conclusion” before it can compel such biometric decryption?

Courts have answered the first question both ways.  Some have viewed the compelled biometric decryption as no different than compelled provision of a traditional physical identifier, and thus nontestimonial.[1]  See, e.g., State v. Diamond, 905 N.W.2d 870, 875-76 (Minn. 2018); In re Search of [Redacted], 317 F. Supp. 3d 523, 535-37 (D.D.C. 2018); In re Search Warrant Application for [Redacted], 279 F. Supp. 3d 800, 803-05 (N.D. Ill. 2017); Commonwealth v. Baust, 89 Va. Cir. 267, 2014 WL 10355635 (Va. Cir. Ct. Oct. 28, 2014).  Others have reasoned that, unlike providing a physical identifier, compelled biometric decryption implies factual assertions about the suspect’s relationship with the device.  See, e.g., Seo v. State, 109 N.E.3d 418 (Ind. App. 2018), vacated and transferred to Ind. Supreme Court, 112 N.E.3d 1082 (Ind. 2018); In re Application for Search Warrant, 236 F. Supp. 3d 1066, 1073-74 (N.D. Ill. 2017); In re Search of a Residence in Oakland, Cal., 354 F. Supp. 3d 1010, 1015-16 (N.D. Cal. 2019); In re Search of White Google Pixel 3 XL Cellphone, No. 1:19-mj-10441, 2019 WL 2082709 at *3-4 (D. Idaho May 8, 2019).  No Massachusetts court has yet issued a published opinion on this issue.

In this author’s view, law enforcement should be prepared for a Massachusetts court to depart from the traditional treatment of compelled provision of a physical identifier, and instead to view compelled biometric decryption as a testimonial act of production.  Compelled provision of a physical identifier has been deemed nontestimonial not because it does not assert facts, but rather because the facts that it does assert are so “self-evident” as to be “[in]sufficiently testimonial for purposes of the privilege.”  Fisher, 425 U.S. at 411 (compelled handwriting exemplar is nontestimonial for purposes of the privilege, despite its asserting both that handwriting belongs to suspect and that suspect is literate); accord Commonwealth v. Nadworny, 396 Mass. 342, 363-64 (1985) (fact that defendant is right-handed, unlike handwriting exemplar itself, is testimonial, although “trivial”).  But, when law enforcement seeks to compel biometric decryption, its object is not merely provision of the biometric standing alone:  If it were, the method of capturing the biometric would not matter, and investigators could just as well take a photograph of the suspect’s face, or ink-and-paper impressions of his fingerprints.  Rather, the object of compelled biometric decryption is the interaction of the biometric, in a pre-programmed fashion, with a particular device.  The successful interaction of biometric and device, in contrast to the biometric standing alone, asserts at least one fact that is neither trivial nor self-evident from the biometric—specifically, that the suspect’s biometric is capable of decrypting the device.[2]  See In re Application for Search Warrant, 236 F. Supp. 3d at 1073.  In other words, compelled biometric decryption asserts facts that are substantially similar to those asserted by compelled decryption using a password.

This reasoning simultaneously answers both the first question of whether compelled biometric decryption should be viewed as a testimonial act of production (it should) and the second question of what law enforcement must establish is a “foregone conclusion” before it can compel such a biometric.  If the assertion implied by the compelled biometric decryption is that the suspect’s biometric is capable of decrypting the device, then, pursuant to Jones, that is what the Commonwealth must prove beyond a reasonable doubt.  As in Jones, the Commonwealth can do so through either direct evidence (e.g., that the suspect actually used his biometric to decrypt the device) or circumstantial evidence (e.g., that the suspect used the device in a manner indicating that he must have had the ability to do so).

As a practical matter, the utility of compelled biometric decryption to law enforcement may be circumscribed.  This is because some biometric-based security technologies—including Apple’s popular fingerprint-based Touch ID—self-disable if, since the last time the device was unlocked, too much time has passed, or the device has been restarted or has lost power, or multiple attempts to unlock the device have been unsuccessful.  See About Touch ID Advanced Security Technology.  In addition, law enforcement may have limited ability both to maintain power to a biometrically locked device and to secure it from network activity (i.e., to minimize the risk of remote wiping or deletion of data).  Perhaps for these reasons, requests to compel biometric decryption in federal practice often have arrived as part of an omnibus search warrant application that also asks for authorization: (1) to seize the device; and (2) to search the device for particular data after it has been seized and decrypted using the compelled biometric.  See, e.g., In re Search of a Residence in Oakland, 354 F. Supp. 3d at 1013; In re Search of [Redacted], 317 F. Supp. 3d at 525-26; In re Search Warrant Application for [Redacted], 279 F. Supp. 3d at 801-02; In re Application for Search Warrant, 236 F. Supp. 3d at 1066-67.
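For readers who want a concrete picture of that self-disabling behavior, the following sketch models it in TypeScript.  The structure and thresholds are this author’s illustration, loosely patterned on Apple’s published description of Touch ID; they are assumptions, not vendor code.

    // Illustrative model of biometric self-disablement.  Thresholds are
    // assumptions loosely based on Apple's published Touch ID behavior.
    interface DeviceState {
      hoursSinceLastUnlock: number;    // time since the device was last unlocked
      restartedOrLostPower: boolean;   // device was rebooted or its battery died
      failedBiometricAttempts: number; // consecutive unsuccessful unlock attempts
    }

    function biometricUnlockAvailable(s: DeviceState): boolean {
      if (s.restartedOrLostPower) return false;         // passcode now required
      if (s.hoursSinceLastUnlock >= 48) return false;   // too much time has passed
      if (s.failedBiometricAttempts >= 5) return false; // too many failed attempts
      return true;                                      // biometric key still usable
    }

On this model, the window for compelled biometric decryption closes automatically unless the device is kept powered and the compulsion occurs promptly after seizure.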

  • Ex Parte Gelfgatt Proceedings

Gelfgatt and Jones each arose in the posture of a motion filed in a criminal case in the Superior Court.  This posture suggests that, in those cases, any evidence contained on the encrypted device was not necessary to support charges against the defendant.  But some investigations will require a compelled decryption before charges can be brought.  It thus seems likely that some Gelfgatt motions will arise in an ex parte posture.

The Appeals Court has already addressed a Gelfgatt motion arising out of a grand jury investigation, concerning a device that the police had previously obtained a warrant to search.  See In re Grand Jury Investigation, 92 Mass. App. Ct. 531 (2017), further appellate review denied, 478 Mass. 1109 (2018).  The Commonwealth filed a sealed Gelfgatt motion in the Superior Court and attached documents containing grand jury evidence that, the Commonwealth argued, satisfied its burden under the “foregone conclusion” exception.  Id. at 532.  The Commonwealth served the motion, but not the attachments, on counsel for the individual whom it sought to compel.  The Appeals Court affirmed the Superior Court’s issuance of a Gelfgatt order, concluding that the attachments showed, among other things, that it was a foregone conclusion that the individual knew the password.  Id. at 534-35; see also Burgess, 426 Mass. at 215-16 (Fifth Amendment applies in same way to grand jury witness/target as to indicted defendant).  The Appeals Court also specifically affirmed the non-disclosure of the attachments to counsel, reasoning that grand jury materials are secret, and that both the Superior Court judge and the appellate court could review the attachments on an ex parte basis.  Id. at 535-36.

It is a small step from In re Grand Jury Investigation to the conclusion that at least some Gelfgatt orders may be sought as part of a search warrant application.  Indeed, search warrant applications bear similarities to the motions sustained in Gelfgatt, Jones, and In re Grand Jury Investigation:  They are ex parte, they rely on affidavits rather than live testimony, and they form an “investigatory tool that aids investigators in obtaining material and relevant evidence related to a defendant’s conduct.”  Jones, 481 Mass. at 559.  As noted, search warrant applications seeking compelled biometric decryption have appeared in federal practice.  See, e.g., In re Search of a Residence in Oakland, 354 F. Supp. 3d at 1015-16; In re Search of [Redacted], 317 F. Supp. 3d at 535-37; In re Search Warrant Application for [Redacted], 279 F. Supp. 3d at 803-05; In re Application for Search Warrant, 236 F. Supp. 3d at 1073-74.  Nonetheless, a search warrant application seeking a Gelfgatt order in state court would entail innovations to Massachusetts search warrant practice that the applicant must be prepared to address.

The applicant must be prepared to show that the evidence sought is of a type for which the Legislature has authorized issuance of a search warrant.  See G.L. c. 276, § 1 (enumerating categories of evidence that may be sought by search warrant).  Compelled biometric decryption likely will fall into that category.  See, e.g., In re Lavigne, 418 Mass. 831, 834-35 (1994) (statute authorizes use of warrant to procure bodily sample from suspect); cf. In re Search of [Redacted], 317 F. Supp. 3d at 540 n.13 (declining to decide whether Fed. R. Crim. P. 41 authorizes issuance of warrant to compel biometric decryption, and instead issuing warrant under All Writs Act, 28 U.S.C. § 1651).  Compelled decryption using a password, on the other hand, might not.

The applicant must also be prepared to show that the application does not trigger an adversarial hearing, which the SJC has required as a prerequisite for issuance of warrants for some especially invasive searches.  E.g., Lavigne, 418 Mass. at 835 (warrant to extract blood sample from suspect must be preceded by adversarial hearing at which court can weigh intrusiveness of procedure against need for evidence); Commonwealth v. Banville, 457 Mass. 530, 539-40 (2010) (warrant to obtain suspect’s DNA using buccal swab would have been preceded by adversarial hearing if it had occurred in Massachusetts).  So long as compelled biometric decryption “[does] not involve penetration into [the suspect’s] body,” Banville, 457 Mass. at 539 n.2, it likely will not trigger such a hearing.  See also Commonwealth v. Miles, 420 Mass. 67, 83 (1995) (ex parte order compelling suspect to appear and have his body inspected for poison ivy need not be preceded by hearing).

The applicant must take care to particularly identify the person whose biometric is to be compelled, perhaps by including a photograph and/or detailed physical description of that person in the warrant application papers.  This stems in part from the “particularity” requirement applicable to any search warrant.  See G.L. c. 276, § 2.  It also follows from this author’s view (above) that compelled biometric decryption may be analyzed under the “foregone conclusion” exception to the “act of production” privilege: If that view is accepted, the identity of the person whose biometric is to be compelled would form one aspect of the “foregone conclusion” that, under Jones, the Commonwealth must prove beyond a reasonable doubt.  The need for particularity in identifying the person whose biometric is to be compelled likely precludes law enforcement from obtaining a warrant to compel “any person present” at the warrant execution to apply his/her biometrics to a device.  Cf. In re Search of a Residence in Oakland, 354 F. Supp. 3d at 1014 (denying such authorization); In re Application for Search Warrant, 236 F. Supp. 3d at 1068-70 (same).

And the applicant should be explicit about the different burdens it must sustain to obtain such a warrant.  That a crime has occurred and that evidence related to the crime reasonably may be expected to be found in a particular place—requirements for issuance of any search warrant—need be demonstrated only to the level of probable cause.  That it is a foregone conclusion that a particular person’s biometric is capable of decrypting the device, however, must be demonstrated beyond a reasonable doubt in accordance with Jones.[3]  The applicant should consider explicitly articulating the applicable burdens in the warrant application papers, for the benefit of the reviewing judicial officer.

 

Eric A. Haskell is an Assistant Attorney General and a member of the BBJ Board of Editors. This article represents the opinions and legal conclusions of its author and not necessarily those of the Office of the Attorney General. Opinions of the Attorney General are formal documents rendered pursuant to specific statutory authority.

[1] To ensure that even the act of placing a finger on the screen of a device does not disclose the suspect’s thoughts, the orders in some of those cases have required the police—not the suspect—to select the finger that the suspect must place on the screen.  See In re Search of [Redacted], 317 F. Supp. 3d at 537, 539; In re Search Warrant Application for [Redacted], 279 F. Supp. 3d at 804.

[2] It also strongly implies that the suspect was the person who previously programmed the device to decrypt in response to his biometric; unlike an alphanumeric password, a biometric is unique and non-transferable.  Contrast Jones, 481 Mass. at 547 n.8 (suspect’s knowledge of password to device does not necessarily imply that he owns or controls device, because password can be transferred between persons).

[3] An additional showing might be required to authorize the suspect’s temporary detention for the purpose of compelling his biometric, although that showing may well be subsumed by the two discussed in the body text.  See Hayes v. Florida, 470 U.S. 811, 816-17 (1985) (holding that police cannot transport suspect to station for fingerprinting without probable cause or prior judicial authorization, but suggesting that seizure of suspect in field for fingerprinting may be permissible based on less than probable cause in some circumstances); see also In re Search of [Redacted], 317 F. Supp. 3d at 532-33 (applying Hayes to authorize warrant to detain person for compelled biometric decryption if: “(1) the procedure is carried out with dispatch and in the immediate vicinity of the premises to be searched, and if, at time of the compulsion, the government has (2) reasonable suspicion that the suspect has committed a criminal act that is the subject matter of the warrant, and (3) reasonable suspicion that the individual’s biometric features will unlock the device, that is, for example, because there is a reasonable suspicion to believe that the individual is a user of the device”); cf. Commonwealth v. Catanzaro, 441 Mass. 46, 52 (2004) (search warrant implies authority to detain occupants of premises while search is conducted).


Look Before You Click: The Enforceability of Website and Smartphone App Terms and Conditions

by Kevin Conroy and John Shope

Legal Analysis

Modern technology allows individuals to conduct an ever-increasing number of activities through websites and internet-connected smartphone apps.  The proprietors of those platforms frequently make their use subject to terms and conditions, some of which—such as arbitration clauses, forum selection clauses, waivers, licenses, and indemnification provisions—carry potentially significant legal consequences.  Most users will not have read the terms and, in some instances, may not have even seen the terms or any reference to them.  Do these terms amount to an enforceable contract?  In at least some circumstances, the answer may be “no.”  Answering the question in particular cases involves fact-intensive analysis and potential evidentiary challenges.  Businesses offering such platforms, and their counsel, should be aware of these complexities and take precautions to maximize the likelihood that courts will enforce their terms.

The First Circuit and the Massachusetts Appeals Court have addressed this issue in cases involving the terms and conditions of a ride-sharing app and an email account.  See Cullinane v. Uber Techs., Inc., 893 F.3d 53 (1st Cir. 2018); Ajemian v. Yahoo!, Inc., 83 Mass. App. Ct. 565 (2013).  In each case, the court concluded that users were not bound by the terms and conditions.  Cullinane, 893 F.3d at 64; Ajemian, 83 Mass. App. Ct. at 575-76.  Both courts employed a two-part test to assess whether the terms at issue amounted to an enforceable contract, asking: (1) whether the terms were “reasonably communicated” to the user, and (2) whether the terms were accepted by the user.  Ajemian, 83 Mass. App. Ct. at 574-75; Cullinane, 893 F.3d at 62 (citing Ajemian).  This two-part test is consistent with the approach taken by other courts around the country.  E.g., Meyer v. Uber Techs., Inc., 868 F.3d 66, 76 (2d Cir. 2017) (applying California law and articulating the test on a motion to compel arbitration as whether “the notice of the arbitration provision [contained in the terms] was reasonably conspicuous and manifestation of assent unambiguous as a matter of law.”).

A Spectrum of User Interfaces

Analysis of whether the requirements of “reasonable communication” and “acceptance” are satisfied begins with the interface presented to the user.  While the possible variations are endless, interface designs tend to fall within three general categories, often referred to as “clickwrap,” “browsewrap,” and “sign-in-wrap” (sometimes called “hybridwrap”).  In “clickwrap” interfaces, the user is required to take a distinct, affirmative action to indicate assent to the terms, such as checking a box or clicking a button stating “I agree.”  Courts considering this category of interface generally have little trouble finding the necessary notice and assent.  E.g., Wickberg v. Lyft, Inc., 356 F. Supp. 3d 179, 184 (D. Mass. 2018).
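To make the clickwrap pattern concrete, the sketch below (in TypeScript) gates the button to proceed on a separate, affirmative act of assent.  The element IDs and wording are hypothetical, offered only as an illustration of the design courts describe.

    // Minimal "clickwrap" sketch: the button to proceed stays disabled until
    // the user performs the distinct, affirmative act of checking the box.
    // Element IDs ("agree-box", "signup-button") are hypothetical.
    const agreeBox = document.getElementById("agree-box") as HTMLInputElement;
    const signupButton = document.getElementById("signup-button") as HTMLButtonElement;

    signupButton.disabled = true; // no assent yet, so the user cannot proceed

    agreeBox.addEventListener("change", () => {
      // Checking the box is the affirmative act that signals assent;
      // proceeding is impossible without it.
      signupButton.disabled = !agreeBox.checked;
    });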

On the other end of the spectrum is “browsewrap,” where a user receives notice of the terms only by means of a link at the bottom of the webpage (often undifferentiated from other links) or buried in the menus or settings of an app.  A typical browsewrap interface does not offer any notice outside of the terms themselves that the user is purportedly agreeing to be bound.  Nor does it offer the user any reason to follow the link and read the terms.  Courts generally find that browsewrap interfaces do not create enforceable agreements.  See Ajemian, 83 Mass. App. Ct. at 576 (“[W]e have found no case where [a forum selection clause] has been enforced in a browsewrap agreement”).

The question becomes more complicated and fact-intensive in the case of “sign-in-wrap” interfaces, where the user is informed that signing in, creating an account, or taking some other specified action (but not an action distinct from the user’s intended use of the website or app) will signify assent to the terms, which are often available by following a link within or adjacent to the text of the notice.  In such cases, the enforceability of the terms depends on how clearly the interface design notifies the user that he or she will be bound by taking the specified action.  Compare Cullinane, 893 F.3d at 64 (finding that the design of Uber’s account creation interface did not provide adequate notice to user) with Meyer, 868 F.3d at 79 (assessing a different version of Uber’s account creation interface and finding that the design did provide adequate notice).

The Importance of Good Design

Several common design features of “sign-in-wrap” interfaces have received judicial attention in determining issues of enforceability.  While courts do not demand perfection, incorporating multiple design features that promote notice of the terms and make clear the user’s manifestation of assent will increase the likelihood that the terms will be enforced.

The size and color of the language informing the user that proceeding will signify agreement to the terms, and of the link to the terms, are clearly important.  Making these elements as large as other elements on the screen (preferably larger) and in a color that contrasts with the background so as to promote their readability will bolster the argument that the terms were reasonably communicated to the user.  A perception that the notice or link is hidden in tiny or otherwise difficult-to-read font may cause a court to find that the user did not have adequate notice.  Compare Meyer, 868 F.3d at 78-79 (enforcing terms where text notifying user that creating account would signify assent to the terms, although small, was clearly visible, in contrasting color on an uncluttered screen) with Cullinane, 893 F.3d at 62-64 (holding terms unenforceable in part because the notification appeared in a dark gray, small, non-bold font on a black background and because the screen contained many other elements in equal or larger font size).

The design of the interface should also make obvious to the user that the full content of the terms is available to read by following a link.  See Cullinane, 893 F.3d at 63 (questioning “whether a reasonable user would have been aware that the gray rectangular box was actually a hyperlink”).  Although blue underlined text may be the quintessential indicator of a hyperlink, other appearances may also be adequate, so long as they are sufficiently differentiated from the surrounding text.  E.g., Wickberg, 356 F. Supp. 3d at 181 (pink, non-underlined link); Selden v. Airbnb, Inc., No. 16-cv-00933 (D.D.C. Nov. 1, 2016) (red, non-underlined links).

The placement of the notice and link is also important.  If the notice and link appear above the button a user clicks to proceed, a user reading from top to bottom would encounter these elements, and have an opportunity to investigate the linked terms, before encountering the button to proceed.  Courts have also enforced terms where the notice and link are placed below, but in reasonable proximity to, the relevant button.  Compare Meyer, 868 F.3d at 78 (finding that placement of the notification text and link directly below the relevant button, immediately visible without any scrolling, contributed to enforceability of terms) with Specht v. Netscape Communs. Corp., 306 F.3d 17, 23 (2d Cir. 2002) (not enforcing terms where reference to the terms would have been visible “only if [the user] had scrolled down to the next screen”); see also McKee v. Audible, Inc., No. CV 17-1941-GW(Ex), 2017 U.S. Dist. LEXIS 174278, at *27-28 (C.D. Cal. July 17, 2017) (placement of notice and link to terms at the bottom of the screen “approximately 30-40% of the screen’s length below” the button to proceed, separated by a horizontal line, contributed to inadequate notice).

Placing the notice and link below the relevant button creates another potential obstacle to enforcement of the terms:  if the screen prompts the user to enter information such as a username, password, or email address, users on a smartphone or tablet may see a software keyboard appear on the screen when they begin to enter the requested information.  Because this software keyboard generally appears at the bottom of the screen, it may obscure the notice and link.  At least one court has found that this contributed to a lack of the necessary notice, see McKee, 2017 U.S. Dist. LEXIS 174278, at *27-28, although it is reasonable to argue that what matters is what the user sees before he or she engages the keyboard.

Courts also give attention to the particular words used to inform the user that proceeding will signify assent to the terms.  If the user is not required to take any action to assent to the terms other than the actions inherent in the ordinary use of the website or app (such as signing in or creating an account), the consequences of that action should be clear to the user.  One way to accomplish this is to match the language of the notice to the action the user takes.  For example, if the user is required to click a button labelled “Create Account,” the notice should inform the user that “by clicking ‘Create Account’ you indicate acceptance of our terms and conditions.”  Where the words used for the notice do not parallel the description of the action, a court may question whether it is sufficiently clear to a user that he or she is assenting to the terms by taking that action.  See, e.g., TopstepTrader, LLC v. OneUp Trader, LLC, No. 17 C 4412 (N.D. Ill. Apr. 18, 2018) (declining to enforce terms where user clicked a button labelled “Sign Up,” accompanied by a statement reading “I agree to the terms and conditions,” because the website “gave the user no explicit warning that by clicking the ‘Sign Up’ button, the user agreed to the [t]erms”); see also McKee, 2017 U.S. Dist. LEXIS 174278, at *22-23 (identifying lack of parallel wording as a factor weighing against enforcement of the terms); but see Meyer, 868 F.3d at 80 (“Although the warning text used the term ‘creat[e]’ instead of ‘register,’ as the button was marked, the physical proximity of the notice to the register button and the placement of the language in the registration flow make clear to the user that the linked terms pertain to the action the user is about to take.”).
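One way to guard against such drift in practice is to generate the notice text from the button label itself, so the warning always names the exact action the user will take.  The following sketch illustrates the idea; the function and names are hypothetical, not drawn from any case.

    // Sketch: derive the assent notice from the button label so the notice
    // and the labelled action can never fall out of parallel.
    function assentNotice(buttonLabel: string, termsUrl: string): string {
      return `By clicking "${buttonLabel}", you agree to our ` +
             `<a href="${termsUrl}">Terms and Conditions</a>.`;
    }

    const BUTTON_LABEL = "Create Account"; // must match the rendered button
    document.getElementById("notice")!.innerHTML =
      assentNotice(BUTTON_LABEL, "/terms");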

Finally, the timing and context in which the terms are presented can also contribute to the enforceability of the terms.  Several courts have observed that, where the terms are presented in conjunction with a purchase or the creation of an account involving a transactional relationship, an average user is more likely to understand that the transaction or relationship will be subject to the terms.  See Meyer, 868 F.3d at 80 (“The transactional context of the parties’ dealings reinforces our conclusion.”); Selden v. Airbnb, Inc., No. 16-cv-00933 (D.D.C. Nov. 1, 2016) (“The act of contracting for consumer services online is now commonplace in the American economy.  Any reasonably-active adult consumer will almost certainly appreciate that by signing up for a particular service, he or she is accepting the terms and conditions of the provider.”).

Litigation Challenges

Litigating the question of whether a user is bound by the terms of a website or app can present challenges beyond analyzing the interface type and design choices.  Because the party seeking to enforce the terms bears the burden to prove adequate notice and manifestation of assent, that party (often the proprietor of the website or app) will need to present evidence of what the user actually saw and did.  Where that party is seeking to enforce an arbitration or forum selection clause, it will likely want to satisfy this burden early in the case, before conducting discovery.

The proponent of the terms thus should maintain records of when the user accessed the website or app and what it looked like at those times.  Because websites and apps are occasionally redesigned, and terms are occasionally updated, simply presenting screenshots of the current version of the website or app is unlikely to satisfy the burden of establishing what the user saw and did.  Instead, the proponent of the terms must be prepared to establish when the user took the relevant action on the website or app, what the operative version of the website or app looked like at that time, and which version of its terms was presented to the user.  Providing such evidence may be particularly challenging depending on the amount of time that has passed and the ability of the proponent to access or recreate historic versions of the website or app.
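In practice, that means capturing a durable record at the moment of assent.  The sketch below (in TypeScript) shows the kind of record a proponent of online terms might retain; the field names are this author’s illustration, not requirements drawn from any case.

    // Sketch of an assent record retained to later prove what a particular
    // user saw and did.  Field names are illustrative only.
    interface AssentRecord {
      userId: string;
      actionTimestamp: string;  // ISO 8601 time of the assent action
      action: string;           // e.g., clicked the "Create Account" button
      termsVersion: string;     // which version of the terms was linked
      interfaceVersion: string; // which build/design of the screen was shown
      device: string;           // user agent, platform, and screen size
    }

    function recordAssent(record: AssentRecord, store: AssentRecord[]): void {
      // In production, this would be written to durable, auditable storage
      // retained for at least the relevant limitations period.
      store.push(record);
    }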

Presenting evidence of how the interface appeared to a particular user may be further complicated if the appearance varied based on the device used to access it.  A website, for example, may appear differently when viewed on a laptop or desktop computer screen than when viewed on a smartphone.  The differing screen size may affect what is immediately visible to the user without scrolling and the relative conspicuousness of the notice and terms vis-à-vis other elements.  In the case of smartphone users, there might also be meaningful variation in the appearance of the interface depending on the size of the phone used.  See, e.g., Cullinane, 893 F.3d at 56 n.3 (noting the 3.5 inch screen size of the iPhone used to access the app in question and reproducing the screenshots in the opinion to correspond to that size).  Inability to identify the device used could prevent early enforcement of an arbitration clause or forum selection clause and require further discovery.  Conversely, the party challenging the terms might argue insufficient notice by offering competing evidence as to what he or she saw when using the website or app.  For example, even if the proponent can establish that the user accessed the website on a desktop computer, the user may have done so in a browser window that occupied less than the full screen, changing the appearance of the interface and potentially the adequacy of the notice.  A user will not, however, avoid enforcement of the terms simply by asserting that he or she does not recall seeing notice of the terms or did not read the terms.

Conclusion

Given the potential consequences of enforcement of terms, such as application of an arbitration clause foreclosing a class action, challenges to enforcement will likely continue to arise.  Prudent counsel will guard against such challenges by recommending careful design choices and electronic records retention.

John A. Shope is a partner in the Boston office of Foley Hoag, where he specializes in class action defense, consumer law, and commercial arbitration. He also serves as an arbitrator for the AAA and CPR.

Kevin J. Conroy is a litigation associate in the Boston office of Foley Hoag. Kevin focuses on complex business disputes and shareholder disputes.


A Signed Text Message Can Result in a Binding Real Estate Contract


by Peter F. Carr, II

Practice Tips

The commonplace reliance upon and acceptance of text messaging in commercial dealings has forced courts to examine the legal implications of texting under the long-standing rule that a contract concerning real estate shall not be enforced “[u]nless the promise, contract or agreement upon which such action is brought, or some memorandum or note thereof, is in writing and signed by the party to be charged therewith or by some person thereunto by him lawfully authorized.” G.L. c. 259, § 1, Fourth. At the trial court level, courts have embraced the concept of “contract by text message” for real estate so long as additional key elements are established. One, the text message must either contain or incorporate by express reference all material terms of an agreement concerning land. Two, the text message must conclude with the signature of the “party to be charged” or its authorized agent. A formal signature or even a complete first and last name is not required. However, the sequencing is critical. The cases to date largely have turned on whether the name of the sender appears at the end of the text message to signify the authentication of its preceding substance. The text message is sufficiently signed and binding provided that it concludes with a “mark” to indicate that the sender adopts the message.

Recent cases from the Land Court bear out these core concepts. In a commercial real estate dispute that hinged on text message exchanges between the parties’ brokers, a judge denied a special motion to dismiss a lis pendens that was issued in favor of the plaintiff buyer seeking to enforce a sales contract. St. John’s Holdings LLC v. Two Electronics, LLC, 24 LCR 190, 16 MISC 000090 (RBF), 2016 WL 1460477 (Mass. Land Ct. April 14, 2016), aff’d, 92 Mass. App. Ct. 1114 (2017). Although the defendant seller ultimately prevailed at trial, the Court nonetheless held that the text message of the seller’s broker satisfied the signed writing requirement of the Statute of Frauds. The text message incorporated by reference the final letter of intent for the purchase and sale of the property following ongoing negotiations between the parties. The contract was deemed signed and accepted by the seller when the seller’s broker concluded the text message with the inclusion of his first name. The Court held, “In the context of these exchanges between the parties, the court infers that the text message sent by [Tim, the seller’s broker] was intended to be authenticated by his deliberate choice to type his name at the conclusion of his text message.”  In another case, the Court similarly found compliance with the Statute of Frauds because “[t]he broker’s writing her first name ‘Laurie’ at the end of the text message constitutes a signature for the purpose of the Statute of Frauds.” However, the Court ruled that the text message did not contain or incorporate sufficient material terms to form a contract. Fiore v. Lindsey, No. 17 MISC 000533 (RBF), 2017 WL 5969332 (Mass. Land Ct. Nov. 29, 2017). In contrast, and underscoring the critical nature of the sequencing, text exchanges between brokers did not satisfy the Statute of Frauds where none of the operative texts concluded with the names of the brokers, even though those names appeared in the bodies of the messages. Donius v. Milligan, 24 LCR 440, 443, No. 16 MISC 00277 (HPS), 2016 WL 3926577 (Mass. Land Ct. July 25, 2016). The Court denied relief because the “text messages here are not signed by either the proposed buyer or seller, nor are they signed by the agents.” In addition, the Court ruled that the substance of the text messages evidenced mere negotiations.

Although no appellate court has yet squarely addressed text messages in the context of the Statute of Frauds, the prior appellate decisions affirming the binding nature of informal email exchanges, coupled with the expanding usage of electronic communications, arguably signal that reviewing courts are likely to embrace the theories established at the trial court level. Accordingly, to avoid being bound to an agreement involving land that may never have been intended, parties should insist upon more formal means of communicating with clear documentation, at least as negotiations proceed. A party involved in a transaction may be wise to limit or eliminate all text messaging with a counterparty during the course of negotiations, or to include written disclaimers to memorialize that text messages will not be accepted as part of a deal. At a minimum, parties must avoid a course of conduct which creates the presumption that a text message is sufficient to express offer and acceptance. Otherwise, as the judge observed in St. John’s Holdings, “a text message, all too familiar to most teenagers and their parents, can constitute a writing sufficient under the Statute of Frauds to create an enforceable contract for the sale of land.”

Peter F. Carr, II is a member of the litigation department of Eckert Seamans Cherin & Mellott, LLC, a regional law firm, practicing out of the Boston office since joining the firm in 1995 after completing a clerkship with the Massachusetts Appeals Court assigned to former Chief Justice Joseph Warner. Peter’s daily practice covers a wide variety of business counselling and commercial litigation matters and includes substantial trial experience. Peter served as trial and appellate counsel in the Land Court case of St. John’s Holdings referenced above.


The Coming Age of Artificial Intelligence: What Lawyers Should Be Thinking About


by José P. Sierra

The Profession

During the Spring and Fall legal conference seasons, emails addressing “data breaches,” “improving cyber defenses,” and “what you (or your general counsel/board) need to know about cyber security/insurance,” hit our inboxes on an almost weekly basis.  Although it took some time, everybody now wants a slice of hot “cyber” pie, and law firms have been quick to jump on the cyber security bandwagon and form cyber-practices.  What hasn’t gotten the same rapt attention of conference organizers, tech vendors, and the legal community is the coming age of artificial intelligence, or “AI.”  There are at least two reasons for this.  First, although large-scale deployment of self-driving cars is just over the horizon, most of the bigger, life-changing AI products are still years away. Second, most laypeople (including lawyers) do not understand what AI is or appreciate the enormous impact that AI technology will have on the economy and society.  As a result, those in the “vendor” community (which includes lawyers) have yet to determine how their clients and their clients’ industries will be affected, and how they themselves can profit from the AI revolution.

AI and What It Will Mean for Everyone

AI may be defined as a machine or supercomputer that can simulate human intelligence by acquiring and adding new content to its memory, learning from and correcting its prior mistakes, and even enhancing its own architecture, so that it can continue to add content and learn.  A few years ago, AI development and its celebrated successes were limited to machines out-playing humans in games (e.g., IBM’s Deep Blue beating world chess champion Garry Kasparov, and IBM’s Watson winning on the quiz show Jeopardy!).  And while most of us are now familiar with “intelligent” assistants like “Siri,” “Alexa,” and other “smart” devices, for the average person, the full import of AI’s capabilities and potential hasn’t been grasped (though the advent of autonomous cars has given us some glimpse of things to come).

Already, AI can do many things that people can do (and in some cases better).  In addition to driving cars, AI can detect and eliminate credit card payment fraud before it happens, trade stocks, file insurance claims, discover new uses for existing drugs, and detect specific types of cancer.  Then there is the work that most people think can be done only by humans, but which AI can do today, including: (1) predicting the outcome of human rights trials in the European Court of Human Rights (with 79% accuracy); (2) doing legal work – numerous law firms have “hired” Ross, an AI system built on IBM’s Watson technology, to handle a variety of legal tasks, including bankruptcy work, M&A due diligence, contract review, etc.; and, more disturbing than possibly replacing lawyers, (3) engaging in artistic/creative activities, like oil canvas painting, poetry, music composition, and screenplay writing.  In short, almost no realm of human endeavor – manual, intellectual, or artistic – will be unaffected by AI.

What AI Will Mean for Lawyers

Some legal futurists think that AI simply will mean fewer jobs for lawyers, as “law-bots” begin to take over basic tasks.  Other analysts focus on the productivity and cost-savings potential that AI technology will provide.  Two other considerations of the impending AI revolution merit discussion:  revenue opportunities and the role lawyers can and should play in shaping AI’s future.

How AI May Shape the Legal Economy

Some of the most profitable practice areas in an AI-driven economy are likely to be:

  • Patent Prosecution and Litigation. This one should be obvious and already has taken off.  Fortunes will be made or broken based on which companies can secure and defend the IP for the best AI technologies.
  • M&A. Promising AI start-ups with good IP will become targets for acquisition by tech-giants and other large corporations that want to dominate the 21st century economy.
  • Antitrust. Imagine that Uber, once it has gone driverless, decides to buy Greyhound and then merges with Maersk or DHL shipping, which then merges with United Airlines.  How markets are (re)defined in an AI-driven economy should keep the antitrust bar very busy.
  • Labor and Employment. AI technology has the potential to disrupt and replace human labor on a large scale.  To take just one example, in an AI-created driverless world, millions of car, taxi, bus, and truck drivers will find themselves out of work.  What rights will American workers have when AI claims their jobs?  How will unions and professional organizations protect their members against possible long-term unemployment?  Labor and employment lawyers will be at the forefront of labor re-alignment issues.
  • Tax. If AI reduces the human labor pool, as expected, and there is a corresponding loss in tax revenue, the tax code will most likely need to be revised, which will mean new strategies for the tax bar.
  • Cyber-law/compliance. The importance of protecting IP, proprietary, and confidential information, and the legal exposure of not doing so, will be even greater in the higher-stakes world of AI.
  • Criminal Defense. Will AI help law enforcement solve crimes?  Will AI be used to commit crimes?  If so, both prosecutors and the defense bar will be busy prosecuting and representing more than the typical criminal defendant.

How Lawyers May Help Shape AI

Is there a role for the legal profession in the coming AI age other than helping our clients adapt to a “brave new world”? In my view, lawyers should play a necessary and leading role.  For if AI has the potential to affect every industry and occupation and permanently eliminate jobs along the way, society’s leaders cannot afford to leave the decisions about which AI technologies will be developed in what industries (and which ones won’t) to sheer market forces.  Private industry and investors are currently making these decisions based on one overarching criterion — profit — which means everything is on the table.  Although that approach propelled the industrial and digital revolutions of the last two centuries, jobs lost to those revolutions were eventually replaced by higher-skilled jobs.  For example, teamsters who drove horse-drawn wagons were replaced by modern teamsters, i.e., truck drivers.  That won’t be the case following an AI revolution.  The ultimate question, therefore, in the coming AI century is what areas of human endeavor do we, as a society, want to keep in human hands, even if such endeavors can be accomplished faster, cheaper, and better by AI machines?  As the profession responsible for protecting society’s interests through law and policy, lawyers cannot afford to take a back seat to the free-for-all development of AI, but instead must lead and help shape the AI century to come.

José P. Sierra is a partner at Holland & Knight. He focuses his practice in the areas of white collar criminal defense, healthcare fraud and abuse, pharmaceutical and healthcare compliance, and business litigation.


Making Sense of the Internet of Things

by Peter M. Lefkowitz

The Profession

We have seen the marketing. According to a recent report by a top consulting firm, the Internet of Things will have an annual economic impact of between $4 trillion and $11 trillion by 2025.  Another firm has announced that there will be 50 billion internet-connected devices globally by 2020.  And companies already have rebranded in grand fashion, declaring the arrival of “Smart Homes,” “Smart Cities,” the “Smart Planet,” the “Industrial Internet” (the contribution of the author’s company), and even the “Internet of Everything.”  We also have seen the reality of Fitbits that record our activity and suggest changes to our exercise and sleep patterns, cars that accept remote software updates, and airplane engines that communicate maintenance issues from the tarmac.  For all of this potential, and even greater claimed potential, our shared late-night admission is that none of us has a well-defined picture of what, precisely, the Internet of Things is or does.

This combination of wide promise and shared confusion is not a trivial matter.  Companies are setting long-term strategy based upon Jetsons-like glimmers of the future; consumer expectations and fears are being set in an environment of rapidly-evolving offerings; and — most critically for attorneys providing advice to clients considering investments in this area — legislators and regulators are being asked to set legal and enforcement frameworks without a clear picture of the future product landscape or whether products still in their infancy will create anticipated harm.  In order to advise properly in this area, and to avoid regulatory frameworks getting far ahead of actual product development, it is important that lawyers appreciate the scope of Internet of Things technology and the policy implications of internet-connected goods and the data they create and use.

So what is the Internet of Things?  Simply put, the Internet of Things, or IoT, is the set of devices that connect to and send or receive data via the internet, though not necessarily the devices people most often think of as being connected to the internet.  In the consumer world, IoT includes smart meters that measure home energy use, refrigerators that can report back on maintenance needs or whether the owner needs more eggs, and monitors that can record blood sugar results and communicate via Bluetooth to a connected insulin pump.  It also increasingly includes cars that sense other cars in close proximity and record and report on driver speed, location and music listening choices.  And in the industrial space, offerings include an array of sensors and networks that measure and manage the safety and efficiency of oil fields or the direction, speed and service life of wind turbines and airplane engines; X-ray and CT machines with remote dose monitoring; and badge-based radio-frequency identification systems that analyze whether medical providers are washing their hands in the clinical setting, and the resulting impact on infection rates.  The definition generally does not include computers, tablets and other computing devices, although, with smartphone apps now measuring movement and heart rate and reading bar codes to compare prices at local retailers, one could argue that the iPhone and Android phone are the Swiss Army knives of personal internet-based data collection and use.  In turn, IoT devices generate large sets of sensor-based data, or Big Data, which can be aggregated and analyzed to generate observations about the world around us and to improve products and services in the healthcare, energy, transportation and consumer industries.

These developments have not been lost on government.  The White House has commissioned two major studies on the potential of Big Data.  The Federal Trade Commission held a full-day workshop to discuss IoT in the home, in transportation and in healthcare, and FTC staff subsequently issued a comprehensive report discussing the benefits and risks of IoT.  Branches of the European Commission are encouraging companies to establish European research and development footholds for internet-based devices.  The European Commission has cited the development of internet-based devices and the prospect of a Digital Single Market as inspirations for the anticipated replacement of the European Data Protection Directive.  And European Data Protection Commissioners have boldly asserted their authority, declaring that, in light of the risk presented by sensor-based devices, “big data derived from the internet of things . . . should be regarded and treated as personal data” under European data privacy law.  Unfortunately, the Commissioners did not distinguish industrial uses, such as wind turbines and oil wells, from consumer goods that actively collect personal information.

The FTC staff report noted above summarizes many of the practical and policy challenges presented by emerging IoT technologies, as well as the views of advocates for industry and for consumers.  Security is, for many, the most compelling issue.  Internet-connected devices must collect data accurately; those data sets must be communicated securely to data centers; and devices and back-end computing systems must be protected against hackers, both to safeguard the data collected from devices and to protect the networks and the devices themselves against hijacking.  Recent stories of rogue engineers using laptops to break into parked cars and to control car brakes remotely, and the dystopian nightmare of a hacked pacemaker on the TV drama Homeland, have not helped to allay these concerns.  The risk is compounded by the prospect of “big data warehouses” that can store and analyze zettabytes of data in support of technological breakthroughs.
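What “communicated securely” means in practice can be made concrete with a short sketch.  The following Python fragment is offered only as an illustration: it shows an IoT device posting a sensor reading to a data center over a TLS connection that verifies the server’s certificate.  The endpoint URL and payload fields are hypothetical, and a production device would also need device-side authentication, credential management and hardened back-end systems.

```python
import json
import ssl
import urllib.request

# Hypothetical ingestion endpoint; a real deployment would use its own.
ENDPOINT = "https://ingest.example.com/v1/readings"

def send_reading(device_id: str, glucose_mg_dl: float) -> int:
    """Send one sensor reading over TLS and return the HTTP status code."""
    payload = json.dumps({
        "device_id": device_id,
        "glucose_mg_dl": glucose_mg_dl,
    }).encode("utf-8")

    # create_default_context() turns on certificate and hostname
    # verification, so the device refuses to talk to an impostor server.
    context = ssl.create_default_context()

    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        return response.status
```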

Separately, there is the question of notice and consent for the collection and use of IoT data.  As the FTC staff report notes, it is significantly easier to provide notice about a company’s data practices on a computer screen than on a piece of medical equipment or in a friend’s car that already is collecting and reporting a wide array of data.  The problem is compounded in industrial settings where, for example, passenger weight is analyzed to optimize airplane engine function, or where data sets from and surrounding an MRI machine are communicated both to the hospital network to read the scan and to the device manufacturer to facilitate maintenance and product improvement.

Other questions abound.  Will data from an internet-connected device be used for unanticipated purposes, such as compiling sweeping consumer medical or credit reports, without the consumer being able to learn what is being done or how to correct or delete the data?  Will providers use data to discriminate improperly, or will better use of data create a more level playing field, facilitating new services at lower prices for a wider swath of consumers?  And are some of these issues already addressed by existing regulatory frameworks such as HIPAA and the Fair Credit Reporting Act, by related standards such as the Payment Card Industry security rules, or by the extensive regulatory frameworks governing security and data use for government contractors, transportation providers and energy providers?

In turn, certain baselines have emerged.  First, “security by design” and “privacy by design,” the practices of building security and privacy protections into the development lifecycle of goods and networks, are essential.  These requirements become even more compelling in light of the Third Circuit’s recent decision in FTC v. Wyndham Worldwide Corp., holding, among other things, that the FTC has authority to bring claims alleging “unfairness” based on a company’s purported failure to properly secure its networks and data.  Second, companies collecting data from IoT devices must carefully consider how much data they need, whether the data can be de-identified to minimize privacy risk, whether the data will be aggregated with other data sets, and whether consumer choice is needed before specific uses are made of the resulting data set.  And third, in light of privacy and national security laws around the world, including recent data localization and national security laws in Russia and China, companies will need to evaluate where data is transferred globally and where to locate the associated databases, and possibly even their global computing, service and engineering staff.
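De-identification, too, is easier to grasp with a concrete sketch.  The Python fragment below is purely illustrative, with hypothetical field names: it pseudonymizes a car’s device identifier with a keyed hash, coarsens its GPS fix, and drops fields the analysis does not need before the record is retained.  A real de-identification program would go much further, including an assessment of re-identification risk.

```python
import hashlib
import hmac

# Hypothetical secret held by the data custodian; rotating or discarding
# the key strengthens the pseudonymization.
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def de_identify(record: dict) -> dict:
    """Return a minimized copy of an IoT record."""
    # A keyed hash replaces the raw device ID with a stable pseudonym
    # that cannot be reversed without the key.
    pseudonym = hmac.new(
        PSEUDONYM_KEY, record["device_id"].encode("utf-8"), hashlib.sha256
    ).hexdigest()

    return {
        "device": pseudonym,
        # Rounding to two decimal places keeps location at roughly
        # neighborhood scale rather than a street address.
        "lat": round(record["lat"], 2),
        "lon": round(record["lon"], 2),
        "speed_mph": record["speed_mph"],
        # The owner's name and music choices are simply not retained.
    }

print(de_identify({
    "device_id": "VIN-1HGCM82633A004352",
    "owner": "Jane Driver",
    "lat": 42.358043,
    "lon": -71.060415,
    "speed_mph": 31.4,
}))
```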

Much of the promise and peril of the Internet of Things and Big Data lies in the future.  Google and Dexcom, a maker of blood sugar monitoring devices, recently announced an initiative to develop a dime-sized, cloud-based disposable monitor that would communicate the real-time glucose values of diabetes patients directly to parents and medical providers.  No release date has been announced, although recent advances in remote monitoring give reason for optimism.  And the journal Internet of Things Finland recently published an article announcing a proof of concept for a “wearable sensor vest with integrated wireless charging that . . . provides information about the location and well-being of children, based on received signal strength indication (RSSI), global positioning system (GPS), accelerometer and temperature sensors.”

Thus far, rule-making has focused on security standards for connected devices and related computing networks.  The FDA has issued detailed security guidance for connected devices and systems, and the Department of Defense has issued security standards for contractors that include an expansive definition of government data subject to coverage under the U.S. Department of Commerce’s NIST 800-171 standard for protecting sensitive federal information.  However, there has not been a push in the U.S. for comprehensive legislation governing internet-connected goods and services.  As the FTC staff report explained: “[t]his industry is in its relatively early stages.  Staff does not believe that the privacy and security risks, though real, need to be addressed through IoT-specific legislation at this time.  Staff agrees with those commentators who stated that there is great potential for innovation in this area, and that legislation aimed specifically at IoT at this stage would be premature.”

The marketplace for internet-connected goods and services surely will continue to expand, and the product and service landscape will advance rapidly.  Whether we will see more than $10 trillion in annual economic impact remains to be determined.  In this fast-moving environment, companies considering investments in the Internet of Things and Big Data, and the attorneys who advise them, would be well served to monitor the evolving regulatory and legislative landscape.

Peter Lefkowitz is Chief Counsel for Privacy & Data Protection, and Chief Privacy Officer, at General Electric. Mr. Lefkowitz previously served on the Boston Bar Journal’s Board of Editors.