Boston’s Victory Against Airbnb Poses Risks to Internet Economy

by Daniel Lyons

Legal Analysis

Like many popular tourist destinations, Boston benefits from the sharing economy. Innovative intermediaries such as Airbnb have helped middle-class residents supplement their incomes by monetizing their greatest assets: their homes. The new short-term rental market allows homeowners to keep up with rising living costs while providing additional capacity to attract tourists who contribute to the local economy.

Also like many cities nationwide, Boston has struggled with the unintended consequences of this new marketplace. Policymakers are concerned that the new market is incentivizing owners to remove long-term rentals from the housing stock, particularly in popular and space-constrained areas like Chinatown. To mitigate this risk, a new City of Boston ordinance (City of Boston Code, Ordinances, § 9-14) requires homeowners to register short-term rental properties with the City and prohibits certain categories of properties from being offered as short-term rentals.

But it is the enforcement mechanism that has drawn the most controversy. In addition to punishing individual homeowners who run afoul of the rules, the ordinance fines intermediaries like Airbnb $300 per day for each ineligible rental booked on the site.[1] Presumably, the fine is designed to entice these intermediaries to police their sites for violations. But while this attempt to deputize Airbnb reduces the City’s enforcement costs, it cuts against one of the fundamental tenets of Internet governance: that platforms generally are not liable for a user’s misuse of a neutral tool. This immunity, codified in Section 230 of the Communications Decency Act, 47 U.S.C. § 230, makes it possible for companies from eBay to Twitter to connect millions of users without having to monitor their every interaction for potential legal violations. In Airbnb v. City of Boston, 386 F. Supp. 3d 113 (D. Mass. 2019), the federal district court upheld the Ordinance against a Section 230 challenge, in a decision that weakens this core statutory protection and may have significant ramifications for the broader Internet economy.

Background: Section 230

Section 230 is the legal cornerstone of the modern Internet economy. Jeff Kosseff, a professor of cybersecurity law at the United States Naval Academy, describes it as The Twenty-Six Words That Created the Internet. The statute provides that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Congress passed Section 230 in 1996 in response to Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995), which held that online service providers could be held liable as publishers for defamatory statements made by their users. Section 230 itself states that it was designed to “preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation,” 47 U.S.C. § 230(b)(2), by giving platforms discretion to decide when and how to police their sites. It contains exceptions for claims arising under federal criminal statutes (including, in particular, sex trafficking), intellectual property laws (which are governed by a different intermediary liability regime), or state laws that “are consistent with this section.” 47 U.S.C. § 230(e).

The following year, the seminal case Zeran v. America Online, 129 F.3d 327 (4th Cir. 1997), demonstrated the expansive scope of the statute in the defamation context. This case involved ads posted on America Online (AOL) selling offensive T-shirts that made light of the 1995 Oklahoma City terrorist bombing. The ads falsely listed plaintiff Ken Zeran as the vendor and included Zeran’s home telephone number, prompting irate AOL users to inundate Zeran with angry calls and death threats. Zeran sued AOL, alleging that he notified the company of the defamatory posts but it unreasonably delayed in removing them. The Fourth Circuit found that Section 230 immunized AOL from liability even for messages that the company knew were defamatory. The court justified this broad immunity by noting that with “millions of users,” interactive computer services process a “staggering” amount of information. Id. “Faced with potential liability for each message republished by their services, interactive computer providers might choose to severely restrict the number and type of messages posted,” a threat to free speech that Congress sought to guard against. Id.

Subsequent court cases have extended Section 230 far beyond the defamation context, immunizing Craigslist against claims of facilitating housing discrimination, eBay from products liability claims, and StubHub from violations of state ticket scalping laws. It is the resulting broad immunity, protecting intermediaries from liability for most user misconduct, that has shaped much of the current Internet ecosystem. Section 230 lets online news outlets and blogs host comment threads without fear of what readers may say. It allows Amazon, TripAdvisor, and Yelp to aggregate and display consumer feedback about products and services. Without Section 230, social media platforms like Facebook and Twitter likely would not exist—or would not be free—because of the high cost of screening every post for potential liability.

Of course, while Section 230 shields the platform from intermediary liability, the user remains liable if the underlying post violates the relevant law. And as the Ninth Circuit explained in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008), the platform loses its immunity if it is responsible, in whole or in part, for formulating the offending message.

Section 230 and Boston’s Short-Term Rental Ordinance

Given this robust history of Section 230, it seemed an uphill battle for Boston and similar cities seeking to deputize platforms to enforce short-term rental regulations. Like eBay and StubHub listings, the content of an Airbnb listing is written by the individual homeowner. While a local ordinance could penalize individual homeowners for listing ineligible properties, Section 230 prohibits a local ordinance from forcing Airbnb to “verify” that listed properties comply with the law by punishing it for listing an illegal unit. In 2012, a court struck down a comparable attempt by the State of Washington to fine online classified ad publishers unless they verified that models featured in online prostitution ads were adults. See Backpage.com v. McKenna, 881 F. Supp. 2d 1262 (W.D. Wash. 2012).

Boston sought to circumvent Section 230 by punishing not the listing of an illegal unit, but rather the provision of booking services for an illegal unit. The law provides that “any Booking Agent who accepts a fee for booking a unit as a Short-Term Rental, where such unit is not an eligible Residential unit, shall be fined” $300 per violation per day. Airbnb sued to enjoin the provision, arguing that the focus on a booking fee rather than the listing was a distinction without a difference, that the effect of the ordinance was to hold intermediaries liable for their users’ misrepresentations, and that Section 230 therefore preempts the ordinance.

On preliminary injunction, the court sided with the City.[2] The court found that the penalty provision punished Airbnb for the company’s own conduct, namely accepting a fee for booking an ineligible unit.[3] The court explained that the fine is not tied to the content of the underlying listing, and noted that Airbnb remains free to list ineligible units without incurring liability, as long as it does not provide booking services for one.[4] In essence, it requires the company, at the booking stage, to confirm that a listing is eligible under the statute before collecting a fee to complete the transaction.[5] The decision mirrored, and relied upon, two recent decisions upholding similar ordinances in California: HomeAway.com, Inc. v. City of Santa Monica, 918 F.3d 676, 680 (9th Cir. 2019), and Airbnb, Inc. v. City & Cty. of San Francisco, 217 F. Supp. 3d 1066, 1071 (N.D. Cal. 2016). In the process, the court rejected Airbnb’s argument that the First Circuit has interpreted Section 230 more broadly than the Ninth Circuit.[6]

Although Airbnb appealed the decision to the First Circuit, it ultimately settled before argument to reduce its financial exposure. Under the settlement agreement, the company agreed to require any user posting a Boston listing to provide a City-issued Registration Number. The company also agreed to send Boston a monthly report of active listings within the City. The City will then notify Airbnb of listings that it believes are ineligible, which Airbnb will deactivate within 30 days. The agreement provides that compliance with this procedure will constitute a safe harbor shielding against booking agent liability under the ordinance.
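Stripped to its essentials, the settlement replaces the ordinance’s open-ended verification duty with a defined monitor-and-takedown protocol. The sketch below illustrates that workflow under stated assumptions; the function names and data structures are hypothetical and are not drawn from Airbnb’s actual systems or the agreement’s precise terms.

```python
from datetime import date, timedelta

# A minimal, hypothetical sketch of the settlement's monitor-and-takedown
# workflow: registration numbers at posting, monthly reporting, and
# deactivation within 30 days of a City notice.

def validate_new_listing(listing: dict) -> bool:
    """Reject any Boston listing posted without a City-issued Registration Number."""
    return bool(listing.get("registration_number"))

def monthly_report(listings: dict) -> list[dict]:
    """Compile the monthly report of active Boston listings for the City."""
    return [
        {"listing_id": lid, "registration_number": l["registration_number"]}
        for lid, l in listings.items()
        if l["active"]
    ]

def process_city_notice(listings: dict, flagged_ids: list[str], notice_date: date) -> None:
    """Deactivate listings the City flags as ineligible within 30 days of notice."""
    deadline = notice_date + timedelta(days=30)
    for lid in flagged_ids:
        listings[lid]["active"] = False
        listings[lid]["deactivate_by"] = deadline  # must occur on or before deadline
```

Following this procedure, rather than verifying every booking independently, is what constitutes the settlement’s safe harbor against booking agent liability.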

Unintended Consequences of Court Decision

One can sympathize with Boston’s desire to rein in the excesses of the short-term rental market. Tourist demand for alternatives to traditional lodging remains high, increasing the risk that short-term rentals will siphon off housing stock in an already capacity-constrained residential market. This is especially problematic if the properties in question receive benefits (such as low-income assistance) designed to encourage residential stability, if the property poses a risk to tourists, or if increased tourist activity harms the local community.

In that sense, it is both expected and appropriate that the City would regulate Boston homeowners who seek to participate in the short-term rental market, just as it does innkeepers and landlords. Boston has authority to decide which properties can be made available and on what terms. And it is free to enforce those regulations directly against individual violators, by dedicating resources to reviewing listings, identifying properties that are out of compliance with the ordinance, and bringing appropriate enforcement action against the lawbreakers.

But the court’s approval of the City’s plan to commandeer platforms to aid enforcement reflects a potentially problematic shift in Section 230 jurisprudence. As an initial matter, the court’s distinction between listing and booking seems strained. The court posited that Airbnb remains free to list illegal units, as long as it doesn’t actually book them. But as Professor Eric Goldman of Santa Clara University notes in connection with the similar San Francisco ordinance, listing properties that the company cannot or will not book could set up Airbnb for a false advertising suit; if it wishes to adhere to its preexisting business model and avoid bait-and-switch liability, the company effectively must verify that listings are eligible before posting.

Even if, as the court suggested, Airbnb need only verify eligibility at the point of booking, the verification obligation imposes significant costs upon these intermediaries. The court minimized this obligation, stating that the ordinance “simply requires Airbnb to cross-reference bookings against the City’s list of ineligible units before collecting its fees.”[7] But this understates the burden that Airbnb faces. Boston’s ordinance punishes the acceptance of a fee for booking an ineligible unit, a category that includes:

  • Units subject to affordability covenants or housing assistance under local, state, or federal law;
  • Units prohibited from leasing or subleasing under local, state, or federal law; and
  • Units subject to three or more violations of any municipal ordinance or state law relating to excessive noise, improper trash disposal, or disorderly conduct within a six-month period.[8]

While the ordinance requires the City to create an ineligible units list, it does not provide a safe harbor for booking agents that cross-reference bookings against that list. On its face, then, booking agents must independently determine whether each Boston booking violates any of the myriad eligibility requirements.
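The gap between the court’s framing and the ordinance’s text can be made concrete. In the hedged sketch below, the first function reflects the simple cross-referencing the court described, while the second reflects the independent determinations the ordinance appears to require on its face; all field names are hypothetical, and the ineligibility categories track Section 9-14.4A as summarized in the list above.

```python
# Illustrative contrast only; not the City's or Airbnb's actual logic.

def check_against_city_list(unit_id: str, city_ineligible_list: set[str]) -> bool:
    """The court's framing: a simple cross-reference before collecting a fee."""
    return unit_id not in city_ineligible_list

def independently_verify(unit: dict) -> bool:
    """The ordinance on its face: the booking agent itself must determine
    that no ineligibility category applies to the unit."""
    if unit.get("affordability_covenant") or unit.get("housing_assistance"):
        return False  # units subject to affordability or assistance restrictions
    if unit.get("leasing_prohibited_by_law"):
        return False  # units barred from leasing or subleasing
    if unit.get("violations_in_past_six_months", 0) >= 3:
        return False  # three or more noise/trash/disorderly-conduct violations
    return True
```

The first check is a set lookup; the second requires the booking agent to gather facts about covenants, lease terms, and municipal violation histories that it has no reliable way to observe.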

The settlement reduced Airbnb’s compliance costs, but the ordinance remains as written for other booking agents. Of course, the cost of even the settlement’s modified monitor-and-takedown procedure is not trivial—particularly if, as Professor Goldman notes, other cities follow Boston’s example. Airbnb and other intermediaries must keep abreast of nuanced ordinances in myriad cities and states nationwide and tailor their algorithms to verify eligibility. While this increased cost may not make the booking model uneconomic, it could lead some booking companies to withdraw from more heavily regulated markets.

The proliferation of ordinances like Boston’s could also entrench existing companies by raising the costs of entry for new entrepreneurs in this space. Indeed, this could be one reason why Airbnb settled the Boston case and similar litigation in Miami Beach, Florida: as the market leader, Airbnb can perhaps bear these compliance costs more easily than its competitors. The settlement agreement itself suggests that Airbnb is using regulation to secure its position: a provision titled “Fairness Across Platforms” requires the City to negotiate with Airbnb’s competitors (three of which are listed by name); mandates that the City provide Airbnb a copy of any agreement it enters with another platform; and provides for Airbnb to modify its agreement if another platform receives a more favorable provision. It also requires the City to confer with Airbnb to discuss compliance efforts taken against platforms that have not entered such agreements.

Ramifications for the Broader Internet Economy

The Boston Airbnb decision shows that the erosion of Section 230 immunity is now spreading beyond the Ninth Circuit. Other cities that share Boston’s concerns about the growth of the short-term rental market now have a model for enlisting platform providers as enforcers. For Airbnb and similar platforms, this likely means dedicating additional compliance resources to tracking and responding to a growing number of local regulations.

Entrepreneurs and those advising platform-based startups should also recognize that this erosion is not necessarily limited to the short-term housing market. The court’s approval of a verification obligation could open the door to significant state and local regulation of the Internet economy. For example, Professor Goldman notes that licensing boards could require online marketplaces to verify that sellers have appropriate business licenses before completing a transaction. Cities may require ride-share operators to ensure that drivers meet local qualifications. States could require eBay and other clearinghouses to confirm that goods comply with local commerce and product liability laws. And payment processors further up the supply chain could find themselves saddled with similar verification requirements.

The court’s decision also shapes how future tech entrepreneurs should structure their businesses. By bifurcating Airbnb’s listing and booking functions, the decision favors certain business models over others. Airbnb faces liability for facilitating rental of an ineligible property, while online classified ad companies like Craigslist retain Section 230 immunity for the same action, based solely on how each company chooses to fund its activities. Going forward, this decision incentivizes companies to move away from collecting fees for facilitating transactions, and instead to embrace advertising-based revenue models, or models that charge a fee per listing—both of which would remain protected under Section 230.

It is too early to state with precision what effect this decision will have on the development of the sharing economy. But the court’s decision, coupled with the San Francisco and Santa Monica cases, suggests that local regulators may have a powerful new tool to address their public policy concerns. Internet-based platform providers must adapt if they wish to continue relying upon Section 230 to shield innovative new efforts to connect buyers and sellers online.

[1] As the court clarified, “ineligible” properties are those that categorically cannot be offered as short-term rentals. The statute does not punish booking agents for booking eligible but unregistered properties.

[2] Airbnb, 386 F. Supp. 3d at 120.

[3] Id.

[4] Id. at 120-121.

[5] The Court contrasted this Penalty Provision with another part of the statute, the “Enforcement Provision,” which prohibits Airbnb from operating within Boston unless it enters an agreement with the city to “actively prevent, remove, or de-list any ineligible listings.” See id. at 123-124. At oral argument, the city conceded that the threat of banishment for failure to monitor and remove listings effectively imposed liability on Airbnb for publication of third-party content, and on the basis of that concession, the court enjoined the Enforcement Provision. Id. at 123. The court also enjoined parts of a data reporting provision on unrelated grounds. Id. at 124-125.

[6] Id. at 120 n.5.

[7] Airbnb, 386 F. Supp. 3d at 121.

[8] See An Ordinance Allowing Short-Term Residential Rentals in the City of Boston, Section 9-14.4A.

 

Daniel Lyons is a Professor at Boston College Law School, where he researches and writes in the areas of telecommunications, energy, and administrative law. Professor Lyons is also a Visiting Fellow at the American Enterprise Institute, where he regularly blogs about tech policy issues.


Fair Housing Enforcement in the Age of Digital Advertising: A Closer Look at Facebook’s Marketing Algorithms

by Nadiyah Humber and James Matthews

Legal Analysis

Introduction

The increasing use of social media platforms to advertise rental opportunities creates new challenges for fair housing enforcement.  The Fair Housing Act, 42 U.S.C. §§ 3601-19 (“FHA”), makes it unlawful to discriminate in the sale or rental of housing on the basis of race, color, religion, sex, familial status, national origin, and disability (“protected classes”).  The FHA also prohibits discriminatory advertising, including distributing advertisements in a way that denies people information about housing opportunities based on their membership in a protected class.  Accordingly, advertisers and digital platforms that intentionally or unintentionally cause housing advertisements to be delivered to users based on their membership in a protected class may be liable for violating the FHA.

In March 2018, in response to what they perceived to be discriminatory advertising on Facebook, the National Fair Housing Alliance (“NFHA”) and several housing organizations filed suit in federal court in New York City.[1]  The lawsuit alleged that Facebook’s advertising platform enabled landlords and real estate brokers to prevent protected classes from receiving housing ads.  Facebook settled the suit on March 19, 2019.[2]  As part of the settlement, Facebook agreed to make a number of changes to its advertising portal so that housing advertisers can no longer choose to target users based on protected characteristics such as age, sex, race, or zip code.  Facebook also committed to allow experts to study its advertising platform for algorithmic bias.  It remains to be seen whether this agreement goes far enough in curtailing discriminatory advertising practices, as Facebook confronts further enforcement action from a government watchdog with respect to similar issues.  Moreover, a recent research study found that Facebook’s digital advertising platform may still lead to discriminatory outcomes despite the changes already made.

On August 13, 2018, the Assistant Secretary for Fair Housing and Equal Opportunity filed a complaint with the Department of Housing and Urban Development (“HUD”) alleging that Facebook is in violation of the FHA.  The Office of Fair Housing and Equal Opportunity determined in March 2019 (at the same time as the settlement agreement with NFHA) that reasonable cause existed and issued an official Charge of Discrimination against Facebook.[3]

Notwithstanding these suits and administrative actions, for fair housing claims against media giants like Facebook to survive in court, HUD and future plaintiffs must first successfully argue that Facebook is not protected by the Communications Decency Act (“CDA”).[4]

Communications Decency Act

Congress enacted the CDA, in part, to prohibit obscene or indecent material from reaching children on the internet, and also to safeguard internet ingenuity.[5]  What was meant as a protective measure for the young, impressionable, and inventive, however, evolved into a powerful defense tool used by web platforms like Facebook.  Section 230 of the CDA immunizes providers of interactive computer services against liability arising from content created by third parties.  To overcome the CDA hurdle, litigants have to demonstrate that Facebook “materially contributes” to the content on its platform.  Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1163 (9th Cir. 2008).  While many online service providers have successfully used Section 230 in their defense, the protections offered to internet service providers are not absolute.

The CDA contains requirements that restrict the application of Section 230.[6]  The language in Section 230 prevents a “provider or user of an interactive computer service” from being “treated as the publisher or speaker of any information” that is exclusively “provided by another content provider.”[7]  The U.S. Court of Appeals for the Ninth Circuit concluded that publishing amounts to “reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.”  Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009).  The idea is that website operators would no longer be liable for deciding to edit or remove offending third-party content.

Based on this reading, the law immunizes only “certain internet-based actors from certain kinds of lawsuits.”[8]  The statute, as discussed in Roommates.com, LLC, 521 F.3d at 1162, provides no protection to online content that was created by a website operator or developed – in whole or in part – by the website operator.  Courts have also reaffirmed that the CDA’s scope is limited to protecting self-policing service providers that act as “publishers” of third-party content; it does not bar all categories of third-party claims (e.g., the violations of civil rights laws at issue in this article).  Barnes, 570 F.3d at 1105; accord Doe v. Internet Brands, Inc., 824 F.3d 846, 852-53 (9th Cir. 2016).  These limitations are crucial.  If a plaintiff can show that Facebook developed the content on its platform in whole or in part or, content aside, that Facebook produces discriminatory outcomes via mechanisms on its platform developed by Facebook, it may be excluded from Section 230 immunity.

Optimization Discrimination Study

A recent study by researchers at Northeastern University[9] demonstrates Facebook’s control over ad dissemination, showing how Facebook manages the delivery of information based on headlines, content, and images through a process called “optimization.”[10]  In short, the Study set out to determine how advertising platforms themselves play a role in creating discriminatory outcomes.  The Study highlighted the mechanisms behind, and impact of, ad delivery, which is a process distinct from ad creation and targeting.  For example, the Study found that ads featuring musical content stereotypically associated with Black individuals were delivered to an audience that was over 85% Black, while ads featuring musical content stereotypically associated with White individuals were delivered to an audience that was over 80% White.  The researchers concluded that the “ad delivery process can significantly alter the audience the ad is delivered to compared to the one intended by the advertiser based on the content of the ad itself.”  The Study also simulated marketing campaigns and found that Facebook’s algorithms “skewed [ad] delivery along racial and gender lines,” which are protected categories under the FHA.  These results suggest that, even if a housing advertiser can no longer choose to explicitly target ads based on attributes like age, gender, and zip code, a housing advertiser could still use Facebook’s marketing platform to steer ads away from protected segments of users by manipulating the content of the ad itself.  Moreover, the platform may cause such discriminatory outcomes regardless of whether the advertiser intended such results.
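To illustrate the kind of skew the Study measures, consider a minimal sketch of an audience-composition calculation.  The records and function names below are invented for illustration; the Study’s actual data pipeline and methodology are set out in the paper cited in note 9.

```python
from collections import Counter

# A fabricated, minimal illustration of measuring the demographic
# composition of an ad's delivered audience.

def delivery_breakdown(impressions: list[dict]) -> dict[str, float]:
    """Return each demographic group's share of an ad's delivered impressions."""
    counts = Counter(imp["group"] for imp in impressions)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# An ad with neutral targeting that nonetheless reaches an audience over 85%
# one group would exhibit the delivery skew the Study attributes to the platform.
impressions = [{"group": "Black"}] * 86 + [{"group": "White"}] * 14
print(delivery_breakdown(impressions))  # {'Black': 0.86, 'White': 0.14}
```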

Case Law Interpreting CDA

The Study’s findings set the foundation for evaluating Facebook’s control over the manipulation of content and ad distribution on its platform.  Two seminal cases, Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), and Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008), outline tests to determine when online platforms are considered content providers rather than mere service providers.[11]  The Study makes a strong case for why Facebook is a content provider, eliminating immunity under Section 230.  Litigants can also persuasively distinguish their arguments against Facebook from a recent decision interpreting Section 230 liability.  In Herrick v. Grindr, LLC, 306 F. Supp. 3d 530 (S.D.N.Y. 2018), aff’d, 765 F. App’x 586 (2d Cir. 2019), the courts ruled in favor of Grindr (a same-sex dating application) on all but one of the plaintiff’s claims.  The plaintiff had argued that Grindr failed to monitor and remove content created by the plaintiff’s ex-partner, and the court concluded that Section 230 barred several of his claims because they were “inextricably related” to Grindr’s role in editing or removing offending content (which is protected conduct under the CDA).  Id.  The Supreme Court denied Herrick’s petition for certiorari on October 7, 2019.[12]

A major distinguishing feature between the facts in Herrick and the Study’s findings against Facebook is how the two websites handle third-party content.  In Herrick, the claim against Grindr was based on Grindr’s failure to remove content generated by a third party.  The issue with Facebook lies in its use of optimization algorithms.  The point is that discriminatory outcomes are ultimately a result of Facebook’s manipulation of ad delivery for the purpose of reaching certain groups to the exclusion of others in protected categories.  Facebook’s tools go well beyond the function of “neutral assistance,” because its platform directs advertisements to sectors of people using discriminatory preferences created by Facebook, not third parties.[13]

Intentional Discrimination

If it can be successfully argued that Facebook is not immune from suit under the CDA, housing advertisers and digital platforms that intentionally or unintentionally target ads to certain groups of users based on their membership in a protected class may be sued for violating the FHA.  As described above, the Study determined that housing advertisers may still be able to use Facebook’s marketing platform to steer housing ads away from protected classes of tenants by manipulating the content of the ad.  In such circumstances, the housing advertiser who uses the ad’s content as a covert method of discriminatory distribution may be violating the FHA.  The digital platform may also be liable, either because it is actively involved in facilitating the selective distribution of ads or under agency principles for the advertiser’s conduct.

Disparate Impact

Even if it cannot be shown that a housing advertiser intended to discriminate, if the ad delivery mechanism has the effect of distributing housing ads in a discriminatory way, the advertiser and platform may still be liable for violating the FHA under a theory of disparate impact.  Disparate impact discrimination occurs when a neutral policy or practice has a discriminatory effect on members of a protected class.  See Texas Dep’t of Hous. & Cmty. Affairs v. Inclusive Communities Project, Inc., 135 S. Ct. 2507, 2523 (2015); see also 24 C.F.R. § 100.500.  A three-part burden-shifting framework is used to evaluate liability.  Id.  Protected class members have the initial burden of establishing that a practice has a disproportionate adverse effect on a protected class.  To meet this initial burden, a plaintiff must “allege facts at the pleading stage or produce statistical evidence demonstrating a causal connection” between the policy and the disparate impact.  Inclusive Communities, 135 S. Ct. at 2523.

If a protected class member makes out a prima facie claim of disparate impact, the burden then shifts to the accused party to show that the practice is necessary to achieve a valid interest.  See Robert G. Schwemm & Calvin Bradford, Proving Disparate Impact in Fair Housing Cases After Inclusive Communities, 19 N.Y.U. J. Legis. & Pub. Pol’y 685, 696-97 (2016).  The protected class members then have an opportunity to show that the interest could be achieved through less discriminatory means.  Id.

In the digital advertising context, protected class members would have the initial burden of showing that they were denied equal access to information about a housing opportunity as a result of a housing advertiser’s marketing campaign.

Statistical Evidence

While the Study was able to demonstrate the potential for “skewed” ad delivery based on protected characteristics, further research is needed to determine how a plaintiff might marshal statistical evidence to support a particular claim.  As the Study notes, without access to a platform’s “data and mechanisms” it may be difficult to assess whether a particular advertising campaign has led to discriminatory outcomes.[14]  It may therefore be challenging for adversely affected users to develop the necessary data at the pleading stage to make out a prima facie claim of disparate impact.  This might explain why HUD is continuing to pursue its legal challenge against Facebook despite the remedial measures Facebook has already agreed to undertake, including allowing research experts to study its advertising platform for algorithmic bias.[15]  In other words, HUD’s intent may be to better understand how Facebook’s ad delivery algorithm works so that it can limit its discriminatory impact.
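What might such statistical evidence look like in practice?  The sketch below applies a standard two-proportion z-test to invented exposure figures.  It is purely illustrative and assumes access to the kind of platform delivery data that, as noted above, plaintiffs typically lack.

```python
from math import erf, sqrt

# A hedged illustration of statistical evidence for a disparate impact claim:
# comparing how often protected-class and other users received a housing ad.
# All figures are invented for the example.

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> tuple[float, float]:
    """Return the z-statistic and two-tailed p-value for the difference
    in ad exposure rates between groups A and B."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# e.g., 300 of 10,000 protected-class users saw the ad vs. 600 of 10,000 others
z, p = two_proportion_z(300, 10_000, 600, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a large |z| and tiny p suggest non-random disparity
```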

Causal Connection

Because digital advertising companies play an active role in the ad delivery process, it follows that a discriminatory distribution of ads could be attributed to the platform.  While there are few decisions involving FHA liability and algorithmic decision-making programs, the court in Connecticut Fair Hous. Ctr. v. Corelogic Rental Prop. Sols., LLC, 369 F. Supp. 3d 362 (D. Conn. 2019), found that the plaintiffs had pled sufficient facts to establish a causal connection between a tenant screening company’s alleged activity and unlawful housing denials to support a claim of disparate impact based on race.  Id. at 378-79.  The court found that the defendant had created and provided the automated screening process, suggested the categories by which the housing provider could screen potential tenants, made eligibility determinations, and sent out letters to potential tenants notifying them of these decisions.  Id.

Digital advertising companies similarly create the marketing platform for housing advertisers to use, provide criteria from which to choose the users, and design and maintain the algorithms that decide to whom the ads will be delivered.  Therefore, a sufficient nexus should exist between the advertising platform’s activity and the selective distribution of ads to support a disparate impact claim.

Valid Interest

Housing providers and digital advertising platforms arguably have a “valid interest” in being able to effectively market their housing services, and ad delivery algorithms are an efficient way to reach relevant users.  However, given the abundance of print and online advertising options that do not rely solely on ad delivery algorithms, such as Craigslist, Zillow, Trulia, and Apartments.com, less discriminatory means exist by which housing advertisers can successfully market their services.

HUD recently proposed a new disparate impact rule that would raise the bar even higher for plaintiffs bringing disparate impact claims and provide housing advertisers with a defense if a digital advertising platform’s algorithmic model was the cause of a discriminatory outcome.[16]  A number of tenant advocacy groups and other stakeholders, such as Harvard Law School’s Cyberlaw Clinic, have submitted comments opposing the proposed rule, arguing, among other concerns, that it would perpetuate discrimination by “significantly reduc[ing] incentives for algorithm users and vendors to test their tools for bias” contrary to the purpose of the FHA.[17]

Conclusion

The FHA was designed to provide all home-seekers who have the resources with equal access to housing stock and opportunity.  It seems clear that online platforms in the business of designing and maintaining advertising algorithms have an impact on large segments of protected populations.  The tension between the need for more information to combat discriminatory algorithms and platforms’ proprietary interests remains.  One important way forward is to balance these interests within the bounds of the FHA: creating incentives for platforms to evaluate their ad delivery tools for distribution bias, and ensuring more inclusive participation in the housing market for all social media users.

Nadiyah J. Humber is an Assistant Clinical Professor of Law and Director of the Corporate Counsel, Government, and Prosecution Clinical Externship Programs at Roger Williams University School of Law (“RWU”). RWU students earn academic credit externing for in-house legal offices of corporations, offices of prosecution, and government agencies in Rhode Island and beyond. Professor Humber teaches related seminars for each program on the role of one-client entities and professional development through practice.

James Matthews is a Clinical Fellow in Suffolk Law School’s Accelerator Practice and Housing Discrimination Testing Program (HDTP), where he supervises law students in housing discrimination, landlord-tenant, and other consumer protection matters related to housing. Attorney Matthews also has significant teaching and professional presenting experience. He helps conduct fair housing trainings and presentations as part of HDTP’s community education and outreach. He also teaches an upper-level landlord-tenant course he developed, which includes instruction on state and federal fair housing law.

[1] Nat’l Fair Housing Alliance, et al v. Facebook, Inc., No. 18 Civ. 2689, Complaint (detailing allegations), available at https://nationalfairhousing.org/wp-content/uploads/2018/03/NFHA-v.-Facebook.-Complaint-w-Exhibits-March-27-Final-pdf.pdf (last visited Jan. 13, 2020).

[2] National Fair Housing Alliance, Facebook Settlement, available at https://nationalfairhousing.org/facebook-settlement/ (last visited Jan. 20, 2020).

[3] Assistant Sec’y of Fair Hous. & Equal Opportunity v. Facebook, Inc., No. 01-18-0323-8, at 1, Charge of Discrimination (detailing procedural history), available at https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf (last visited Dec. 8, 2019).

[4] 47 U.S.C. § 230(c) (2019).

[5] Brief of Internet, Business, and Local Government Law Professors as Amici Curiae Supporting the Respondents at 7, Homeaway.com & Airbnb, Inc. v. City of Santa Monica, Nos. 2:16-cv-06641-ODW, 2:16-cv-06645-ODW (9th Cir. May 23, 2018).  See generally 47 U.S.C. § 230(b) (detailing policy goals for freedom on the internet).

[6] Brief of Internet, Business, and Local Government Law Professors as Amici Curiae Supporting the Respondents at 9, Homeaway.com & Airbnb, Inc. v. City of Santa Monica, Nos. 2:16-cv-06641-ODW, 2:16-cv-06645-ODW (9th Cir. May 23, 2018).

[7] 47 U.S.C. §§ 230(c)(1), (f)(3) (2019).

[8] Brief of Internet, Business, and Local Government Law Professors as Amici Curiae Supporting the Respondents at 4, Homeaway.com & Airbnb, Inc. v. City of Santa Monica, Nos. 2:16-cv-06641-ODW, 2:16-cv-06645-ODW (9th Cir. May 23, 2018).

[9] Muhammad Ali et al., Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes, available at https://www.ccs.neu.edu/home/amislove/publications/FacebookDelivery-CSCW.pdf (last visited Dec. 6, 2019).

[10] Id. at 7 (explaining optimization on Facebook).

[11] See Nadiyah J. Humber, In West Philadelphia Born and Raised or Moving to Bel-Air? Racial Steering as a Consequence of Using Race Data on Real Estate Websites, 17 Hastings Race & Poverty L.J. 129, 153-155 (2020) (analyzing pertinent case law precedent for Section 230 immunity). There is a difference between online services that manage content (content provider) on their sites versus those that act more as a store house of information (service provider). Id.

[12] Bloomberg Law, available at https://news.bloomberglaw.com/tech-and-telecom-law/grindr-harassment-case-wont-get-supreme-court-review.

[13] 47 U.S.C. § 230(c) (2019) (citing language from the act). Distinguishing O’Kroley v. Fastcase Inc., No. 3-13-0780, 2014 WL 2881526, at *1–2 (M.D. Tenn. June 25, 2014) (finding that providing search returns based on automated algorithms and user inputs does not constitute creating content).

[14] Ali et al., supra note 9.

[15] See supra note 2.

[16] See HUD’s Implementation of the Fair Housing Act’s Disparate Impact Standard, 84 Fed. Reg. 42853 (proposed Aug. 19, 2019) (for example, providing a defense where a “(2) plaintiff alleges that the cause of a discriminatory effect is a model, such as a risk assessment algorithm, and the defendant . . . (ii) Shows that the challenged model is produced, maintained, or distributed by a recognized third party that determines industry standards, the inputs and methods within the model are not determined by the defendant, and the defendant is using the model as intended by the third party . . .”)

[17] See Cathy O’Neil, Comment Regarding Docket No. FR-6111-P-02, http://clinic.cyber.harvard.edu/files/2019/10/HUD-Rule-Comment-ONEIL-10-18-2019-FINAL.pdf (last visited Jan. 20, 2020).