Global Freedom of Expression

Twitter v. Taamneh

Closed · Expands Expression

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    May 18, 2023
  • Outcome
    Judgment in Favor of Petitioner
  • Case Number
    598 U. S. ____ (2023)
  • Region & Country
    United States, North America
  • Judicial Body
    Supreme Court (court of final appeal)
  • Type of Law
    Criminal Law
  • Themes
    Content Moderation, Digital Rights, Intermediary Liability
  • Tags
    Terrorism


Case Analysis

Case Summary and Outcome

The Supreme Court of the United States granted certiorari and ruled that Facebook, Google, and Twitter were not liable for “aiding and abetting” the Reina nightclub terrorist attack, perpetrated by ISIS, under §2333 of Title 18 of the U.S. Code. In the present case, family members of the victim Nawras Alassaf filed a lawsuit against the aforementioned companies, owners of social media platforms, for providing substantial assistance to ISIS in carrying out this act of international terrorism. The respondents alleged that these social media companies knew that ISIS used their platforms to recruit people and raise funds for attacks, yet failed to detect and remove its accounts, posts, and videos. The respondents further contended that the companies’ “recommendation” algorithms matched ISIS content with the users most likely to be interested in its posts. However, the Court held that a mere failure to remove content could not constitute “substantial assistance” unless an independent duty to act was identified. The Court concluded that the appellants were not liable for aiding and abetting under §2333, since the respondents failed to establish the appellants’ active involvement, for instance by showing that they provided special treatment to the terrorist organization, and the only “affirmative” act in the present case was the creation of the social media platforms.


Facts

On January 1, 2017, Abdulkadir Masharipov carried out a terrorist attack on the Reina nightclub in Istanbul, Turkey, on behalf of the Islamic State of Iraq and Syria (ISIS). After planning and coordinating the attack with an ISIS emir, Abu Shuhada, Masharipov entered the nightclub and fired over 120 rounds into a crowd of more than 700 people, killing 39 and injuring 69 others.

Nawras Alassaf was killed in the attack. His family members (the respondents) filed a lawsuit against three major social media companies, Facebook Inc., Google Inc., and Twitter Inc. (the appellants), under section 2333(d)(2) of Title 18 of the U.S. Code. Under this provision, U.S. nationals injured by “an act of international terrorism” that is “committed, planned, or authorized by” a designated foreign terrorist organization may sue any person who “aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism,” and recover treble damages.

Section 2333 was originally enacted as part of the Antiterrorism Act (ATA) in 1990 to authorize “estate, survivors, or heirs” to bring civil lawsuits when “injured in [their] person, property, or business by reason of an act of international terrorism.” [p. 6] Back then, the ATA did not explicitly impose liability on anyone who only helped the terrorists carry out the attack or conspired with them. Then, in 2016, Congress enacted the Justice Against Sponsors of Terrorism Act (JASTA), to provide for a form of secondary civil liability.

Thus, as the law now stands, those injured by an act of international terrorism can sue the relevant terrorists directly under §2333(a)—or they can sue anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism” under §2333(d)(2). [p. 7] The respondents alleged that “ISIS has used defendants’ social-media platforms to recruit new terrorists and to raise funds for terrorism” and therefore these organizations had “aided and abetted ISIS and thus were liable for the Reina nightclub attack.” [p. 3]

The appellants primarily generate revenue by charging third parties to advertise on their platforms. These advertisements are displayed alongside or close to the numerous videos, posts, comments, and tweets uploaded by the platforms’ users. To manage and display these advertisements and content effectively, the appellants have created “recommendation” algorithms that automatically match ads and content to each user. These algorithms generate these outputs by processing data about the user, the advertisement, and the viewed content. As claimed by the respondents, ISIS and its supporters have used these platforms over an extended period to recruit, gather funds, and spread their propaganda. [p. 4]

The District Court dismissed the respondents’ complaint for failure to state a claim, but the Ninth Circuit reversed, finding that the respondents had plausibly alleged that the appellants aided and abetted ISIS within the meaning of §2333(d)(2) and thus could be held secondarily liable for the Reina nightclub attack. The case then reached the Supreme Court, which granted certiorari to resolve whether the respondents had adequately stated such a claim under §2333(d)(2).


Decision Overview

Justice Thomas delivered the unanimous opinion of the Supreme Court of the United States, with Justice Jackson filing a concurring opinion. The central issue for consideration was whether Facebook, Twitter, and Google’s conduct, providing ISIS access to their social media platforms and associated services, constituted “aid[ing] and abett[ing], by knowingly providing substantial assistance,” [p. 8] such that they could be held liable for the Reina nightclub attack. To resolve this issue, the Court asked two questions: first, what does “aid and abet” mean; and second, what did the defendant “aid and abet”? [p. 8]

To interpret the phrase “aid and abet”, the Court referred to the legal framework laid down in Halberstam v. Welch, 705 F. 2d 472 (1983). In that case, Bernard Welch, a serial burglar, murdered Michael Halberstam during a break-in. Halberstam’s estate sued Welch’s live-in partner, Linda Hamilton, for aiding and abetting, and conspiring with, Welch, despite her alleged lack of awareness of the murder. Evidence indicated, however, that Hamilton willingly participated in Welch’s criminal activities: the couple had accumulated substantial wealth during their five-year cohabitation through the sale of stolen goods, facilitated by Hamilton’s bookkeeping work. The court in that case accordingly held that Halberstam’s murder was a “foreseeable” consequence of Hamilton’s participation in those criminal activities.

This case led to the development of three principles: first, “the party whom the defendant aids must perform a wrongful act that causes an injury.”; second, “the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance.”; and third, “the defendant must knowingly and substantially assist the principal violation.” [p. 10]

Additionally, Halberstam set out six factors for determining whether the defendant’s assistance was “substantial”: first, “the nature of the act assisted”; second, the “amount of assistance” provided; third, whether the defendant was “present at the time” of the principal tort; fourth, the defendant’s “relation to the tortious actor”; fifth, the “defendant’s state of mind”; and sixth, the “duration of the assistance” given. [p. 10]

While discussing these factors, the Court observed that “mere omissions, inactions, or non-feasance” could not be considered “substantial assistance,” as there is no generalized duty to rescue; if there were, then “aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance. For example … anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police.” [p. 12] The Court observed that “if aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer.” [p. 13] The Court further illustrated that not “all those present at the commission of a trespass are liable as principals” merely because they “make no opposition or manifest no disapprobation of the wrongful” acts of another. [p. 13] The Court opined that “the defendant has to take some ‘affirmative act’ with the intent of facilitating the offense’s commission.” [p. 14] The Court then concluded that inaction could be culpable only if there was some independent duty to act.

Therefore, the Court observed that the phrase “aids and abets” in §2333(d)(2) refers to “conscious, voluntary, and culpable participation in another’s wrongdoing.” [p. 17] Moreover, a person can be held liable for aiding and abetting if the injury was a foreseeable risk of the assistance provided. Thus, a close nexus between the assistance and the tort was not strictly necessary, as even “remote support” could constitute aiding and abetting in the right case.

Applying this framework to the instant case, the respondents alleged that ISIS used the appellants’ platforms to recruit new terrorists and to raise funds for terrorism. The respondents contended that the appellants were aware of this, yet failed to detect and remove a substantial number of ISIS-related accounts, posts, and videos. Accordingly, the respondents asserted that the appellants aided and abetted ISIS by knowingly allowing ISIS and its supporters to use their platforms and benefit from their “recommendation” algorithms, enabling ISIS to connect with the broader public, fundraise, and radicalize new recruits. In the process, the respondents argued, the appellants profited from the advertisements placed on ISIS’s tweets, posts, and videos.

Considering these allegations, the Court evaluated whether the appellants could be said to have “aided and abetted” the terrorist organization. First, it noted that ISIS had been able to use the social media platforms and engage with third parties like any other user, without undergoing any initial screening by the appellants. Second, the appellants’ recommendation algorithms allegedly matched ISIS-related content with the users most likely to be interested in such content. Finally, the appellants were accused of knowing that ISIS was uploading this content, and of its consequential impact, yet allegedly failing to take adequate measures to remove ISIS supporters and ISIS-related content from their platforms.

The Court opined that none of these allegations indicated that the appellants “culpably associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” [p. 22] The only “affirmative” conduct attributed to the appellants, per the Court, was the creation of their platforms and the establishment of algorithms to display content based on user inputs and history. The Court noted that the respondents did not allege that the appellants provided any special treatment or encouragement to ISIS after setting up their platforms, or that they selectively curated or screened any ISIS content.

The Court reasoned that “the mere creation of those platforms, however, is not culpable” [p. 23] and observed that, while malicious entities like ISIS could exploit platforms such as those operated by the appellants for illegal and heinous purposes, the same could be said of cell phones, email, or the internet as a whole. Accordingly, the Court ruled that “we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.” [p. 23]

Subsequently, the Court rejected the respondents’ assertion that the appellants’ algorithms went beyond passive aid and amounted to active, substantial assistance: the “recommendation” algorithms were part of the infrastructure that filtered and organized all content on the platforms, based on user-provided information and content characteristics. According to the Court, the fact that these algorithms matched certain ISIS content with specific users did not transform the appellants’ “passive assistance into active abetting.” [p. 23] At most, the Court said, it could be argued that the appellants “stood back and watched” [p. 24] while failing to stop ISIS from using their platforms. Nonetheless, the Court ruled that the respondents had failed to show that the appellants’ failure to stop ISIS from using these platforms was in any way culpable with respect to the Reina attack.

The Court concluded that the respondents had failed to show that the appellants treated ISIS differently from other users, that the appellants consciously participated in the terrorist attack, or that they had an independent duty to remove ISIS’s content. It also noted that the appellants’ relationship with ISIS was similar to other users: “arm’s length, passive, and largely indifferent” [p. 24] and therefore could not be considered culpable. The Court highlighted that the respondents could not identify any duty that would require “communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends.” [p. 25]

The Court also emphasized that the respondents failed to allege that the appellants “systemically and pervasively assisted ISIS” [p. 24] in a manner that could make them liable for every single ISIS attack. Nor did the respondents establish that “the platform consciously and selectively chose to promote content provided by a particular terrorist group,” [p. 26] which could perhaps have amounted to culpable assistance to the terrorist group. Therefore, the Supreme Court reversed the Ninth Circuit’s decision, concluding that the respondents’ allegations against the social media platforms were a “far cry from the type of pervasive, systemic, and culpable assistance to a series of terrorist activities that could be described as aiding and abetting each terrorist act.” [p. 26]

Furthermore, the Court noted that “the fact that some bad actors took advantage of these platforms was insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts.” [p. 27] It concluded that the Ninth Circuit erred in focusing primarily on the value of the appellants’ platforms to ISIS, rather than on whether the appellants culpably associated themselves with ISIS’s actions. The Ninth Circuit had also failed to consider that the appellants’ platforms and content-sorting algorithms were generally available to the internet-using public, and that ISIS’s ability to benefit from these platforms was merely incidental to the appellants’ services and general business models.


Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Expands Expression

The present case has potential implications for freedom of expression in the context of online platforms. The Court’s ruling indicates that internet or social media providers cannot be held liable for content posted by users, including content related to illegal activities such as terrorism, unless they engage in conscious, voluntary, and culpable participation in the wrongdoing. The Court held that the mere creation and operation of platforms, or the establishment of algorithms to display content based on user inputs and history, are not inherently culpable actions. It recognized that while malicious entities like ISIS may exploit these platforms for illegal purposes, the same argument could be made about other forms of communication, such as cell phones, email, or the internet as a whole. The Court stated that internet service providers generally do not incur liability for providing their services to the public or for facilitating communication between users. This decision reinforces the principle that online platforms should not bear the burden of monitoring and censoring all user-generated content, protecting the freedom of expression of internet users and avoiding the imposition of excessive liability on communication-providing services and intermediaries.

Global Perspective

Quick Info

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

National standards, law or jurisprudence

  • U.S., Halberstam v. Welch, 705 F. 2d 472 (1983)
  • U.S., Nye & Nissen v. United States, 336 U. S. 613 (1949)
  • U.S., Bartenwerfer v. Buckley, 598 U. S. 69 (2023)
  • U.S., Central Bank of Denver, N. A. v. First Interstate Bank of Denver, N. A., 511 U. S. 164 (1994)
  • U.S., Sekhar v. United States, 570 U. S. 729 (2013)
  • U.S., Brown v. Perkins, 83 Mass. 89 (1861)
  • U.S., Zoelsch v. Arthur Andersen & Co., 824 F. 2d 27 (1987)
  • U.S., Monsen v. Consolidated Dressed Beef Co., 579 F. 2d 793 (1978)
  • U.S., Doe v. GTE Corp., 347 F. 3d 655 (2003)
  • U.S., Passaic Daily News v. Blair, 308 A. 2d 649 (1973)
  • U.S., American Family Mutual Ins. Co. v. Grim, 201 Kan. 340 (1968)

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.


Official Case Documents



Amicus Briefs and Other Legal Authorities

  • Brief for Facebook and Google

    https://www.supremecourt.gov/DocketPDF/21/21-1496/247601/20221129131310083_21-1496%202022-11-29%20Taamneh%20Final.pdf
  • Brief on behalf of the Respondents

    https://www.supremecourt.gov/DocketPDF/21/21-1496/233560/20220815173733550_TaamnehOppCertPDF.pdf
  • Brief of the Chamber of Commerce of the United States of America, National Foreign Trade Council, United States Council for International Business, and Business Roundtable as Amici Curiae

    https://www.supremecourt.gov/DocketPDF/21/21-1496/249189/20221206140741581_Twitter%20v.%20Taamneh%20No.%2021-1496%20Brief%20of%20Amici%20Chamber%20of%20Commerce%20et%20al.pdf
  • Other documents filed by amici curiae

    https://www.scotusblog.com/case-files/cases/twitter-inc-v-taamneh/
  • Knight First Amendment Institute Amicus Brief

    https://knightcolumbia.org/cases/twitter-inc-v-taamneh
