Themes: Content Regulation / Censorship, Digital Rights
NetChoice v. Moody
Case Status: In Progress. Decision Direction: Mixed Outcome.
The United States Court of Appeals for the Eleventh Circuit upheld a preliminary injunction in respect of specific provisions of a Florida Senate Bill which sought to “combat the ‘biased silencing’ of ‘our freedom of speech as conservatives … by the “big tech” oligarchs in Silicon Valley’.” [p. 7] Trade associations representing the big social-media platforms approached the courts, seeking an injunction against the enforcement of the Bill, arguing that the law’s restrictions on the platforms’ content moderation and disclosure activities violated their right to free speech under the US Constitution’s First Amendment. The lower court granted the broad injunction. On appeal, the Court accepted that the majority of the contentious provisions were “substantially likely” to be unconstitutional, and so would meet the standards for a preliminary injunction. Following an analysis of all the impugned provisions, the Court declared that specific provisions requiring disclosure from the platforms were likely to be constitutional and so did not grant the injunction in respect of those provisions. The Court stressed that social-media platforms engage in protected speech when moderating the content on their platforms, and that, as private companies, they are entitled to curate a specific type of content and community for their platforms.
In May 2021, the State of Florida enacted a law, Senate Bill (S.B.) 7072, to combat the so-called effort of silencing conservative speech in favor of a more leftist agenda. S.B. 7072 contained several provisions which applied to social-media platforms, broadly divided into three categories: “1) content-moderation restrictions; 2) disclosure obligations and 3) a user-data requirement”. [p. 9]
The content-moderation restrictions prevented social-media platforms from removing a candidate for public office from the platform (“deplatforming”); limiting or prioritizing posts by or about political candidates; and censoring any “journalistic enterprise”. They also required the platforms to apply “consistency” in their decisions to remove or limit posts or users; to allow users to “opt-out” of receiving a moderated feed; and to not change its conditions or standards more than once every 30 days. The disclosure provisions required platforms to publish their standards and rule changes; and to allow access to information on “view counts” of posts or users’ content. They also regulated the free advertising platforms were allowed to provide to candidates. The provisions also required the platforms to provide detailed explanations to a user when any post or content is moderated. The user-data requirement provision required the platforms to make available to any user on request the data from their account for at least 60 days after the account has been removed.
Two trade associations, NetChoice and the Computer & Communications Industry Association, representing a variety of internet and social media companies, sought to enjoin the enforcement of specific provisions in S.B. 7072 on the grounds that those provisions violated social-media platforms’ right to free speech and were “pre-empted by federal law” (in that there is a federal law which provides for competing obligations and so would override S.B. 7072). [p. 14] 47 U.S.C. § 230(c)(2) states that “[n]o provider or user of an interactive computer service shall be held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. [p. 14]
The district court granted NetChoice’s motion and preliminarily enjoined the enforcement of certain provisions of S.B. 7072 on the grounds that 47 U.S.C. § 230(c)(2) shields providers from liability for good-faith decisions to restrict access to objectionable content. The district court also held that S.B. 7072 did implicate the platforms’ First Amendment rights as it restricted their constitutionally-protected exercise of “editorial judgement”. [p. 14] The district court concluded that the entire bill was mainly motivated by the State’s purpose of defending conservatives’ speech from the perceived “big-tech” bias [p. 14]. Accordingly, it held that NetChoice met all the requirements for the preliminary injunction.
The State of Florida appealed to the Court of Appeals for the Eleventh Circuit.
Judge Newsom delivered the majority opinion with concurrence by Judges Tjoflat and Ed Carnes. The central question for the Court’s determination was whether the content moderation carried out by social-media platforms is a constitutionally-protected exercise of expression.
The State argued that S.B. 7072 does not implicate or violate the First Amendment as the moderation by social-media platforms is not protected speech. The State submitted that these platforms merely “host” third parties’ speech. [p. 15] In addition, the State argued that social media platforms are “common carriers” which are “certain services that society determines shouldn’t be required to do without”. [p. 40]
NetChoice argued that the Bill violated the platforms’ First Amendment rights and would fail any form of scrutiny as there is no legitimate state interest in ensuring equal access to speech on private social-media platforms. It submitted that the intention behind the legislation – to counter the perceived bias of “Big Tech” – should subject the entire Act to strict scrutiny.
The Court highlighted that although the drafters of the American Constitution could not have envisioned social media, “the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary when a new and different medium for communication appears”. [p. 3] It added that the effect of one of the basic principles is that “the government can’t tell a private person or entity what to say or how to say it”. [p. 3] In discussing the nature of social media, the Court emphasized that social-media platforms are private entities, which means that “[n]o one has an obligation to contribute to or consume the content that the platforms make available” and that the converse of this is that “no one has a vested right to force a platform to allow her to contribute to or consume social-media content”. [p. 5] The Court also noted that while most of the content on social media is created by users and not the platform, there are some forms of speech that platforms engage in, including “terms of service or community standards specifying the type of content that it will (and won’t) allow on its site”. [p. 6] It also stressed that the platforms curate the information their users access and so exercise “editorial judgment” by removing content that infringes their terms or standards and arranging the information by prioritizing certain posts. The Court described platforms’ content moderation as them “develop[ing] particular market niches, foster[ing] different sorts of online communities, and promot[ing] various values and viewpoints”. [p. 7]
The requirements for injunctive relief are for the party seeking the injunction to demonstrate that, 1) they have a substantial likelihood of success on the merits; 2) that irreparable injury will occur without the injunction; 3) that “the threatened injury to the movant outweighs whatever damage the proposed injunction may cause the opposing party” and 4) that “if issued, the injunction would not be adverse to the public interest”. [p. 17] The Court focused on the question of whether NetChoice had demonstrated a substantial likelihood of success on the merits in respect of its claim that S.B. 7072 infringed the platforms’ First Amendment rights. It first determined whether S.B. 7072 “triggers First Amendment scrutiny in the first place – i.e. whether it regulates ‘speech’ within the meaning of the Amendment at all”. [p. 18] If it was held that “social-media platforms engage in First-Amendment-protected activity” the Court would then “determine what level of scrutiny applies”, and then finally “whether the Act’s provisions survive that scrutiny.” [p. 18]
The Court reiterated that social media platforms, as private companies, do have First Amendment rights and that when one “removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users – a judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site”. [p. 19] The Court referred to the rationale behind S.B. 7072 and stated that its drafters – in alleging the platforms’ “leftist bias” – accepted that “social-media platforms express themselves (for better or worse) through their content-moderation decisions.” [p. 19] Accordingly, the Court found that the content moderation activities of the platforms constitute speech under the First Amendment, and that laws that restrict that ability to moderate content – such as S.B. 7072 – therefore do trigger First Amendment scrutiny.
In assessing the jurisprudence on First Amendment infringement, the Court examined whether the social-media platforms’ speech fell within the Supreme Court’s “editorial judgment” cases or its cases “protecting inherently expressive conduct”. [p. 20] With reference to the cases of Miami Herald Publishing Co. v. Tornillo, Pacific Gas & Electric Co. v. Public Utilities Commission of California, Turner Broadcasting System, Inc. v. FCC, and Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, the Court held that “social-media platforms’ content-moderation decisions constitute the same sort of editorial judgments and thus trigger First Amendment scrutiny.” [p. 23] The Court confirmed that “[s]ocial-media platforms exercise editorial judgment that is inherently expressive”, and so when they “choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breachers of their community standards, they engage in First-Amendment-protected activity.” [p. 25] The Court noted that the different platforms use their editorial judgment to “cultivate different types of communities that appeal to different groups.” [p. 26] It held that “[a]ll such decisions about what speech to permit, disseminate, prohibit, and deprioritize – decisions based on platforms’ own particular values and views – fit comfortably within the Supreme Court’s editorial-judgment precedents.” [p. 28] In addition, it held that the decisions to remove or deprioritize posts “convey some sort of message” and so constitute expressive conduct. [p. 28] The Court noted that the fact that “observers perceive bias in platforms’ content-moderation decisions is compelling evidence that those decisions are indeed expressive.” [p. 29]
The Court rejected the State’s argument that because most (of the high volume of) content is not reviewed by the platforms, their content moderation decisions cannot constitute expressive conduct, holding that it is only the decisions which the platforms actually take that are relevant to whether the conduct is expressive or not. The Court also rejected the State’s argument that Supreme Court precedent obliges private entities to “host” others’ speech. It distinguished PruneYard Shopping Center v. Robins 447 U.S. 74 (1980) and Rumsfeld v. Forum for Academic & Institutional Rights from the present case, noting that neither involved a situation in which a speaker’s own First Amendment rights were curtailed. The Court disagreed with the State’s assertion that the Supreme Court’s editorial-judgment decisions establish three “guiding principles” which show that S.B. 7072 does not implicate the First Amendment. [p. 37] In respect of the first “guiding principle” submitted by the State – that “a regulation must interfere with the host’s ability to speak in order to implicate the First Amendment” – the Court confirmed that S.B. 7072 does interfere with the platforms’ ability to speak. [p. 37] The State’s second guiding principle was that “in order to trigger First Amendment scrutiny a regulation must create a risk that viewers or listeners might confuse a user’s and the platform’s speech” but the Court found that the principle “finds little support in our precedent” and “[c]onsumer confusion simply isn’t a prerequisite to First Amendment protection”. [p. 38] The third principle – “that in order to receive First Amendment protection a platform must create and present speech in such a way that a ‘common theme’ emerges” – was also rejected by the Court as not being present in the Supreme Court jurisprudence. [p. 39]
In rejecting the State’s argument that social media platforms are “common carriers”, the Court identified two possible interpretations of the State’s argument: that the platforms are already common carriers, and so have “no (or only minimal) First Amendment rights”, and that “the State can, by dint of ordinary legislation, make them common carriers, thereby abrogating any First Amendment rights that they currently possess.” [p. 41] The Court rejected both of these interpretations, finding that because the platforms require users to agree to their conditions and standards they are not open to all possible users, and that the Supreme Court has distinguished social media from television and radio broadcasters. The Court stressed that the Supreme Court precedent “demonstrate[s] that social-media platforms should be treated more like cable operators, which retain their First Amendment right to exercise editorial discretion, than traditional common carriers.” [p. 42-43] The Court also identified that, in the Telecommunications Act of 1996, Congress “explicitly differentiates ‘interactive computer services’ – like social-media platforms – from ‘common carriers or telecommunications services’.” [p. 43] In respect of the second interpretation, the Court stated “[n]either law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier.” [p. 43] The Court acknowledged the State’s argument that “large social-media platforms are clothed with a ‘public trust’ and have ‘substantial market power’,” [p. 44] but noted that the Supreme Court has “squarely rejected the suggestion that a private company engaging in speech within the meaning of the First Amendment loses its constitutional rights just because it succeeds in the marketplace and hits it big.” [p. 45-46]
In summary, the Court held that S.B. 7072’s content-moderation restrictions – including those which require the platforms to “remove (or retain) all content that is similar to material that they have previously removed (or retained)” and those which allow users to opt out of curated feeds – limit the platforms’ editorial judgment and so trigger First Amendment scrutiny. [p. 46-47] It also held that the disclosure provisions “indirectly burden platforms’ editorial judgment by compelling them to disclose certain information.” [p. 47]
However, the Court held that S.B. 7072’s user-data-access requirement, “which requires social-media platforms to allow deplatformed users to access their own data stored on the platform’s servers for at least 60 days”, did not hinder the platforms’ editorial judgment. [p. 48]
Having established that First Amendment scrutiny is triggered by S.B. 7072, the Court determined what level of scrutiny the provisions attracted. Content-neutral restrictions attract only intermediate scrutiny while content-based restrictions – those that “suppress, disadvantage or impose differential burdens upon speech because of its content” – are subjected to strict scrutiny. [p. 49] With reference to Rosenberger v. Rector and Visitors of Univ. of Va. 515 U.S. 819, 829 (1995), the Court noted that “[v]iewpoint laws – ‘[w]hen the government targets not subject matter, but particular views taken by speakers on a subject’ – constitute ‘an egregious form of content discrimination’,” and “’are prohibited’, seemingly as a per se matter.” [p. 49]
The Court disagreed with NetChoice’s argument that the “viewpoint-based motivation” [p. 50] of the legislative drafters should subject the entire Act to strict scrutiny, holding that there was an “absence of clear precedent enabling us to find a viewpoint-discriminatory purpose based on legislative history”. [p. 54] In respect of the content-moderation provisions, the Court said it was not necessary to “precisely categorize” the provisions because they would be unlikely to survive even intermediate scrutiny. [p. 56] In respect of the disclosure provisions, the Court found that they were content-neutral and so attracted only intermediate scrutiny.
The Court held that it was “substantially likely” that the content-moderation restrictions in S.B. 7072 “do not further any substantial governmental interest – much less any compelling one” and that there is no “substantial or compelling interest that would justify the Act’s significant restrictions on platforms’ editorial judgment”. [p. 58] The Court emphasized that there was no “vested right to a social-media account” and so there was no “legitimate – let alone substantial – government interest” in ensuring all voices are given space on social media platforms. [p. 59] With reference to the Miami Herald case, it added that “preventing ‘unfair[ness]’ to certain users or points of view isn’t a substantial government interest; rather, private actors have a First Amendment right to be ‘unfair’ – which is to say, a right to have and express their own points of view.” [p. 59] It emphasized that the difference between broadcasting entities in the past and social media platforms now is that “political candidates and large journalistic enterprises have numerous ways to communicate with the public besides any particular social-media platform that might prefer not to disseminate their speech.” [p. 60] The Court also held that there was a “substantial likelihood” that the provisions requiring consistency in moderation and the ability for users to opt out of moderation would “fail to advance substantial governmental interests”. [p. 60] Accordingly, the Court held that NetChoice had demonstrated that there was a substantial likelihood of success on the merits that the content-moderation provisions would violate the First Amendment.
In respect of S.B. 7072’s disclosure requirements, the Court applied the principles established in Zauderer v. Office of Disciplinary Counsel and held that it was “not substantially likely” that they violated the First Amendment – with one exception. This was because the Court found that the State’s interest – that users “are fully informed about the terms of [commercial transactions] and aren’t misled about platforms’ content-moderation policies” – was “likely legitimate” and that NetChoice hadn’t demonstrated a “substantial likelihood” that the provisions placed an undue burden on the platforms. [p. 63]
The exception – that is, the disclosure requirement that the Court held was “substantially likely” to be unconstitutional – was the requirement that “platforms provide notice and a detailed justification for every content-moderation action”. [p. 64] It found that this requirement would be “unduly burdensome and likely to chill platforms’ protected speech” – that is, their exercise of editorial judgment. [p. 64]
Having found that there was a likelihood of success on the merits, the Court then examined the remaining requirements for a preliminary injunction. It accepted that an “ongoing violation of the First Amendment … constitute[s] an irreparable injury” and that “neither the government nor the public has any legitimate interest in enforcing an unconstitutional ordinance”. [p. 66] Accordingly, it held that the factors weighed in favor of granting the injunction in respect of the provisions it deemed likely to be unconstitutional.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
Although the Court of Appeals for the Eleventh Circuit confirmed that social-media platforms have First Amendment rights and identified that restrictions on their ability to moderate content infringe those rights, the judgment was only a preliminary one, and the Court did not find all the impugned provisions of the state bill likely to be unconstitutional.