The Case of the Brazil Fake News Inquiry – 2

In Progress · Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    May 2, 2023
  • Outcome
    Blocking or filtering of information
  • Case Number
    Inq. 4781
  • Region & Country
Brazil, Latin America and Caribbean
  • Judicial Body
Supreme Court (court of final appeal)
  • Type of Law
    Civil Law
  • Themes
    Content Regulation / Censorship, National Security
  • Tags
    Fake News, Disinformation, Content-Based Restriction, Filtering and Blocking


Case Analysis

Case Summary and Outcome

In the context of the ongoing “Fake News Inquiry”, the Brazilian Supreme Court examined coordinated campaigns by Google, Meta, Spotify, and Brasil Paralelo against a legislative proposal to regulate online platforms, which the companies framed as the “Censorship Bill.” The Court found evidence of abuse of economic power and illicit contribution to disinformation networks, and ordered removal of the material, disclosure of advertising and algorithmic practices, adoption of preventive measures, and testimony of company executives before the Federal Police. The order imposed financial penalties for noncompliance, demanded transparency from the platforms, and integrated the case into the broader investigation into coordinated disinformation efforts threatening democratic institutions.


Facts

In March 2019, the Chief Justice of the Brazilian Supreme Court, Dias Toffoli, initiated a criminal inquiry into insults and threats against the Court and its members, citing Article 43 of the Court’s Internal Rules. Justice Alexandre de Moraes was appointed to oversee the inquiry, which became known as the Brazil Fake News Inquiry. The remit of the investigation was to examine the dissemination of fake news, the financing schemes behind it, false accusations, threats, and other illegal conduct affecting the security of the Supreme Court and its members. Over time, evidence was gathered of coordinated online attacks involving bots, disinformation campaigns funded by businessmen, and social media posts containing serious insults and threats of violence against justices and their families. In June 2020, the Supreme Court upheld the constitutionality of the inquiry (ADPF 572), recognizing it as a necessary institutional response to attempts to destabilize judicial independence, while stressing that it should be limited to speech posing actual risks to democratic institutions and the judiciary.

In 2023, within the scope of this same inquiry, new developments emerged following a report by the newspaper Folha de S. Paulo. The report, together with academic studies from NetLab at the Federal University of Rio de Janeiro (UFRJ) and from the State University of Rio de Janeiro (UERJ), indicated that Google, Meta, Spotify, and the Brazilian media company Brasil Paralelo had conducted campaigns against Bill 2630 (known as the “Fake News Bill”), circumventing their own rules on advertising and content moderation. According to these studies, the companies not only profited from irregular advertisements but also influenced search results and recommendation systems to negatively shape public perception of the proposed regulation. The findings suggested that they used their infrastructure to amplify partisan messages and frame the bill as the “Censorship Bill,” while concealing the financial flows and algorithmic mechanisms behind such campaigns.


Decision Overview

Justice Alexandre de Moraes was the rapporteur of the decision, delivered in the context of both Inquiry No. 4.781 (“Fake News”) and Inquiry No. 4.874 (“Digital Militias”). The central issue before the Court was whether the platforms’ actions – particularly Google’s campaign against Bill 2630 (“PL das Fake News”), disseminated through search results, advertisements, algorithmic promotion, and amplification of partisan sources – went beyond legitimate political advocacy and amounted to abuse of economic power and illicit contribution to disinformation networks.

Justice de Moraes noted that “the conduct of GOOGLE and the other platforms mentioned in the news report and in the UFRJ study is fully connected with Inquiry No. 4.781 (‘fake news’), as well as with Inquiry No. 4.874 (‘digital militias’).” [p. 2] He explained that Inquiry No. 4.781 investigates fraudulent news, false accusations, threats, and mass disinformation schemes aimed at undermining the security of the Supreme Court and the independence of the judiciary, while Inquiry No. 4.874 addresses organized “digital militias” operating as criminal associations against democracy and the rule of law.

Justice de Moraes also set out the broader institutional context. During the 2022 elections, the Superior Electoral Court (TSE) had already ordered the removal of hundreds of illicit pieces of content attacking electoral integrity. After the coup attempt of January 8, 2023, the Court had convened meetings with the major platforms to address their instrumentalization by digital militias, leading to the establishment of a TSE working group on regulation (TSE Ordinance No. 173/2023). Against this backdrop, Justice de Moraes stressed that “it is not credible, therefore, and especially after the 2022 elections and the coup attempt of January 8, 2023, that the providers of social networks and private messaging services are not fully aware of their instrumentalization by various digital militias”. [p. 3] [in uppercase in the original]

In his reasoning, Justice de Moraes engaged directly with the constitutional framework of freedom of expression. He distinguished its negative aspect – the prohibition of prior censorship – from its positive aspect – the right of citizens to express themselves – stating that “prior censorship is prohibited”. [p. 4] [in bold in the original] He emphasized that the Constitution nonetheless creates ex post accountability, including civil, administrative, and criminal liability, declaring that “[f]reedom of expression is not freedom to attack! Freedom of expression is not freedom to destroy Democracy, Institutions, and the dignity and honor of others!”. [p. 6] [in bold in the original] He reiterated his position, as set out in the earlier case ADI 4451, that any forced adjustment of content to state-imposed guidelines would be unconstitutional, as it would amount to illegitimate interference in the multiplicity of ideas essential for democracy. He invoked American Justice Oliver Wendell Holmes, referring to the “politics of distrust” as a necessary attitude in democratic societies to prevent arbitrary power. [pp. 4-5]

Justice de Moraes incorporated international law, referencing Article 13 of the American Convention on Human Rights, which prohibits prior censorship but permits subsequent responsibilities to protect rights, public order, and democracy. [pp. 5-6] Within this framework, he stressed that while dissenting, exaggerated, or unpopular opinions deserve protection, democratic order cannot tolerate speech that incites hatred, anti-democratic action, terrorism, violence against women, or crimes against children. He referred to German jurist Ferdinand Lassalle’s classic formulation of the “real factors of power,” recalling that private groups may legitimately seek to influence public debate, but only through “legal and morally acceptable mechanisms”. [pp. 7-8] Accordingly, Justice de Moraes concluded that when platforms resort to manipulative, opaque, or unlawful practices, their actions amount to abuse of economic power and expose them to liability.

Justice de Moraes examined the evidence before the Court. Citing the NetLab/UFRJ study, he noted that “the data suggest that Google has been using search results to negatively influence users’ perception of the bill”. [p. 8] [in bold in the original] That study had found that the platforms used “all possible resources to prevent the approval of Bill 2630 because what is at stake are the billions raised from digital advertising […] maintaining their competitive advantages over other media outlets that also rely on advertising”. [pp. 8-9] [in italics in the original] Justice de Moraes found that this was not mere lobbying: the companies deployed algorithmic manipulation, opaque advertising systems, and undisclosed amplification of partisan sources. He detailed multiple practices: Google labeling the bill “PL da Censura” (“Censorship Bill”) in its own blog and ads; Brasil Paralelo paying for ads categorized as news; Spotify hosting Google’s political ads against its own terms of service; Meta distributing Google’s ads without proper labeling; and YouTube issuing alerts to creators framing the bill as harmful. [pp. 9-14] These mechanisms represented “not only abuse of economic power on the eve of the bill’s vote by attempting to impact ILLEGALLY and IMMORALLY public opinion and the vote of parliamentarians, but also clear inducement and instigation to maintain various criminal practices carried out by the digital militias investigated in Inquiry No. 4.874, thereby aggravating the risks to the security of the Supreme Federal Court and the Democratic Rule of Law, whose protection was the very reason for initiating Inquiry No. 4.781.” [p. 14] [in uppercase in the original]

Accordingly, the Court ordered Google, Meta, Spotify, and Brasil Paralelo to remove all advertisements and content linked to their campaigns against Bill 2630, including those labeling it as the “Censorship Bill,” and imposed substantial daily fines for noncompliance. The companies were further required to submit detailed reports within 48 hours, disclosing the amounts invested, the advertising strategies used, and the algorithmic methods that promoted such content, as well as to explain why they had violated their own rules on political advertising and content moderation. [pp. 14-16]

The decision also compelled the platforms to provide evidence of concrete measures to prevent, mitigate, and remove unlawful practices across their services, particularly content amplified through paid promotion, inauthentic accounts, or artificial distribution networks. The Court specified that these obligations extended to the dissemination of anti-democratic acts, disinformation capable of undermining electoral integrity, threats of violence against public officials and institutions, hate speech, terrorism, and crimes against children and women. [pp. 16-17]

The Federal Police was ordered to hear the testimony of the presidents or equivalent representatives of the companies to clarify the reasons behind the adoption of the mechanisms described in the decision. [p. 17]

Note: Although this inquiry is confidential, the STF has been disclosing some of the rulings it issues. Even so, we do not yet have access to subsequent decisions or to information on whether the companies effectively complied with the orders.


Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

The decision represents a mixed outcome. On the one hand, it reinforces democratic safeguards by treating the conduct of platforms as potentially abusive and by reaffirming that freedom of expression does not shield hate speech, disinformation, or attacks against constitutional institutions. By imposing duties of transparency, accountability, and removal of harmful content, the Court sought to counter the instrumentalization of digital platforms by organized groups engaged in anti-democratic activities.

On the other hand, the ruling raises concerns about its scope and methodology. As in previous stages of the Brazil Fake News Inquiry, critics argue that the Court concentrated investigative, prosecutorial, and adjudicative functions in a single authority, stretching its jurisdiction and blurring the separation of powers. Moreover, the order to remove content and impose algorithmic disclosure obligations, while aiming to protect the democratic order, approaches the territory of prior restraint and lacks a detailed proportionality analysis. The inquiry has also been faulted for insufficient scrutiny of specific speech and for extending orders with global reach beyond Brazil’s jurisdiction.

The outcome therefore advances institutional efforts to curb disinformation and platform abuse while exposing significant risks: the concentration of powers in the Court, orders with global reach, and proximity to prior restraint, in ways that may undermine freedom of expression and the separation of powers.

Note: The inquiry itself remains confidential, but the Court has gradually released certain rulings, such as this one, while the investigation continues following its extension.

Global Perspective

Quick Info

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • American Convention on Human Rights, Article 13

National standards, law or jurisprudence

  • Brazil, Supreme Federal Court, ADPF 572 (2020)
  • Brazil, Supreme Federal Court, ADI 4451

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

Official Case Documents
