Oversight Board Case of Posts Supporting UK Riots

Closed • Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    April 23, 2025
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2025-009-FB-UA, 2025-010-FB-UA, 2025-011-FB-UA
  • Region & Country
    United Kingdom, International
  • Judicial Body
    Oversight Board
  • Type of Law
    Meta's content policies
  • Themes
    Facebook Community Standards, Hate Speech/Hateful Conduct, Violence and Criminal Behavior, Violence and Incitement
  • Tags
    Incitement, Facebook, Oversight Board Enforcement Recommendation, Discrimination against Minorities

Content Attribution Policy

Global Freedom of Expression is an academic initiative and, therefore, we encourage you to share and republish excerpts of our content so long as they are not used for commercial purposes and you respect the following policy:

  • Attribute Columbia Global Freedom of Expression as the source.
  • Link to the original URL of the specific case analysis, publication, update, blog or landing page of the downloadable content you are referencing.

Attribution, copyright, and license information for media used by Global Freedom of Expression is available on our Credits page.

Case Analysis

Case Summary and Outcome

The Oversight Board overturned Meta’s original decisions to leave up three Facebook posts shared during the UK riots in the summer of 2024, following the murder of three girls in Southport. In the aftermath, widespread disinformation falsely claimed the perpetrator was a Muslim asylum seeker, fueling anti-Muslim and anti-immigrant sentiment that spilled into violent protests across the country. Although the posts were reported, Meta’s automated systems kept them on the platform. The Board found that each post posed a likely and imminent risk of harm, and that their removal was necessary and proportionate under international human rights standards, including the Rabat Plan of Action. While Meta eventually activated its Crisis Policy Protocol and designated the UK a High-Risk Location, the Board criticized the company’s delayed response and failure to promptly moderate harmful visual content, calling for clearer enforcement standards, particularly for image-based posts, and faster deployment of crisis interventions.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

Following a knife attack in Southport on 29 July 2024 that left three girls dead and ten injured, misinformation falsely claiming the attacker—a 17-year-old British citizen—was a Muslim asylum seeker quickly spread online, with one Facebook post shared over six million times. Despite police efforts to correct the narrative, anti-immigration and anti-Muslim riots erupted in 28 cities, leading to widespread violence, attacks on refugee centers, and over 100 police injuries. A court later lifted the attacker’s anonymity in an attempt to restore order.

Meta’s Oversight Board (OSB) reviewed three Facebook posts shared shortly after the Southport killings, all of which appeared to incite or support violence amid the escalating UK riots. The first post, shared two days after the attack, called for people to “smash mosques” and “do damage to buildings” where “migrants,” “terrorists,” and “scum” live, arguing that the riots were necessary for authorities to act and warning that the murdered “little girls” would not be “the last victims.”

The second post, shared six days after the attack, included an AI-generated image of a giant white man in a Union Jack T-shirt chasing smaller Muslim men, with the caption “Here we go again,” a time and place for a protest in Newcastle, and the hashtag “#EnoughIsEnough.”

The third post, shared two days after the attack, showed four bearded Muslim men in white kurtas chasing a crying blond child in a Union Jack T-shirt outside Parliament, with one man holding a knife and a plane flying toward Big Ben, referencing 9/11. The post’s caption read “Wake up” and featured the logo of a well-known anti-immigrant account.

Facebook users reported the three posts for violating the company’s Hate Speech (now Hateful Conduct) and Violence and Incitement policies. Meta’s automated systems initially found no violations and upheld the posts on appeal; the content was only reviewed by human moderators after the OSB selected the cases. Meta then removed the text-only post for violating the Violence and Incitement policy but upheld the other two. The OSB noted that Meta’s updated policies, including the renaming of “Hate Speech” to “Hateful Conduct” on January 7, 2025, applied retroactively to all content, and it therefore evaluated the cases under both the original and revised policies.

The users who reported the posts appealed to the OSB, arguing that the content was clearly encouraging people to attend “racist protests,” inciting violence against “immigrants” and “Muslims,” and urging far-right supporters to continue rioting. One user, who identified as an immigrant, stated they felt personally threatened by the post they reported.


Decision Overview

The main issue before the Oversight Board was whether Meta’s decisions to keep up three Facebook posts shared during the UK riots, which depicted or targeted Muslims in a dehumanizing and threatening manner, were consistent with the company’s content policies and its human rights responsibilities.

After the OSB brought the appeal to Meta’s attention, the company removed the text-only post for violating its Violence and Incitement policy due to its explicit calls to riot and attack mosques and buildings housing migrants. Nonetheless, Meta upheld the two other posts: one calling for a gathering, which it deemed protected political speech, and another falsely linking Muslim men to the Southport attack, which Meta said targeted specific individuals rather than an entire group. Meta submitted that, in response to the UK riots, it activated its Crisis Policy Protocol, designated the UK as a Temporary High-Risk Location from August 6 to 20, and applied additional safety measures, although it did not set up a real-time response center. The company used third-party fact-checkers to label false content and reduce its visibility, while coordinating internally through a cross-functional working group.

The users who reported the posts reiterated that the contested content incited racist violence against immigrants and Muslims and encouraged “far right supporters to continue rioting.” [p. 7]

1. Compliance with Meta’s Content Policies

Content rules

The Oversight Board found that the text-only post violated Meta’s Violence and Incitement policy, as it explicitly incited high-severity violence against both individuals and places associated with religion and immigration status. The language used, such as calls to “smash mosques” and attack buildings where “migrants” and “terrorists” live, could not be interpreted as hyperbole or casual speech. The post was published amid widespread riots, one day after violent attacks on a mosque and on police officers, which amplified its dangerous and inciting nature. In light of this, the OSB concluded that the post constituted a clear and serious threat to public safety.

In the case of the giant man post, the Board also found a violation of the Violence and Incitement policy. Though it contained no explicit call to violence, the imagery and text combined to convey a discriminatory and threatening message. Posted amid ongoing anti-Muslim riots, it depicted a giant white man chasing smaller brown men in Islamic attire and referenced a real gathering point, with the caption “Here we go again” signaling a call to action. The Board criticized Meta’s failure to recognize the violent implications of this post, noting that given the context and past riots, it should have been flagged earlier through human review and crisis protocols.

The third post, depicting four Muslim men and a crying child, was found to violate Meta’s Hateful Conduct policy. The OSB determined that it dehumanized Muslims by falsely portraying them as violent criminals and terrorists, invoking dangerous stereotypes and linking them to 9/11 imagery. It rejected Meta’s claim that the image referred to a specific individual involved in the Southport attack, noting that the attacker was not Muslim and the image bore no resemblance to the actual event. The Board emphasized that disinformation rooted in anti-Muslim bias could not be exploited to justify hate speech.

Enforcement actions

The OSB raised serious concerns about the two image-based posts and about how Meta moderates harmful content conveyed through imagery rather than text. The two cases highlighted the platform’s ongoing challenges in detecting visual hate speech and incitement to violence—an issue the Board flagged in prior cases (such as Case of Post in Polish Targeting Trans People, Planet of the Apes Racism, Hateful Memes Video Montage, The Media Conspiracy Cartoon, and Knin Cartoon). With the rise of AI-generated content, the barriers to creating persuasive and harmful imagery are rapidly decreasing, increasing the urgency for Meta to improve both its automated systems and human review processes, the OSB held. The Board emphasized that automated tools must be better trained, and that human review should be prioritized until those tools become more reliable in accurately identifying violations.

The OSB also criticized Meta for its delayed activation of the Crisis Policy Protocol during the UK riots, noting that a timelier response, especially in the immediate aftermath of the Southport attack, could have helped stem the spread of disinformation and violence. Although Meta eventually designated the UK as a Temporary High-Risk Location and implemented some restrictions, these measures came too late to prevent significant harm. The Board recommended that Meta define clear criteria for the rapid activation of crisis protocols and conduct ongoing assessments throughout a crisis to adapt its moderation efforts to evolving risks, including proactive content scanning and the deployment of regionally informed human reviewers.

2. Compliance with Meta’s Human Rights Responsibilities

The OSB found that removing all three posts was consistent with Meta’s content policies and its human rights responsibilities under international law. In line with Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects freedom of expression, the Board analyzed whether the removal was consistent with the three-part test, namely if it met the requirements of legality, legitimate aim, and necessity and proportionality. It also relied on the UN Guiding Principles on Business and Human Rights (UNGPs)—particularly Principles 13 and 17, which require companies like Meta to prevent and mitigate adverse human rights impacts. The Board emphasized that while Meta is not a state, it must still assess its decisions against international standards and justify any deviation from them.

Legality (Clarity and Accessibility of the Rules)

The principle of legality requires that rules restricting expression be clear, accessible, and precise, enabling individuals to understand and regulate their behavior accordingly. The UN Special Rapporteur on freedom of expression has emphasized that this standard applies to private actors like Meta, whose content rules must be both understandable to users and provide clear enforcement guidance to moderators. On this point, the OSB found that Meta’s Violence and Incitement policy lacked clarity, particularly in its prohibition of threats against places, which is not clearly communicated to users—an issue of special importance during the UK riots when locations associated with Muslims and immigrants were attacked.

Regarding Meta’s Hateful Conduct policy, the Board found its language sufficiently clear in prohibiting allegations that members of a protected group are violent criminals, as in the “four Muslim men” post. Yet, the Board criticized Meta’s distinction between broad generalizations (e.g., calling a group “terrorists”) and statements describing a group’s actions (e.g., saying they “murder”), arguing that both can be equally dehumanizing depending on the context. This nuanced but inconsistently applied distinction risks confusion for users and may contribute to arbitrary enforcement.

Legitimate aim

Restrictions on freedom of expression must serve a legitimate aim under the ICCPR, such as protecting public order or the rights of others. The Oversight Board has previously found that Meta’s Violence and Incitement policy aligns with these aims, particularly the right to life (Iranian Women Confronted on the Street and Tigray Communication Affairs Bureau), while the Hateful Conduct policy serves the recognized aim of protecting equality and non-discrimination (Knin Cartoon and Myanmar Bot)—both consistent with international human rights standards.

Necessity and Proportionality

Under Article 19(3) of the ICCPR, restrictions on freedom of expression must meet the tests of necessity and proportionality, meaning they must be the least intrusive means to achieve a legitimate aim, and appropriate to protect the rights at stake. As highlighted by the OSB, the UN Special Rapporteur on freedom of expression has emphasized that companies like Meta must assess these principles when taking content moderation actions, especially given the complexities of moderating hateful expression at scale. While political speech is highly protected, including controversial or offensive views, content that incites violence or dehumanizes vulnerable groups may justifiably be removed, as previously affirmed by the Oversight Board.

The OSB concluded that all three posts warranted removal under Meta’s rules and that this action was aligned with international human rights standards, particularly when analyzed through the framework of the Rabat Plan of Action. This framework, developed by the UN Office of the High Commissioner for Human Rights, offers a six-part test to assess when speech reaches the threshold of incitement to discrimination, hostility, or violence under international law. The test examines: (1) the broader social and political environment; (2) the identity and influence of the speaker; (3) the speaker’s intent to provoke harmful action; (4) the content and presentation of the message; (5) how widely it was disseminated; and (6) the likelihood and immediacy of resulting harm.

In this case, the posts were shared during a wave of riots marked by anti-Muslim and anti-immigrant violence, triggered by viral misinformation after the Southport attack. Far-right actors exploited this volatile moment to mobilize protests, including one outside a mosque that turned violent. The content of the posts, whether through explicit incitement to attack mosques or through dehumanizing imagery, clearly contributed to the hostile atmosphere. Though the speakers may not have been highly influential individuals, the environment of unrest and the virality of such content amplified the potential for harm.

Given the severity of the unrest and the direct link between these posts and the broader climate of violence and discrimination, the OSB found that no less intrusive measure—such as labeling or demotion—would have effectively mitigated the danger. In this context, removal was both a necessary and proportionate response to prevent further harm, consistent with Meta’s responsibilities under international human rights standards.

Enforcement

The Board expressed concern that even after it selected the cases for review, Meta maintained that two posts containing AI-generated imagery did not violate its policies. This suggests that Meta’s content moderators and policy teams may be relying too rigidly on formulaic checklists developed primarily for text-based content, without adequate guidance for interpreting visual material. Such an approach can result in inconsistent enforcement, particularly when assessing content alleging inherent criminality based on protected characteristics. The Board noted that this gap is especially problematic given the dominance of image and video content on social media and the growing use of AI-generated visuals.

While consistency is important in content moderation, the Board emphasized that it must not come at the cost of contextual accuracy, especially during crises like the UK riots, where the stakes involved real risks to life and safety. To the OSB, effective enforcement requires swift activation of Meta’s Crisis Policy Protocol and tailored guidance for moderators to assess content within the specific social and political context. The Board reiterated that understanding the broader implications of visual portrayals and disinformation was crucial to upholding user safety and rights during such volatile events.

The OSB also raised concerns about Meta’s handling of misinformation—particularly regarding the false identification of the Southport attacker. Despite Meta’s policy to remove misinformation likely to cause imminent harm, it is unclear how comprehensively this policy was enforced. Although some false posts were flagged and labeled by third-party fact-checkers, the Board remained concerned about the limited capacity of Meta’s fact-checking system and the unclear proportion of harmful content that was actually reviewed. As Meta considers transitioning to its Community Notes system, the Board urged the company to closely examine the limited success of similar programs on other platforms. For instance, during the UK riots, posts from five major accounts spreading disinformation received over 430 million views on X, yet only one post was labeled with a Community Note, underscoring the need for stronger and more responsive misinformation interventions.

Human Rights Due Diligence

Under the UN Guiding Principles on Business and Human Rights, particularly Principles 13, 17(c), and 18, companies like Meta are expected to carry out ongoing human rights due diligence when introducing significant changes to their policies or enforcement practices. This includes evaluating potential human rights impacts, consulting with affected groups, and providing transparency about the process. The OSB expressed concern that Meta’s policy and enforcement changes to the Hate Speech Policy announced on January 7, 2025, appeared to bypass its usual internal procedures and lacked any public explanation of whether human rights assessments were conducted in advance.

As these changes were now being implemented globally, the OSB emphasized the need for Meta to carefully evaluate and publicly report on how these updates may impact different communities, especially vulnerable groups such as immigrants, asylum seekers, and refugees. The company should consider both the risk of over-removal of legitimate speech, as seen in cases like the Call for Women’s Protest in Cuba, and the danger of underenforcing harmful content, as in the Holocaust Denial and Homophobic Violence in West Africa cases. The Board also highlighted the importance of learning from prior recommendations, such as those in the Criticism of EU Migration Policies and Immigrants decision, to help ensure that future policy updates are both rights-respecting and transparently implemented.

Considering the arguments presented above, the Oversight Board held that Meta should have removed all three posts.

Policy Advisory Statement

The OSB issued several recommendations to improve Meta’s content moderation and crisis response. It advised Meta to update its Violence and Incitement Community Standard to clearly prohibit high-severity threats against places, not just individuals. It also called for clearer internal criteria for identifying hateful conduct in visual content, aligned with existing text-based standards, to ensure consistent enforcement across formats.

To strengthen Meta’s crisis response, the Board recommended revising the activation criteria for the Crisis Policy Protocol by establishing core triggers for immediate activation. It also urged Meta to ensure potential high-risk content is flagged for human review and supported by timely, context-specific guidance for moderators. Finally, as Meta transitions to Community Notes, the OSB advised ongoing evaluations of its effectiveness compared to third-party fact-checking, with regular updates and public reporting on the results.


Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

The Oversight Board’s decision represents a mixed outcome for freedom of expression. While the removal of the three posts constituted a restriction on speech, the Board found that each post fueled disinformation and contributed to an imminent risk of violence, discrimination, and hostility, particularly targeting Muslims, immigrants, and asylum seekers during a period of escalating riots. Applying the six-part test of the Rabat Plan of Action, the Board determined that the content removals were a necessary and proportionate restriction to protect public order and the rights of others, including the right to life and non-discrimination.

However, the Board strongly criticized Meta’s enforcement practices and crisis response, particularly the delay in activating the Crisis Policy Protocol and the lack of preparedness to address harmful visual content. It expressed concern that Meta’s enforcement framework remains overly reliant on text-based assessment tools, resulting in inconsistent moderation of image-based content and inadequate responses during fast-moving crises. The Board called for clearer policy guidance, stronger safeguards for human rights due diligence, and urgent improvements to Meta’s content moderation systems, especially as AI-generated visuals become more prevalent.

Global Perspective

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”

