Facebook Community Standards, Objectionable Content, Hate Speech/Hateful Conduct
Oversight Board Case of Criticism of EU Migration Policies and Immigrants
Poland, Germany
Closed, Mixed Outcome
The majority of the Oversight Board found that two immigration-related posts shared on Facebook ahead of the June 2024 European Parliament elections violated Meta’s Hateful Conduct policy and should be removed. One post, published by a Polish political party, deliberately used a racial slur to provoke hostility against migrants; the other generalized immigrants as “gang rape specialists,” perpetuating dehumanizing stereotypes. Both posts were reported by users for hate speech. Meta initially found no policy violations and left the content online. Users then appealed to the Oversight Board. The majority found that, in this electoral context, removing the content was necessary and proportionate to protect the rights of affected groups. While the Board reaffirmed that freedom of expression is particularly vital in the context of political debate, it concluded that these posts, shared during a period of rising anti-migrant sentiment, posed a heightened risk of discrimination and harm. A minority of the Board disagreed, finding that while the posts were offensive, they did not meet the threshold for removal under international human rights standards, and that restricting them risked undermining legitimate political discourse. The Board recommended that Meta add the term murzyn to its Polish slur list, revise its internal guidance to presume generalizations about immigrants are harmful unless clearly limited, improve transparency in content moderation, and conduct and publicly report on human rights due diligence regarding its updated Hateful Conduct policy, particularly its impact on migrants, refugees, and asylum seekers.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. The Board issues full decisions and summary decisions. Decisions, except summary decisions, are binding unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
The Oversight Board (OSB) reviewed two Facebook posts related to immigration, published in the lead-up to the June 2024 European Parliament elections. In May 2024, the EU adopted the Pact on Migration and Asylum, introducing new rules for managing migration across the bloc. As a result, migration became a central issue in public and political debate across EU member states during the election period.
The first post appeared on the official Facebook page of Poland’s far-right political alliance, Confederation (Konfederacja Wolność i Niepodległość). It featured a meme depicting Polish Prime Minister Donald Tusk looking through a door’s peephole, while a Black man approaches him from behind. The accompanying Polish text reads: “Good evening, did you vote for Platform? I’ve brought the murzyn from the immigration pact.” “Platform” refers to Tusk’s political party, the Civic Platform coalition, while the “pact” refers to the EU Pact on Migration and Asylum. The term murzyn, historically used to describe Black people in Polish, is now widely considered derogatory. The post’s caption criticized the EU pact and urged voters to support Confederation in the European elections to prevent “uncontrolled immigration.” The post had approximately 170,000 views.
The second post came from a German Facebook page that identified itself as opposing left-wing groups. It included an AI-generated image of a blonde, blue-eyed woman holding up her hand in a stop gesture. The German-language text stated that “gang rape specialists” were no longer needed in the country, attributing the alleged threat to the Green Party’s immigration policy. The post also referenced a non-hyperlinked article titled “Non-German suspects in gang rapes,” hosted on the German Parliament’s website. This post received around 9,000 views.
Both posts were reported by users for hate speech. Meta initially found no policy violations and left the content online. Users then appealed to the Oversight Board. The appellant in the Polish case referenced academic sources arguing that the word murzyn is a pejorative term that perpetuates racial stereotypes and discrimination. The user appealing the German post argued that it implied all refugees are criminals and rapists.
On January 7, 2025, Meta revised its Hate Speech policy, renaming it the Hateful Conduct policy. Considering this, the Board assessed the cases under both the policy in effect at the time of posting and, where relevant, the revised policy.
The main issue before the Oversight Board was whether Meta’s decision to keep posts on Facebook that contained racial slurs or reproduced harmful stereotypes against immigrants was consistent with its policies and human rights responsibilities. These cases fell within the Board’s strategic priorities of Hate Speech Against Marginalized Groups and Elections and Civic Space.
In the decision, the Board assessed whether the contested posts violated Meta’s Hateful Conduct Community Standard. This policy, which mirrors its former “hate speech” policy, prohibits direct attacks on individuals based on protected characteristics such as race, ethnicity, and national origin, while treating immigration status as only partially protected. As a result, immigrants are shielded only from the most extreme forms of hate under Tier 1, which bans serious criminal allegations and slurs. On January 7, 2025, Meta clarified that political or religious speech—including expressions on immigration—may involve exclusionary or insulting language and is not always restricted. Under updated rules, less serious criminal accusations—previously banned under Tier 1—now fall under Tier 2, where protections do not extend to migrants. As noted by the Board, Meta’s internal guidance instructs that Tier 1 applies only when attacks target more than half of a group; thus, statements accusing most migrants of being criminals are prohibited, while claims about some migrants are allowed. The policy also continues to prohibit slurs, defined as language rooted in historical discrimination and oppression.
In its submission to the Board, Meta contended that neither post violated its revised Hateful Conduct policy, which is why it left them on the platform, noting that the January 7 policy updates did not affect its assessment, as rules on racial slurs and comparisons of migrants to violent criminals remained unchanged. In the case of the Polish post, Meta explained that the term murzyn is not classified as a slur in the Polish market, citing its historically neutral use and the risk of overenforcement due to its similarity to other words. As for the German post, Meta found it did not meet the threshold for a Tier 1 violation because it was unclear whether the statement referred to all, most, or only some migrants, and the article it cited did not support a generalizing attack.
While both posts could be interpreted as exclusionary, Meta stated that they did not breach its prohibition on “calls for exclusion,” as Tier 2 protections do not extend to immigration status.
The Board analyzed Meta’s decisions in these cases in light of Meta’s content policies and human rights responsibilities. The Board also assessed the implications of these cases for Meta’s broader approach to content governance.
1. Compliance With Meta’s Content Policies
The majority of the Board found that both posts violated the Hateful Conduct policy and should be removed from Facebook.
As for the Polish post, the majority of the OSB found that the term murzyn qualified as a discriminatory slur under Meta’s Hateful Conduct policy, as it targeted Black people based on race and created an atmosphere of exclusion and intimidation. Experts and civil society organizations highlighted the term’s frequent use in derogatory contexts, its association with inferiority and uncleanliness, and its harmful impact on Black communities in Poland. Although some still consider the term neutral, the Polish Language Council and major dictionaries have recognized its offensive nature. Its historical link to slavery further supported its classification as a slur. The Board emphasized that the views of marginalized groups on such terms are especially significant and urged Meta to engage more systematically with impacted communities when auditing its slur list. It also noted that, absent the slur, the post would have been allowed under Meta’s policies.
A minority of the Board disagreed, considering that the Polish post did not violate the Hateful Conduct policy. While the term might be seen as offensive and derogatory, that alone was insufficient to classify it as a banned term. For the minority, Meta’s policy required clearer evidence that use of the term inherently creates an atmosphere of exclusion and intimidation: not merely correlative ties to periods of historic discrimination, oppression, and violence (in other times and places), but evidence that its use has been, and continues to be, intrinsic to the infliction of those harms.
As for the German post, the majority of the Board found that it constituted a Tier 1 attack, since it generalized that the majority of immigrants were “gang rape specialists.” For the OSB, the characterization of immigrants entering the country as “gang rape specialists,” without any qualifying language (e.g., “some” or “too many”), clearly conveyed a generalized attack on all immigrants. Contrary to Meta’s assessment, the fact that the post included the website address (which was not hyperlinked and appeared in smaller text) of an article titled “Non-German suspects in gang rapes” did not affect this conclusion. Instead, it supported the majority’s view: the text in the post included only the title of the article, which, rather than conveying the nuances of the article’s fuller analysis, implied that “non-Germans” are generally the suspects in gang rapes.
For more accurate enforcement of the Hateful Conduct policy, the majority of the Board recommended that Meta reverse its default presumption that content is non-violating unless it clearly refers to more than 50% of a group. Under the recommended approach, an unqualified generalization such as “immigrants are gang rapists” would be presumed violating, and users posting content that could otherwise violate the Hateful Conduct policy would need to clearly indicate that they are targeting less than 50% of a group (e.g., “some immigrants are gang rapists”).
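To make the contrast between the two defaults concrete, the minimal sketch below models each as a simple decision rule. It is purely illustrative: the function names, the list of limiting qualifiers, and the substring check are hypothetical simplifications for exposition, not Meta’s actual enforcement logic, which relies on human reviewers and classifiers rather than keyword matching.

```python
# Illustrative sketch only -- not Meta's actual enforcement logic.
# Contrasts Meta's current default ("non-violating unless the attack
# clearly targets more than 50% of a group") with the majority's
# recommended default ("violating unless the user clearly limits the
# claim to a subset of the group").

# Hypothetical examples of language that limits a generalization;
# "some" and "too many" are the qualifiers named in the decision.
LIMITING_QUALIFIERS = {"some", "a few", "certain", "too many"}

def targets_subset(text: str) -> bool:
    """Crude stand-in for detecting language that limits a claim to
    less than half of the group. A real system would need far more
    than substring matching."""
    lowered = text.lower()
    return any(q in lowered for q in LIMITING_QUALIFIERS)

def violates_current_rule(text: str, clearly_majority: bool) -> bool:
    # Current internal guidance: only content that clearly attacks
    # more than half of the group counts as a Tier 1 generalization;
    # ambiguous scope defaults to non-violating.
    return clearly_majority

def violates_recommended_rule(text: str) -> bool:
    # Majority's recommendation: an unqualified generalization is
    # presumed to target the whole group and is therefore violating,
    # unless the text clearly limits the claim to a subset.
    return not targets_subset(text)

# The same ambiguous sentence flips from permitted to removable
# when the default presumption changes.
print(violates_current_rule("Immigrants are gang rapists", clearly_majority=False))  # False
print(violates_recommended_rule("Immigrants are gang rapists"))                      # True
print(violates_recommended_rule("Some immigrants are gang rapists"))                 # False
```

As the output shows, the practical effect of the recommendation is to shift the burden of clarity onto the speaker: unqualified generalizations are presumed violating, while clearly limited claims remain permitted.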
A minority of the Board found that, while the German post was offensive, it did not violate Meta’s Hateful Conduct policy, as it did not claim that all or most immigrants are gang rapists. They emphasized that the post engaged with a legitimate topic of public debate—immigration and crime—especially in an election context, and expressed concern that the majority’s approach could unduly burden users by requiring them to qualify their views excessively. They also noted that Meta’s recent policy changes aim to protect expression in such discussions.
2. Compliance With Meta’s Human Rights Responsibilities
The majority of the Board found that the removal of both posts was also consistent with Meta’s human rights responsibilities. A minority of the Board disagreed, finding that removal was not consistent with those responsibilities.
The OSB analyzed the compatibility of the content removal with Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects a wide range of expressions, including speech on political and public matters. Under this provision, any restriction must be clearly defined, serve a legitimate purpose, and be both necessary and proportionate—commonly known as the “three-part test.” The Oversight Board applies this standard when assessing Meta’s actions, using the UN Guiding Principles on Business and Human Rights (UNGPs) as a framework for evaluating corporate responsibility. While private companies are not bound by human rights law in the same way as governments, they are expected to identify, prevent, and address negative impacts on users’ rights, including freedom of expression. Where company policies diverge from international norms, businesses should clearly explain and justify those differences.
Legality (Clarity and Accessibility of the Rules)
The Board found that Meta’s Hateful Conduct policy met the principle of legality, as it was sufficiently clear and provided guidance for users and content reviewers. However, it raised concern over the global enforcement of a revised version of the policy that was only available in U.S. English for several months. This lack of timely translation meant users in other markets could not access the updated rules, undermining their accessibility. The OSB urged Meta to ensure policy updates are promptly and accurately translated across all languages.
Legitimate Aim
Any restriction on freedom of expression should pursue one or more of the legitimate aims outlined in Article 19, para 3, of the ICCPR, which include the “rights of others.” In several decisions (Knin Cartoon and Myanmar Bot), the Board has found that Meta’s Hate Speech (renamed Hateful Conduct) policy aims to protect the right to equality and non-discrimination, a legitimate aim that is recognized by international human rights standards. This continues to be the legitimate aim of the Hateful Conduct policy, the Board opined.
Necessity and Proportionality
Under Article 19(3) of the ICCPR, any restriction on expression must be necessary and proportionate, meaning it must be the least intrusive means to achieve a legitimate aim and proportionate to the interest protected. While political discourse, including controversial or offensive opinions on public affairs such as immigration, is highly protected, the majority of the Board found that the removal of both posts was necessary and proportionate. The Board referenced the Politician’s Comments on Demographic Changes decision as a relevant precedent to illustrate the distinction between controversial political speech—which may be protected under international standards—and content that crosses the line into hate speech or incitement. In that case, the Board found that although the views on immigration were provocative, they did not amount to dehumanizing language or incitement to violence against vulnerable groups, highlighting the importance of context and intent when evaluating expression.
Applying the Rabat Plan of Action’s six-part threshold test, which weighs the context, the speaker, intent, the content and form of the speech, the extent of its dissemination, and the likelihood of harm, the Board concluded that the posts posed a sufficient risk to the integrity of the electoral process by encouraging illegal conduct, thus justifying the restriction despite the high value generally afforded to political expression.
In the Polish case, the Board found that the term murzyn was used to denigrate people based on race, contributing to an environment of discrimination and exclusion on Meta’s platforms. The OSB’s majority emphasized that the slur was not employed in a permissible context, such as self-identification or condemnation of hate speech; instead, it was used to invoke racist sentiment and reinforce anti-migrant stereotypes. The post’s timing—during an election marked by heightened anti-migrant rhetoric—and its wide reach amplified its potential to incite harm. The majority drew parallels to the Depiction of Zwarte Piet case, noting that the discriminatory intent was more explicit in the present case. Experts consulted by the Board highlighted a rise in racially motivated violence in Poland, particularly targeting people of African descent, and the role of political parties in driving xenophobic discourse. Given the speaker’s political influence and the post’s potential to incite real-world harm, the OSB found that removal was necessary and proportionate, regardless of when the post was shared.
The Board found that the German post followed a similar pattern, as it was shared just before the elections in an environment of increasing anti-migrant sentiment and online hostility. The OSB’s majority found the post’s generalization that most immigrants are gang rapists to be degrading and dangerous, especially given the wider European context of rising xenophobia and its links to offline violence. UN and expert commentaries confirmed that dehumanizing rhetoric of this nature fosters fear and hatred, often leading to attacks against minority groups. While political parties have the right to campaign on sensitive topics like immigration, the Board emphasized they must do so without resorting to racial slurs or inflammatory generalizations. The Board also underscored the importance of Meta improving its enforcement and communication processes—both by clearly explaining removal decisions to users and by increasing the use of pre-posting prompts that help users reconsider potentially violating language. The OSB pointed to evidence that such measures can meaningfully reduce harmful content through user-led correction.
The majority of the Board underscored that social media platforms like Meta, unlike States, must make real-time decisions with incomplete information and cannot wait for violence or discrimination to become imminent before acting. Doing so would undermine Meta’s responsibility to prevent harm under the UN Guiding Principles on Business and Human Rights. Given the scale and unpredictability of online content, a more cautious moderation approach is justified. Referring to a previous decision from the Board in the South Africa Slurs case, the majority affirmed that Meta may remove hate speech that does not meet the threshold of incitement under Article 20 of the ICCPR, as long as such removals meet the Article 19(3) standards of necessity and proportionality. Allowing hate speech to accumulate—even when individual posts fall short of incitement—can create an unsafe environment for minorities, chilling participation in public discourse and diminishing pluralism, as the Board previously found in Depiction of Zwarte Piet, Communal Violence in the Indian state of Odisha, Armenians in Azerbaijan, and Knin Cartoon cases. In such cases, less severe interventions like warning labels are insufficient, and content removal may be necessary to protect human rights and prevent harm.
A minority of the Board found that removing the Polish and German posts was neither necessary nor proportionate. While acknowledging the posts could be offensive, they argued the content did not reach the threshold of incitement to likely and imminent violence, discrimination, or hostility. The minority critiqued the majority’s reliance on “cumulative harms,” asserting that this concept lacked grounding in international freedom of expression standards and stretched causation so far that it undermined the necessity and proportionality analysis. They emphasized that neither post called for violence or unlawful action, but rather engaged in political discussion, including electoral participation and immigration—matters of public concern. In their view, Meta should prioritize freedom of political expression, especially during elections, and rely on less intrusive moderation tools when addressing potentially harmful content. The minority warned that disproportionate removals can erode trust in content moderation and threaten democratic discourse, urging Meta to take guidance from the Rabat Plan’s emphasis on positive, non-censorial policy responses.
Access to Remedy
The Board expressed concern that users who reported the posts were not informed that their reports were not prioritized for review. It warned that, as Meta shifts to relying more on user reports for lower-severity violations, it must ensure these reports are fairly reviewed. If reports are not prioritized, users should be clearly informed that no review has occurred.
Human Rights Due Diligence
The Board raised concerns that Meta’s January 7, 2025, policy and enforcement changes were introduced without sufficient transparency or evidence of prior human rights due diligence, as required under the UNGPs. As these changes are now being implemented globally, the OSB emphasized the need for Meta to assess and publicly report on their human rights impact, particularly on vulnerable groups such as immigrants, refugees, and asylum seekers. Due diligence should account for the risks of both overenforcement and underenforcement.
Considering the arguments laid out above, the Oversight Board overturned Meta’s decisions to leave up the content in both cases.
Policy Advisory Statement
The Oversight Board issued several recommendations to improve Meta’s content policy and enforcement practices following the January 7, 2025, updates to the Hateful Conduct Community Standard. The Board instructed Meta to conduct robust human rights due diligence by identifying how the policy changes may adversely affect immigrants, especially refugees and asylum seekers, in high-risk contexts. Meta must then implement and monitor measures to mitigate those risks, report progress to the Board every six months, and make these findings public. The Board will consider this recommendation fulfilled once Meta shares credible data and analysis demonstrating the effectiveness of these mitigation efforts and publishes its findings.
The Board advised Meta to add the term murzyn to its Polish market slur list, and to ensure more meaningful engagement with affected communities and civil society when auditing slur lists. Additionally, Meta should revise its internal guidance to clarify that Tier 1 attacks—including those targeting immigration status—are presumed to affect the majority of a group unless clearly stated otherwise. These recommendations are meant to reduce harmful content and improve enforcement consistency.
Decision Direction

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
The Oversight Board found that both posts violated Meta’s Hateful Conduct policy and directed that they be removed from Facebook, thereby contracting expression. The Board concluded that the Polish post used a racial slur targeting Black people and that the German post amounted to a generalized attack on immigrants by characterizing them as “gang rape specialists.” The majority held that, although the content related to political speech on immigration, its removal was justified under Meta’s content rules and consistent with international human rights standards, as the posts posed a significant risk to the rights and safety of marginalized groups, particularly in the context of elections.
However, a minority of the Board disagreed, finding that the posts did not meet the threshold for removal under the Hateful Conduct policy and that this application of the rules was not aligned with international standards on freedom of expression. They argued that neither post incited violence or unlawful action and that the speech, while offensive, formed part of legitimate political discourse on matters of public interest. In their view, the removals were neither necessary nor proportionate under Article 19(3) of the ICCPR, and risked unduly restricting political debate, particularly in electoral contexts where expression is afforded heightened protection.
Global Perspective

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Case Significance

Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”