Oversight Board Case of Posts Displaying South Africa’s Apartheid-Era Flag

Closed · Expands Expression

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    April 23, 2025
  • Outcome
    Oversight Board Decision, Agreed with Meta’s initial decision
  • Case Number
    2025-001-FB-UA, 2025-002-FB-UA
  • Region & Country
    South Africa, International
  • Judicial Body
    Oversight Board
  • Type of Law
    Meta's content policies
  • Themes
    Facebook Community Standards, Objectionable Content, Hate Speech/Hateful Conduct, Violence And Criminal Behavior, Dangerous Individuals and Organizations, Referral to Facebook Community Standards
  • Tags
    Facebook, Oversight Board Content Policy Recommendation, Discrimination


Case Analysis

Case Summary and Outcome

Meta’s Oversight Board issued a decision upholding Meta’s decision to leave up two Facebook posts featuring images of South Africa’s 1928–1994 flag. Meta’s human reviewers had determined the content did not violate the Community Standards, and two users then appealed to the Oversight Board. The majority of the Board found that while the posts conveyed a racially insensitive message, they did not meet the threshold for violating the Hateful Conduct policy, as they did not clearly advocate racial exclusion or segregation, nor incite violence or discrimination. A minority of the Board disagreed, arguing that the use of the apartheid-era flag constitutes a direct and unambiguous symbol of support for racial segregation. The Board was unanimous in finding that both posts violated the Dangerous Organizations and Individuals policy, although it split on the reasoning: the majority viewed the posts as “unclear references” to white separatism, while the minority considered them to explicitly glorify that hateful ideology. However, the majority concluded that the likelihood of imminent discrimination or violence was low and that, in this case, the content should remain online. The Board also recommended that Meta clarify its policies by resolving conflicting language around references to hateful ideologies and by explicitly listing apartheid as a standalone hateful ideology under the Dangerous Organizations and Individuals policy.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. The Board issues full decisions and summary decisions. Decisions, except summary decisions, are binding unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies. 


Facts

In the run-up to South Africa’s general election in May 2024, two Facebook posts related to the country’s old, apartheid-era flag were reported to Meta.

The first post was a photo of a white male soldier holding South Africa’s old, pre-1994 apartheid-era flag, with an English caption urging users to share the content if they “served under this flag.” The post was “viewed around 600,000 times and shared around 5,000 times”. [p. 4] Three users reported the content for hate speech and violence, and human reviewers assessed all the complaints.

The second post referred to the “good old days”, included a grid of various photos, and said “read between the lines”, followed by a winking-face emoji and the “OK” emoji (a hand with three fingers up and the thumb and index finger together). The photo grid included “stock images taken during the apartheid era, including: the country’s former flag; an adult Black man on an ice cream bicycle with three white children standing next to him in a seemingly whites-only neighborhood; a public whites-only beach with a theme park; a South African board game; a packet of white candy cigarettes, and a silver toy gun.” [p. 4-5] That post was viewed around two million times and shared around 1,000 times. It was reported by 186 users, mostly for hate speech. The complaints were assessed by a mixture of human reviews and “automated systems and prior human review decisions.” [p. 5]

Meta found both posts to be non-violating and so the posts remained on Facebook.

For each post, one of the users who had reported it then appealed Meta’s decision to the Oversight Board.

On January 7, 2025, Meta announced that it was revising its Hate Speech policy, including renaming it the “Hateful Conduct” policy. The Hateful Conduct policy defines “hateful conduct” (as “hate speech” had been defined) as “direct attacks against people based on protected characteristics, which include race, ethnicity and national origin.” Tier 2 of this policy addresses “calls or support for exclusion or segregation or statements of intent to exclude or segregate” and prohibits calls or support for general, political, economic and social exclusion. Before the amendments to the policy, “general” exclusion was referred to as “explicit exclusion”. The introduction to Meta’s Community Standards confirms that content with “ambiguous or implicit language” may be removed.

The Dangerous Organizations and Individuals policy “seeks to ‘prevent and disrupt real-world harm’,” and permits Meta to remove content that glorifies “hateful ideologies.” The four listed ideologies are Nazism, white supremacy, white nationalism and white separatism; they are prohibited because they are “inherently tied to violence” and call for the exclusion of people based on their protected characteristics.

The Board took note that from 1948 to 1994 South Africa was governed by an apartheid regime that enforced racial segregation, and that during that period the national flag was orange, white, and blue. After apartheid ended in 1994, a new six-color national flag was adopted. The Board acknowledged that, despite this political change, deep-rooted socioeconomic inequality continues to affect the non-white population, fuelling ongoing racial tensions. In 2018, the Nelson Mandela Foundation approached domestic courts seeking a ban on the “gratuitous display” of the apartheid-era flag, arguing it constituted hate speech and racial discrimination. In 2019, the Equality Court ruled that such displays amount to hate speech, a decision upheld by the Supreme Court of Appeal in 2023, though exceptions were made for public interest uses.


Decision Overview

The Oversight Board delivered a decision that included majority and minority reasoning. The central question was whether Meta was correct in not removing the two posts.

The user who reported the first post, featuring the soldier, stated that “South Africa’s former flag is comparable to the German Nazi flag” and submitted that its display “incites violence” given the ongoing effect of apartheid. [p. 6] The user highlighted that the post was made during an election campaign.

The user who reported the post with the photos of the “good old days” said that the “use of the flag is illegal” and that the post “suggests apartheid was a ‘better time’ for South Africans”. [p. 6] The user stressed that the flag “represents oppression” and was “derogatory” and “painful” for the “majority of South Africans”. [p. 6]

Meta maintained that neither post infringed the Hateful Conduct or Dangerous Organizations and Individuals policies because neither post advocated exclusion of protected groups. It submitted that only content constituting a direct, rather than implicit, attack can be removed. Meta explained that its enforcement guidance instructs reviewers that the use of emojis violates the policies only if the reviewer can “confirm intent to directly attack a person or group on the basis of a protected characteristic”; the guidance does not include the “OK” emoji, and it permits reviewers to consider photos, video and text in the post in determining the meaning of the emojis. [p. 8]

Meta submitted that despite its post-apartheid use as a “symbol of Afrikaner heritage and apartheid”, the orange, white and blue flag was used before the apartheid government came into power and so had other meanings, “including South Africans’ connections to different aspects of that period such as personal experiences, military service and other aspects of citizenship”. [p. 8] Meta explained that only Nazism, white supremacy, white nationalism and white separatism are its designated “hateful ideologies”, but that separatist policies such as apartheid should fall under the white separatism designation. However, Meta accepted that the example it gave the Board of a post that would be removed, “apartheid is wonderful”, is not an example given to its reviewers. Meta maintained that neither post glorified or supported a hateful ideology.

The Board accepted 299 comments from the public, which addressed the history and politics of the old flag and its impact on “non-whites and efforts to build a multi-cultural South Africa”, as well as whether the public believed its display should be allowed on Meta’s platforms and how Meta should moderate the coded use of symbols. [p. 10]

The Board selected these cases to “address Meta’s respect for freedom of expression and other human rights in the context of an election, and how it treats imagery associated with South Africa’s recent history of apartheid”. [p. 11] It noted that the cases fell within “the Board’s strategic priorities of Elections and Civic Space and Hate Speech Against Marginalized Groups.” [p. 11]

The Board analyzed Meta’s decisions in these cases against Meta’s content policies, values and human rights responsibilities and assessed the implications of these cases for Meta’s broader approach to content governance. The Board confirmed that it would assess the posts and Meta’s compliance with its policies based on the policy’s content “at the time of posting, and, where applicable, as since revised”. [p. 12]

In examining the posts’ compliance with Meta’s Hateful Conduct Policy, the Board found that Meta’s prohibition on “calls or support for exclusion or segregation” based on protected characteristics could reasonably be interpreted in two ways. It noted that these interpretations were not affected by the January 7, 2025 policy revision. The first interpretation, followed by the majority, was a narrow one “requiring advocacy for exclusion or segregation” and recognized “Meta’s paramount value of voice”. [p. 11] The second interpretation, from the minority, was broader and focused instead on Meta’s value of dignity and would have interpreted the policy’s prohibition “to also encompass support for exclusion or segregation more generally”. [p. 11]

The majority found that neither of the two posts violated the Hateful Conduct policy, as they only displayed “nostalgia” for apartheid but did not “advocate reinstituting apartheid or any other form of racial exclusion”. [p. 11] It emphasized the reference to military service in the soldier post and noted that, “[n]otwithstanding how divisive or insensitive” the sharing of the flag was, there was no evidence that the post actually advocated racial exclusion or segregation. [p. 12] The majority said that the photos in the photo grid post could “feasibly evoke general nostalgia” and, although it accepted that the phrases “good old days” and “read between the lines” along with the emojis were “indicators of a racist message that change how the images alone would be perceived”, the post still did “not rise to the level of advocacy for the reinstitution of apartheid”. [p. 12]

The minority disagreed, and would have found both posts to be in violation of the policy. It stated that the apartheid-era flag is an “unambiguous and direct symbol of apartheid” and that when it is shared without condemnation “it is contextually understood in South Africa as support for racial segregation and exclusion”. [p. 12] It referred to the South African cases which have found the display of the flag to constitute hate speech and noted that the flag has been used globally by white nationalist movements. The minority would have found that the call for the soldier post to be reshared could only be interpreted as support for apartheid segregation, and that the inclusion of the flag in the photo grid without any condemnation was a violation. The minority noted that the “OK” emoji has white power connotations as it is “understood by white supremacists globally as covert hate speech and a dog whistle” because its symbol of three fingers and a thumb is seen as representing the letters W and P (and so “white power”). [p. 13] In addition, the minority said that the phrases and the winking emoji did enough to indicate support for racial segregation, even if a reader did not understand the connotations of the “OK” emoji. The minority stressed the importance of understanding “how the use of racist language and symbols online has adapted to evade content moderation” and how indirect messages are shared between “like-minded people”. [p. 13] It stated that “coded or indirect hate speech can be alarmingly non-ambiguous, even when it leaves literal statements of intended meaning unsaid”. [p. 13]

In respect of the Dangerous Organizations and Individuals Community Standard, the Board described Meta’s identification of “designated ideologies” as “vague” and found that both posts violated this standard. [p. 13] For the majority, neither of the posts explicitly glorified apartheid, but both qualified as indirect, coded support due to the presence of “indicators of a racial message”. [p. 14] The minority would have found that the posts glorified white separatism due to the use of the flag – “an inherent symbol of white separatism” – in the soldier post, and the flag, phrases and emojis in the photo grid post. [p. 14] The minority added that the fact that the posts were reported demonstrated that their context was understood.

The Board emphasized that the apartheid-era South African flag cannot be separated from the ideology of apartheid and white separatism. It added that while Meta has confirmed that its policy includes apartheid under the white separatist designation, its internal reviewer guidance lacks clarity and should be expanded with concrete examples to ensure consistent enforcement.

The Board examined whether the decisions were in compliance with Meta’s human rights responsibilities under the protection of freedom of expression in Article 19 of the International Covenant on Civil and Political Rights. The majority of the Board found that keeping both posts up on the platform was consistent with Meta’s human rights responsibilities, while the minority would have found that removal would have been consistent with these responsibilities. The Board referred to General Comments No. 34 and No. 35 in establishing that the three-part test of legality, legitimate aim, and necessity and proportionality should be applied in determining whether a restriction on expression (here, the removal of posts) is permissible.

The Board found that Meta’s content moderation policies lacked the clarity and precision required under the principle of legality in international human rights law. This principle demands that restrictions on expression be clearly defined so individuals can understand what is prohibited and content reviewers can apply the rules consistently. In respect of the Hateful Conduct policy, the Board identified confusion over whether “direct attacks” include implicit speech and whether “calls or support for exclusion or segregation” require explicit advocacy or cover broader expressions of support. It found that similar ambiguity exists in the Dangerous Organizations and Individuals policy, which inconsistently describes whether “unclear references” to hateful ideologies are prohibited or if only explicit content is actionable. Compounding this, Meta’s internal guidance did not provide adequate examples related to apartheid, making enforcement inconsistent and uncertain.

The Board also criticized the omission of apartheid as a standalone designation within Meta’s prohibited ideologies, despite its inherent connection to white separatism and white supremacy. It referred to the Rome Statute of the International Criminal Court and the Apartheid Convention, which confirm that apartheid is a crime under international law. While Meta’s policies reference white separatism, they fail to reflect how racially supremacist messages are often subtly conveyed – an oversight the Board found undermines effective content moderation, particularly in Global Majority contexts. This gap raised broader concerns about the global applicability of Meta’s enforcement framework, which appears to focus disproportionately on threats salient in Global Minority regions, such as white nationalism, while underrepresenting ideologies that incite racial exclusion and violence in the Global South. The Board recommended that Meta update its reviewer guidance with clearer definitions and relevant global examples, especially in relation to apartheid, to align enforcement practices with its stated human rights responsibilities.

In examining whether the restriction on expression served a legitimate aim, the Board noted that it has previously recognized that the Hate Speech Community Standard and the Hateful Conduct policy pursue the legitimate aim of protecting the rights of others, including the rights to equality and non-discrimination. In respect of the Dangerous Organizations and Individuals policy, it noted that seeking to “prevent and disrupt real-world harm” pursues the legitimate aim of protecting the rights of others, such as the right to life and the rights to non-discrimination and equality.

In assessing the necessity and proportionality of the decisions, the majority found that Meta’s decision to leave both posts online was consistent with the company’s human rights responsibilities. While acknowledging the painful legacy of apartheid and its continued effects in South Africa, it emphasized the heightened protection afforded to political expression under international law, especially during election periods. It found that the posts, although offensive, did not meet the Rabat Plan of Action’s threshold for incitement to likely and imminent discrimination or violence as neither post featured explicit calls to action, nor were they made by influential speakers. It added that the content was unlikely to lead to imminent harm and therefore remained within the bounds of protected expression, and referred to General Comment 37 which had stressed that the display of flags (and “uniforms, signs and banners”) should not be restricted even when they are “reminders of a painful past” unless they are used in direct incitement. [p. 18] The majority also highlighted South Africa’s “relatively stable representative democracy since the end of apartheid and its robust legal framework for protecting human rights” and that the experts it consulted noted that “white supremacist rhetoric” had not been a “major issue” during the 2024 elections and that the elections had not been affected by interracial violence. [p. 19]

The majority noted that the UN Special Rapporteur on freedom of expression has highlighted the value of measures that are less restrictive of speech than removal, and that private companies have more freedom than states to use such measures. It raised concerns about the vagueness of Meta’s “unclear references” rule under its Dangerous Organizations and Individuals policy and urged the company to ensure any removals under this standard are precise and necessary. It also advocated for the use of less intrusive content moderation tools, such as limiting the reach of harmful content, rather than resorting to removal. Drawing from previous Board decisions and guidance from UN experts, the majority underscored the risk of driving intolerant views to less regulated platforms, where they may fester. It encouraged Meta to adopt a more nuanced approach that better balances freedom of expression with the need to prevent discrimination, including through transparent and effective enforcement of its rules.

By contrast, the minority of Board Members concluded that both posts should have been removed. It argued that the content glorified a system of racial segregation and thus contributed to ongoing discrimination against non-white South Africans. It stressed that apartheid-era symbols like the flag have become global markers of white supremacy and are used by extremists in hateful contexts, including by the American Dylann Roof, who murdered Black churchgoers. Referring to public reactions to the posts – including comments under the posts in Afrikaans which “reveal a sense of white supremacy rooted in colonialism” – and past Board rulings, the minority said that allowing such content undermines the rights and dignity of those historically and presently affected by racial discrimination. [p. 21] The minority referred to its previous case concerning South African slurs in highlighting that the “particularities of the South African context” should be relevant in determining whether posts should be removed. [p. 21]

For the minority, removal would have been a proportionate and necessary step to protect equality and prevent the accumulation of hate speech that can chill the expression of marginalized communities.

The Board expressed concern that Meta’s January 7, 2025, policy and enforcement changes were introduced hastily and without the usual human rights due diligence required under the UN Guiding Principles on Business and Human Rights (UNGPs), particularly Principles 13, 17(c), and 18. These principles oblige companies to assess, prevent, and mitigate adverse human rights impacts of significant policy shifts through processes like stakeholder engagement. The lack of transparency around whether any due diligence was conducted prior to the rollout of the updates marks a departure from Meta’s stated procedures and raises accountability concerns. As these changes are now being implemented globally, including in countries with recent histories of atrocity crimes such as South Africa, the Board emphasized the urgent need for Meta to identify and address the human rights impacts of its new enforcement approach. This includes considering how policy changes may result in over- or underenforcement, with disproportionate effects on Global Majority communities. The Board recalled that Meta’s prior failures in Myanmar partly stemmed from an overreliance on automation and underutilization of user reporting. Given that in many parts of the world, affected users may not actively report harmful content, the Board urged Meta to engage with local stakeholders and strengthen due diligence to avoid repeating past harms.

Accordingly, the Board upheld Meta’s decisions to leave up both pieces of content.

The Board made several recommendations aimed at improving the clarity, transparency, and human rights compliance of Meta’s content policies and enforcement practices. It urged Meta to conduct ongoing human rights due diligence on the January 7, 2025 updates to the Hateful Conduct Community Standard, specifically assessing the risks these changes may pose to populations in Global Majority regions. It recommended that Meta adopt measures to mitigate any identified risks, monitor their effectiveness, and provide the Board with updates every six months. It also called for public reporting on these findings to ensure accountability.

To enhance the clarity of the Dangerous Organizations and Individuals Community Standard, the Board recommended that Meta consolidate its rules and exceptions regarding designated hateful ideologies into a single, comprehensive explanation. It called on Meta to explicitly list apartheid as a standalone hateful ideology, reflecting its global relevance and legal recognition as a crime against humanity. Finally, it recommended that Meta improve its internal guidance for content reviewers by providing global examples of prohibited support for hateful ideologies, including indirect or coded expressions that may not explicitly name the ideology but clearly promote it.


Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Expands Expression

The Board’s approval of Meta’s decision to leave up South Africa’s apartheid-era flag expands expression in line with international human rights standards. The posts, while potentially offensive or insensitive, did not meet the threshold for removal under Meta’s Hateful Conduct or Dangerous Organizations and Individuals policies when interpreted consistently with Article 19 of the ICCPR and the Rabat Plan of Action. The content did not amount to advocacy of hatred that constitutes incitement to discrimination, hostility or violence.

At the same time, the Board recommended that Meta improve the clarity and accessibility of its policies, particularly on designated hateful ideologies and the treatment of symbols such as the apartheid-era South African flag. As required by the principle of legality and the three-part test for permissible restrictions on freedom of expression under international law, content rules must be precise and foreseeable to both users and content moderators. To this end, the Board reiterated its recommendation that Meta adopt a clear and comprehensive explanation of how its Dangerous Organizations and Individuals policy applies to hateful ideologies, and explicitly designate apartheid as such.

Global Perspective

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”
