Global Freedom of Expression

Oversight Board Case of Post in Polish Targeting Trans People

Closed
Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    January 16, 2024
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2023-023-FB-UA
  • Region & Country
    Poland, Europe and Central Asia
  • Judicial Body
    Oversight Board
  • Type of Law
    Meta's content policies
  • Themes
    Facebook Community Standards, Objectionable Content, Hate Speech, Safety, Suicide and Self-Injury
  • Tags
    LGBTI, Oversight Board Content Policy Recommendation, Oversight Board Enforcement Recommendation, Facebook, Satire/Parody


Case Analysis

Case Summary and Outcome

On January 16, 2024, the Oversight Board overturned Meta’s decision to keep up a Facebook post depicting curtains in the colors of the Transgender Pride flag with overlay text saying, “Curtains that hang themselves” and “spring cleaning <3”, uploaded by a user whose biography stated, “I am a transphobe”. After publication, the content was reported 12 times by 11 different users under the Hate Speech policy and the Suicide and Self-injury policy. Only two reports were sent to human reviewers, who deemed the content non-violating; the others were closed automatically. Only one of three appeals was reviewed by a human moderator, who upheld the original decision. The Board considered that the failures in this case stemmed from Meta’s enforcement practices rather than from its policies, which it found sufficiently clear and precise. The Board held that the content violated Meta’s Hate Speech policy since it was violent speech targeting transgender people, and that it violated the Suicide and Self-injury policy since it encouraged and celebrated the high rates of suicide among transgender people. The Board recommended that Meta modify its internal guidance for reviewers to clarify that flag-based depictions of gender identity, even without any human figures, can be understood as representations of a group defined by the gender identity of its members. It also recommended that Meta clarify that the Suicide and Self-injury policy prohibits content encouraging the suicide of an identifiable group of people.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. Decisions, except summary decisions, are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

In April 2023, a Polish Facebook user posted an image of curtains in the colors of the Transgender Pride flag, with overlay text in Polish stating: “New technology. Curtains that hang themselves” and “spring cleaning <3”. The post received fewer than 50 reactions, the most used being “Haha”. The user described themselves as a transphobe in their biography.

Between April and May 2023, 11 different users reported the content 12 times. Ten of these reports were not prioritized for human review due to “low severity and virality scores”. However, two of the reports, under the Suicide and Self-injury policy, were sent to human reviewers who deemed the content non-violating and did not escalate it further. None of the reports made under the Hate Speech policy were sent for human review.

Three users appealed Meta’s decision to keep the content on Facebook. Only one appeal was reviewed by a human moderator who upheld the decision that the content did not violate Meta’s Suicide and Self-injury policy. The other two appeals, under the Hate Speech policy, were not sent for human review as Meta deduplicates multiple reports on the same piece of content to strengthen consistency in decisions and enforcement actions.

One of the users who originally reported the content appealed the decision to the Oversight Board. After the Board selected the case, Meta determined that the content violated the Hate Speech policy and the Suicide and Self-injury policy and removed the post. Moreover, Meta disabled the author’s account in August 2023 due to the accumulation of several violations of Facebook’s Community Standards.


Decision Overview

The main issue before the Oversight Board was whether Meta’s original decision to keep a Facebook post targeting transgender people, through speech advocating for the suicide of members of the group, was compatible with Meta’s content policies and human rights obligations regarding freedom of expression.

In their appeal to the Board, the reporting user noted that the person who posted the content had a history of harassing transgender people online and had created a new account after being suspended from Facebook. Moreover, the user emphasized the high rate of suicides in the transgender community.

As previously stated, Meta removed the post under Tier 1 of the Hate Speech policy since it considered the content was violent speech targeting people based on a protected characteristic. Meta explained that the policy’s internal guidelines instruct reviewers to remove violent content that includes calls to action, expressions of intent to cause harm, or statements that endorse or support causing death, disease, or injury, whether in written or visual form. Furthermore, the guidelines instruct reviewers to take into consideration all visual elements when deciding whether the content targets a group of people based on a protected characteristic.

Meta noted that its previous assessments of the content, which found it non-violating, aligned with a strict application of the internal guidelines. Meta held that the curtains resembling the Transgender Pride flag could be interpreted as an attack on a flag, concept, or institution, rather than on a group of people. However, Meta reached its subsequent removal decision based on the phrase “curtains that hang themselves”, which implicitly referred to suicide rates in the transgender community. Moreover, Meta explained that the user’s biography (“I am a transphobe”) violated the Hate Speech policy as an admission of intolerance based on a protected characteristic, an assessment that further clarified the user’s intent when posting the contested content.

In response to the Board’s question about whether the content violated the Suicide and Self-injury policy, Meta confirmed that the content indeed encouraged suicide and thus violated the policy. Additionally, Meta clarified that the policy does not differentiate between content encouraging the suicide of a specific person and content encouraging the suicide of a group of people.

The Board chose this case “to increase understanding of the content moderation process and to make recommendations to reduce errors and increase fairness for people who use Facebook and Instagram” [p.8] by analyzing whether the original decision to keep the content on Facebook was compatible with Meta’s content policies and human rights responsibilities.

 

Compliance with Meta’s Content Policies

1. Content Rules

Hate Speech

The Board found that the content in this case violated Meta’s Hate Speech policy as it included a call for a group defined by a protected characteristic to die by suicide. The Board agreed that the reference to hanging in the post targeted transgender people. Furthermore, the Board considered the broader context of violence (both online and offline) that members of the LGBTQIA+ community face in Poland. For the Board, it was clear that the post advocated for the suicide of transgender individuals, thereby promoting hate and exclusion and potentially leading to physical harm.

The Board also held that the post used dehumanizing language and imagery, exacerbating the mental health crisis among the transgender community, which already faces elevated risks of suicidal behavior. To the Board, the inclusion of the transgender flag made it clear that the post targeted transgender people, while the phrase “spring cleaning” and the heart emoji indicated support for their death.

The Board argued that the Hate Speech policy, and its internal guidelines, should better address malign creativity—the kind that employs ambiguous terminology, recurring visual and textual memes that depend on context, and various strategies used to evade detection on social media platforms. The Board considered that this malign creativity was present in the case at hand through the user’s employment of coded references to suicide, alongside a visual depiction of the transgender flag, to encourage self-harm.

The Board drew comparisons between this case and the Armenians in Azerbaijan case, in which it had highlighted the importance of context when determining whether speech targets people based on a protected characteristic. While the Board recognized that the earlier case arose in a context of war and conflict, it pointed, with regard to the case at hand, to the violence that transgender people suffer in Poland.

The Board expressed concern that Meta’s reviewers did not consider these contextual clues in the content, which is why they found it non-violating. Although the Board recommended revisions to the internal guidelines of the Hate Speech policy, it underlined that the post violated the policy as written at the time it was posted, since it supported the death of transgender people by suicide, a conclusion reinforced by the fact that the user identified as a transphobe in their biography. The user’s biography in itself, the Board said, violated Tier 2 of the Hate Speech policy, which prohibits self-admissions of intolerance based on protected characteristics. The Board stressed that Meta must improve the enforcement accuracy of its policies regarding hate speech towards the LGBTQIA+ community, especially when posts include images or text employing malign creativity.

The Board expressed further concern over Meta’s statement that the original decision aligned with a strict application of its internal guidelines. The Board held that such a statement showed that the internal guidelines fail to sufficiently address how text and images combine in a social media post to target a group based on its gender identity, hindering reviewers from reaching correct enforcement decisions. The Board recommended that Meta modify its guidelines to ensure that visual depictions of gender identity are taken into account when assessing content.

Suicide and Self-injury

The Board considered that the contested content violated the Suicide and Self-injury policy, which prohibits content that promotes or encourages suicide or self-injury. The internal guidelines, the Board noted, further define promotion as “speaking positively of”. The Board agreed with Meta’s subsequent conclusion that the content in this case encouraged suicide among a group defined by a protected characteristic.

Additionally, given the challenges reviewers faced in identifying the content in this case as violating, the Board noted that the Suicide and Self-injury policy should explicitly prohibit content that encourages the suicide of an identifiable group of people, and not only of an individual within that group, rather than leaving the two undifferentiated. The Board therefore recommended that Meta clarify this in its public-facing policy and in the internal guidelines for reviewers.

2. Enforcement Action

Regarding Meta’s automated review prioritization systems, the Board held that they significantly undermined enforcement in this case, since 10 of the 12 user reports of the content were closed by the automated systems. Moreover, two of the three users who appealed Meta’s decisions in this case had their appeals closed automatically.

The Board noted that the majority of user reports were closed due to Meta’s practice regarding multiple reports on the same piece of content. Although it acknowledged that deduplication is a reasonable practice when moderating content at scale, the Board observed that it places greater weight on the initial decision on a report, since that decision determines the fate of all other reports grouped with it.

The Board stressed that Meta needed to improve its automated enforcement systems to prioritize for review content that might impact the LGBTQIA+ community. It also highlighted that the user’s biography was a relevant signal for determining the severity of the content and the need for enforcement action.

The Board also expressed concern over Meta’s approach to content related to the LGBTQIA+ community, given that reviewers assessing appeals appear to have the same level of expertise as those conducting the first assessment. The Board underlined the importance of Meta investing in developing and training classifiers to prioritize for review hate speech affecting the LGBTQIA+ community, alongside “i) enhanced training on harms relating to gender identity for reviewers; ii) a task force on transgender and non-binary people’s experiences on Meta’s platforms; and iii) the creation of a specialized group of subject-matter experts to review content related to issues impacting the LGBTQIA+ community.” [p. 17] The Board advised Meta to extend these practices to improve enforcement against hateful content affecting all protected-characteristic groups.

The Board also noted that the challenges present in this case were a result of the enforcement practices of the company, rather than the language of its policies. For the Board, there were five indicators of the harmful nature of the content in this case: “(1) the post’s references to ‘self-hanging curtains’; (2) the post’s reference to ‘spring cleaning <3’; (3) the user’s self-description as a ‘transphobe’ in a country context where high levels of hostility toward the LGBTQIA+ community are reported; (4) the number of user reports and appeals on the content; and (5) the number of reports and appeals relative to the virality of the content.” [p. 17] The Board raised concerns over Meta missing these indicators and suggested that the policies were under-enforced.

 

Compliance with Meta’s human-rights responsibilities

 The Board employed the three-part test outlined in Article 19 of the International Covenant on Civil and Political Rights (ICCPR) to determine whether Meta’s actions adhered to its human rights obligations concerning freedom of expression. This test stipulates that any restrictions on freedom of expression must meet three criteria: they must be prescribed by law (legality), they must pursue a legitimate aim, and they must be necessary and proportionate.

1. Legality (Clarity and Accessibility of Rules)

In international human rights law, the principle of legality requires rules to be clear and publicly accessible, both to those in charge of implementing them and to those subject to them. The Board found that Meta’s prohibitions on “violent speech or support” against groups with protected characteristics, whether in written or visual form, on statements claiming that a protected characteristic should not exist, and on speech advocating or inciting suicide and self-harm were adequately defined.

However, the Board noted that Meta should improve the enforcement accuracy of its policies by clarifying its guidance to reviewers. The Board stated that “Meta should clarify that visual depictions of gender identity, such as through a flag, need not depict human figures to constitute an attack under the Hate Speech policy. Meta also should clarify that a call for a group (as opposed to an individual) to commit suicide violates the Suicide and Self-Injury Policy.” [p. 18]

2. Legitimate aim

For a restriction on freedom of expression to be valid, the Board said, it must pursue one of the legitimate aims laid out in the ICCPR, such as protecting the “rights of others”. The Board had concluded in the Knin Cartoon decision, among others, that the Hate Speech policy pursues the legitimate aim of protecting people from the harm caused by hate speech. Additionally, the Board considered that the Suicide and Self-injury policy pursues the legitimate aims of protecting people’s right to the enjoyment of the highest attainable standard of physical and mental health, the right to life, and the rights to equality and non-discrimination.

3. Necessity and proportionality

The principle of necessity and proportionality holds that limitations on freedom of expression must be suitable for the fulfillment of their protective purpose, be the least intrusive option available to achieve this goal, and be proportionate to the interests safeguarded.  The Board employed the six-factor test stipulated in the Rabat Plan of Action to assess the risks associated with the content at hand and analyze whether the content merited deletion. The Rabat Plan is used to assess whether speech constitutes incitement by analyzing contextual elements, who the speaker is, their intent, the nature of the content, the extent of the expression, and its likelihood to cause harm.

The Board recalled its previous stance on the reclaiming of derogatory terms by LGBTQIA+ people (in the Reclaiming Arabic Words decision) and noted that this was not such a case. It stated that, while the use of curtains was not a recurring form of coded language targeting transgender people, malign creativity is often employed against them, and the contested content exemplified that trend. Even though the Board recognized that some people might find the content humorous, it stressed that humor and satire can be vehicles for hate speech, as was evident in the post at hand, which celebrated the high suicide rates among transgender people. Furthermore, linguistic experts consulted by the Board explained that the phrase “curtains that hang themselves”, in conjunction with an image of curtains in the transgender flag colors, was a play on words conveying both the idea of suicide by hanging and a transphobic slur. As for the phrase “spring cleaning”, the experts concluded that in certain contexts it refers to getting rid of all unwanted items and/or people. Moreover, the Board highlighted that the content did not include political or newsworthy expression, an important fact that distinguished this case from the Colombia Protests decision.

Upon assessing the user’s intent, the Board considered their biography, in which they admit to being a transphobe. Additionally, the Board noted that the post described the suicide of transgender individuals as “spring cleaning”, alongside a heart emoji. This, the Board considered, showed that the user intended to incite discrimination and violence. The Board also highlighted that the post not only encouraged transgender people to harm themselves but also incited others to discriminate and act violently towards them.

Finally, the Board underscored the significant offline risks that the Polish LGBTQIA+ community faced. It stressed the high level of persecution that the community suffers in Poland, which includes anti-LGBTQIA+ legislation, hostile attitudes from public officials, and an increased likelihood of suffering physical and sexual acts of violence.

In light of these arguments, the Board overturned Meta’s original decision to leave up the contested content on Facebook.

 

Policy Advisory Statement

1. Content Policy

The Board recommended that Meta modify its Suicide and Self-injury policy to clearly prohibit content that promotes or encourages the suicide of an identifiable group of people.

2. Enforcement

The Board recommended that Meta update its internal guidelines to ensure that reviewers understand that flag imagery representing a gender identity, even without human figures, can symbolize a group defined by that identity.


Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

While the Oversight Board’s decision to remove the contested content in this case restricts speech, it does so to protect the rights of a marginalized group. Restricting hate speech is common practice under international human rights law, as the risk of harm associated with such speech outweighs the value of the expression. Accordingly, in this case, the Board properly and thoroughly balanced the interests at stake, limiting freedom of speech to protect the rights of transgender people.

Global Perspective

Quick Info

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • ICCPR, art. 19

    The Board analyzed Meta’s obligations towards freedom of expression as laid out in this article, and relied on it to apply the three-part test in its analysis.

  • ICCPR, art. 6

    The Board referred to this article to highlight the right to life as one of the legitimate aims protected by the Suicide and Self-injury policy.

  • ICCPR, art. 2

    The Board referred to this article to highlight the right to equality as one of the legitimate aims protected by the Suicide and Self-injury policy.

  • ICCPR, art. 26

    The Board referred to this article to highlight the right to non-discrimination as one of the legitimate aims protected by the Suicide and Self-injury policy.

  • ICESCR, art. 12

    The Board referred to this article to highlight the right to the enjoyment of the highest attainable standard of physical and mental health as one of the legitimate aims protected by the Suicide and Self-injury policy.

  • United Nations Guiding Principles on Business and Human Rights (2011)

    The Board referred to this document to analyze Meta’s human rights obligations.

  • OHCHR, Rabat Plan of Action on the prohibition of advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence (2012).

    The Board referred to the Rabat Plan of Action to analyze whether the contested content was hate speech.

  • UNHR Comm., General Comment No. 34 (CCPR/C/GC/34)

    The Board used this General Comment as a guide on how to apply the three-part test.

  • OSB, Reclaiming Arabic Words, 2022-003-IG-UA (2022)

    The Board referred to this case to analyze the impact of slurs towards marginalized groups.

  • OSB, Colombia Protests, 2021-010-FB-UA (2021)

    The Board referred to this case to underscore the protection of derogatory expressions in political contexts.

  • OSB, Knin Cartoon, 2022-001-FB-UA (2022)

    The Board referenced this case to highlight the legitimate aim of the Hate Speech policy, which is to protect people from harm.

  • OSB, Armenians in Azerbaijan, 2020-003-FB-UA (2021)

    The Board cited this case to stress the importance of contextual analysis when assessing whether a certain content targets a group based on protected characteristics.

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”
