Facebook Community Standards, Objectionable Content, Hate Speech/Hateful Conduct, Safety, Bullying and Harassment
Oversight Board Case of Gender Identity Debate Videos
United States
Closed. Expands Expression
The Oversight Board upheld Meta’s decision to leave up two video posts referring to transgender individuals in derogatory terms under its Hateful Conduct and Bullying and Harassment policies. One Facebook post showed a transgender woman being confronted for using a women’s bathroom, while the other featured a video of a transgender girl winning a race, accompanied by a caption misgendering her. In both cases, Meta found no policy violations. A majority of the Board found the posts did not amount to direct attacks or incitement under applicable policies or international human rights standards. However, a minority disagreed, concluding that the posts met the threshold for removal due to the risk of discrimination and harm they posed, especially given the visibility of the content and its targeting of a minor. The Board issued policy recommendations to improve protections for LGBTQIA+ users and children, and called on Meta to conduct human rights due diligence on its recent policy changes to the Hateful Conduct Policy.
The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
This case concerns two posts, published in the United States, containing videos in which transgender individuals are referred to in derogatory terms.
In the first case, a Facebook video shows an identifiable transgender woman being confronted for using a women’s bathroom at a U.S. university. The person filming questions the woman’s presence in the bathroom, expressing safety concerns. The caption refers to the woman as a “male student who thinks he’s a girl” and asks why “this” is tolerated. The post was viewed over 43,000 times. Although nine users reported the content, Meta determined it did not violate its policies. One of the reporting users subsequently appealed the decision to the Oversight Board (OSB).
The second case concerns an Instagram video showing a transgender girl winning a track race, “with some spectators disapproving of the result.” [p. 5] The caption identifies the minor by name, refers to her as a “boy who thinks he’s a girl,” and uses male pronouns. This video was viewed approximately 140,000 times and was reported by one user. Meta again found no policy violation. The reporting user appealed to the Board.
The main issue before the Oversight Board was Meta’s approach to moderating discussions about gender identity on its platforms. The Board assessed whether the derogatory language in the posts was compatible with the company’s policies, namely the Hateful Conduct and Bullying and Harassment policies, and with its human rights responsibilities, including freedom of expression.
The user appealing the bathroom post argued that Meta was allowing a transphobic message to remain on the platform. The user appealing the athletics post contended that the content attacked and harassed a minor. Neither user appeared in the posts under review. Meta notified the users who originally shared the posts about the Board’s review and invited them to submit statements; no responses were received.
In its submissions to the Board, Meta maintained that neither the bathroom post nor the athletics post violated its Hateful Conduct or Bullying and Harassment policies, and it decided to leave them on Facebook and Instagram. In the case of the bathroom post, Meta found the content too ambiguous to qualify as a call for exclusion under the Hateful Conduct policy and emphasized that enforcing the policy against indirect or implicit attacks would hinder debate about gendered access to public spaces. Meta explained that its January 7, 2025 update to the Hateful Conduct policy publicly clarified the acceptability of sex- or gender-based exclusion in settings like bathrooms and sports, making enforcement more transparent. The company also did not interpret the misgendering in the bathroom post as denying the existence of a protected group. Under the Bullying and Harassment policy, Meta found no violation because the transgender woman targeted did not report the content herself, which is required for enforcement in cases involving adult private individuals. Meta acknowledged potential alternatives to the self-reporting rule but expressed concerns about over-enforcement and the difficulty of validating third-party claims.
Regarding the athletics post, Meta again determined there was no call for exclusion and described the post as critiquing the broader concept of transgender girls participating in sports rather than attacking an individual. The company maintained that its revised policy now explicitly permitted such discourse under the exemption for gender-based exclusions in sports. Under the Bullying and Harassment policy, Meta classified the targeted minor as a voluntary public figure due to her media presence and prior public statements about her gender identity, making her ineligible for Tier 3 protections against misgendering. Meta added that even if either post had violated its policies, it would have remained online under the newsworthiness allowance, given its relevance to major political debates and the public interest.
The Board first observed that its examination of these cases occurred amid a growing global debate over gender identity, one that became especially charged during the United States’ 2024 presidential election. Following the election, the incoming Trump administration introduced a series of policies that significantly impacted transgender communities. In those debates, some individuals advocated for open dialogue on these matters as part of the right to freedom of expression, without necessarily endorsing the policies themselves.
The Board noted that in January 2025, Meta introduced updates to its rules on hate-related content, which are now part of the Hateful Conduct policy. The OSB emphasized that since the content examined in these appeals remained available on Meta’s platforms, its analysis considered both the policies in force when the content was originally shared and the subsequent changes. Those changes gave users greater latitude to use “sex- or gender-exclusive language” when discussing “access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement or teaching roles, and health or support groups.” The previous version of the policy had prohibited statements denying the existence of transgender people, a rule removed in the update, while the ban on calls or support for exclusion or segregation on the basis of sex or gender identity was retained, subject to the new allowances.
The Board analyzed Meta’s decisions in these cases considering Meta’s content policies and human rights responsibilities. It also assessed the implications of these cases for Meta’s broader approach to content governance.
1. Compliance with Meta’s Content Policies
Following the January 7 policy changes to the Hateful Conduct policy, the OSB held that neither post violated this community standard. For content to be considered in violation, it must meet two criteria: first, it must include a “direct attack” as defined by the prohibitions listed in the “Do not post” section; and second, it must be directed at individuals or groups based on one of the protected characteristics outlined in the policy. In the cases reviewed by the Board, the absence of a qualifying “direct attack” meant that the posts did not breach the updated standard. The OSB also confirmed that “gender identity” continues to be listed as a protected category under the policy.
In reviewing the posts under the previous version of the Hate Speech policy, which was in effect before January 7, the Board considered whether the content amounted to either of the following prohibited forms of expression: (1) statements that deny the existence of transgender people or their identities, and (2) calls to socially exclude transgender individuals.
The majority of the Board did not find a violation under this version either, because neither post contained a “direct attack” against people based on their gender identity, a protected characteristic. A minority, on the other hand, argued that both posts would have violated the policy’s pre-January 7 version.
For the majority of the Board, neither post would have broken the previous policy’s rule against “statements denying existence,” a provision removed in Meta’s January policy revision. Under the former rule, a violation required a clear and unequivocal claim denying the existence of transgender individuals or identities (for example, asserting that transgender people do not exist, that no one can be transgender, or that anyone identifying as transgender is inherently mistaken). In the posts at issue, the users commented on the biological sex of the individuals in the videos, implying that they only “believe” themselves to be female. While such remarks may demonstrate a lack of respect for the individuals’ gender identity and could be perceived as offensive, they do not explicitly or implicitly deny the existence of transgender identities. At most, the posts suggest disagreement with the notion that gender identity should override biological sex in determining eligibility for women’s sports or access to female-only spaces. Although these views may be contentious, expressing them did not breach the former Hate Speech policy.
The Board observed that Meta retained its ban on calls for social exclusion in the January 7 update to the Hateful Conduct policy. However, the revised policy introduced clearer allowances for sex- or gender-based exclusions from certain contexts, such as health and support groups, as well as from spaces traditionally segregated by sex or gender, including restrooms and sports competitions. The accompanying policy rationale was revised to reflect Meta’s intent to permit gender- or sex-specific language in discussions related to these settings.
A majority of the Board concluded that neither of the two posts in question amounted to a call for exclusion under the previous version of the Hate Speech policy. In the video involving a transgender woman in a bathroom, the speaker did not urge her removal or exclusion, either in the moment or going forward, the majority opined. Instead, the person filming posed a question—“Do you think that’s OK?”—which, while possibly intrusive or disrespectful, did not constitute a demand for exclusion. Similarly, in the post about the transgender athlete, there is no appeal for her to be barred from the competition. The content simply portrayed her participation and victory, raising—implicitly—a broader question of fairness. Engaging in public debate over the inclusion of transgender athletes or questioning the eligibility of a specific competitor did not, in and of itself, breach the policy, according to a majority of the Board.
The Board also highlighted that, prior to the January changes, Meta’s internal reviewer guidance already permitted gender-based exclusion in relation to sports and, in certain cases, bathrooms. The increased clarity and transparency brought by the January revisions were considered a positive step in making these standards more accessible and understandable.
On the Bullying and Harassment Community Standard, the Board found by consensus no violation for the bathroom post, since the adult transgender woman would have had to self-report the content for it to be assessed under the rules prohibiting “claims about gender identity” and “calls for … exclusion.” Self-reporting is not required for minors (aged 13 to 18) unless Meta considers them a “voluntary public figure.” The majority of the Board agreed with Meta that the transgender athlete, a minor, was a voluntary public figure who had engaged with her fame, although it reached that conclusion for different reasons. For these Board members, the athlete voluntarily chose to compete in a state-level athletics championship, in front of large crowds and attracting media attention, having already been the focus of such attention in earlier competitions. Therefore, the additional Tier 3 protections, including the rule prohibiting “claims about gender identity,” did not apply.
A minority disagreed, arguing that the transgender athlete should not be treated as a voluntary public figure. In its view, public figure status should not attach to a child merely because she chose to participate in an athletics competition; the media attention that followed was driven by her gender identity, which was not within her control, and does not amount to voluntarily engaging with celebrity status. For this minority, the post therefore violated the rules against “claims about gender identity” and “calls for exclusion” under the Bullying and Harassment policy and should have been removed. The minority agreed with Meta that “claims about gender identity” include misgendering. As it underscored, the post directly stated that the transgender athlete was a “boy who thinks he’s a girl” and used male pronouns; these were claims about gender identity targeting an identifiable child in order to harass and bully her, and as such violated the policy.
2. Compliance with Meta’s Human Rights Responsibilities
The majority of the OSB held that allowing both posts to remain online aligned with Meta’s obligations under international human rights law. However, a minority of the members held a different view, arguing that Meta should have removed the content. In reaching its conclusion, the OSB assessed Meta’s actions in light of its responsibility to uphold freedom of expression without discrimination, as enshrined in Article 19 of the International Covenant on Civil and Political Rights (ICCPR).
Legality (Clarity and Accessibility of the Rules)
The principle of legality under international human rights law requires that restrictions on expression be clear, accessible, and precise, offering sufficient guidance to both users and those enforcing the rules. This includes ensuring that private actors like Meta adopt clear and specific policies for governing online speech. In this case, the Oversight Board held that Meta’s updated Hateful Conduct policy met the legality standard, as it was formulated with adequate clarity and accessibility for both users and content reviewers.
Legitimate Aim
Any restriction on expression must pursue a legitimate aim under the ICCPR, such as protecting the rights of others. In previous decisions, such as the Knin Cartoon case, the Board concluded that Meta’s Hateful Conduct (formerly Hate Speech) policy met this requirement, as it was designed to prevent users from feeling attacked based on who they are and to deter harm that could lead to exclusion or offline violence. Similarly, in the Pro-Navalny Protests in Russia case, the Board held that the Bullying and Harassment Community Standard served the same legitimate aim by protecting users from emotional distress and psychological harm, which could undermine their freedom of expression and right to health. In cases involving children, the OSB had already stressed, in Iranian Make-up Video for a Child Marriage, that the best interests of the child must also be considered, in line with Article 3 of the UN Convention on the Rights of the Child (UNCRC).
Necessity and Proportionality
According to Article 19(3) of the ICCPR, any limitation on freedom of expression must meet the standards of necessity and proportionality. This means that restrictions should serve a legitimate protective purpose, be the least restrictive means available to achieve that purpose, and maintain a balance between the harm addressed and the impact on expression (see General Comment No. 34, para. 34).
Considering this, the OSB argued that public debate on transgender rights must remain permissible under international human rights law, even when the views expressed could be offensive or controversial. While the posts at issue may be hurtful or discriminatory, the majority of the Board found they did not pose an imminent risk of incitement to violence and therefore did not meet the high threshold required to justify a restriction under Article 19(3) or Article 20(2) of the ICCPR. The majority emphasized that suppressing such speech risks chilling legitimate discourse, marginalizing dissenting voices, and driving prejudiced views to less regulated spaces, potentially exacerbating intolerance. Offensive speech, unless directly threatening or inciting violence, remains protected, and restrictions must be narrowly tailored to avoid impairing public understanding and democratic engagement on sensitive issues like gender identity.
The OSB applied the Rabat Plan of Action to assess whether the two posts met the high threshold for incitement under the ICCPR. This framework, developed by the UN Office of the High Commissioner for Human Rights, offers a six-part test to assess when speech reaches the threshold of incitement to discrimination, hostility, or violence under international law. The test examines: (1) the broader social and political environment; (2) the identity and influence of the speaker; (3) the speaker’s intent to provoke harmful action; (4) the content and presentation of the message; (5) how widely it was disseminated; and (6) the likelihood and immediacy of resulting harm. While recognizing the broader context of discrimination and hostility faced by transgender people, the majority concluded that the posts lacked intent to incite harm, contained no calls—explicit or implied—for violence or discrimination, and did not come from a speaker with formal authority or a large-scale platform that would increase the risk of harm. As such, the content did not present a likely or imminent risk of discrimination or violence. The majority encouraged Meta to adopt alternative, non-restrictive measures to address intolerance, such as promoting counter-speech, limiting algorithmic amplification, or using educational tools to foster respectful public debate. As the Board opined, there may also be less intrusive means available to Meta to address concerns around intolerance short of total content deletion—such as removal of posts from recommendations or limits on interactions or shares.
With respect to the Bullying and Harassment policy, the majority of the Board acknowledged that its objectives differ from those of the Hateful Conduct policy, focusing more narrowly on preventing harm to individuals who are personally targeted. Nonetheless, the broad language of these rules creates the risk of capturing content that may be self-deprecating, satirical, or rooted in specific cultural contexts. To help prevent excessive enforcement, Meta requires self-reporting by the affected individual in certain cases and excludes public figures from protection against less severe forms of harassment. Although the tools for self-reporting are somewhat limited, they serve a useful function in ensuring that content is not removed unless the person concerned perceives it as harmful.
The Board expressed concerns about how Meta classified the teenager in the second case as a “voluntary public figure.” Still, given the athlete’s visible role in a high-level competition and the public discourse surrounding her transgender identity, the majority found it reasonable to expect some level of public scrutiny. Taking into account the evolving capacities of adolescents, as recognized under the Convention on the Rights of the Child, the majority concluded that it was appropriate to consider the teen’s autonomy in this context. The decision not to apply Tier 3 protections under the Bullying and Harassment policy reflects both this recognition of agency and the broader public interest in the discussion, without undermining the principle that a child’s best interests must be safeguarded.
A minority of the OSB considered that Meta should have removed both posts under its Hateful Conduct and Bullying and Harassment policies, arguing they met the threshold of incitement to discrimination, hostility or violence against LGBTQIA+ individuals as outlined in Article 20(2) of the ICCPR and the Rabat Plan of Action. The minority emphasized the worsening global climate of violence and discrimination against transgender people and highlighted that the posts misgendered individuals, relied on harmful stereotypes, and were shared by an influential account known for anti-LGBTQIA+ content. With high visibility and widespread engagement, the posts significantly increased the risk of harm, especially given Meta’s failure to adequately account for speaker influence and contextual dangers. The minority also criticized Meta’s low threshold for classifying children as “voluntary public figures,” which strips minors of key protections and allows targeted harassment, a risk particularly acute for LGBTQIA+ youth. The minority concluded that only content removal could have prevented further harm and that Meta’s approach failed to uphold its human rights responsibilities, especially regarding children’s rights and online safety.
The Board was unanimous in emphasizing that gender identity is a protected characteristic under international human rights law and should be treated accordingly in Meta’s policies. It raised concerns over Meta’s use of the term “transgenderism” in its Hateful Conduct policy, warning that such framing risks portraying transgender identity as an ideology rather than a personal identity, undermining the principles of equality and non-discrimination. To uphold human rights standards and ensure legitimacy, the Board said, Meta should use neutral, inclusive language in its policies, such as referring to “discourse about gender identity and sexual orientation.”
The Board also expressed concern over Meta’s failure to follow standard procedures for human rights due diligence in announcing and implementing its January 7, 2025, policy changes. Under the UN Guiding Principles on Business and Human Rights, Meta has an ongoing responsibility to assess and mitigate the human rights risks of its decisions. The lack of transparency about the process leading to these changes is problematic, especially given their global application. The Board urged Meta to publicly report on the impacts of these updates, including how different communities, such as women and LGBTQIA+ people, may be disproportionately affected, and to remain alert to risks of both over-enforcement and under-enforcement across its platforms, as previously recommended in Call for Women’s Protest in Cuba, Reclaiming Arabic Words, Holocaust Denial, Homophobic Violence in West Africa, and Post in Polish Targeting Trans People.
The Oversight Board upheld Meta’s decision to leave up the content in both cases.
Policy Advisory Statement
The OSB recommended that Meta strengthen its human rights due diligence following the January 7, 2025, updates to the Hateful Conduct policy. Specifically, Meta should assess how the changes may negatively impact LGBTQIA+ people, especially minors, adopt measures to prevent or mitigate such risks, and regularly monitor their effectiveness. Meta is also expected to update the Board every six months and report publicly on its findings. To align its policies with international human rights standards, the Board also urged Meta to remove the term “transgenderism” from its Hateful Conduct policy, noting that it frames transgender identity as an ideology rather than an identity.
On the topic of enforcement, the Board issued two recommendations aimed at improving protections under the Bullying and Harassment policy. First, Meta should allow users to nominate connected accounts to self-report violations on their behalf, easing the burden on those directly targeted. Second, Meta should enhance its systems to ensure that when multiple reports are received on the same content, the most appropriate report—likely to reflect the actual target—is prioritized. Any technological changes must account for potential negative impacts on vulnerable or marginalized users.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
The Oversight Board’s decision expands expression by affirming that public debate on gender identity, including controversial or offensive viewpoints, must remain permissible under international standards on freedom of expression. While recognizing the serious risks of discrimination and violence faced by LGBTQIA+ communities, the majority of the Board found that the two posts in question—though disrespectful and hurtful—did not cross the high threshold for hate speech or incitement as defined under Article 19(3) and Article 20(2) of the ICCPR and the Rabat Plan of Action. The decision reinforces the principle that protecting freedom of expression includes safeguarding spaces for dissenting and even troubling views, provided they do not incite imminent harm. By upholding Meta’s decision to keep the content online, the Board emphasizes the importance of open discourse on socially sensitive topics, while urging Meta to adopt less intrusive measures (such as counter-speech and de-amplification) to mitigate harms without stifling democratic debate.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”