Global Freedom of Expression

Oversight Board Case of Violence Against Women

Case Status: Closed
Decision Direction: Expands Expression

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    July 12, 2023
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2023-002-IG-UA & 2023-005-IG-UA
  • Region & Country
    Sweden, Europe and Central Asia
  • Judicial Body
    Oversight Board
  • Type of Law
    International/Regional Human Rights Law, Meta's content policies
  • Themes
    Digital Rights, Instagram Community Guidelines, Hate Speech, Bullying and Abuse
  • Tags
    Oversight Board Content Policy Recommendation, Oversight Board Enforcement Recommendation, Oversight Board Transparency Recommendation, gender-based violence, Social Media


Case Analysis

Case Summary and Outcome

The Oversight Board overturned Meta’s decisions to remove two Instagram posts condemning and raising awareness about gender-based violence. Meta originally removed the posts because, according to the company, they violated its Hate Speech policy: the first contained a statement that two at-scale reviewers characterized as an unqualified behavioral statement, and the second contained a statement deemed an expression of contempt against men. The Board found that neither post violated the policy, as both aimed to raise awareness about violence against women and neither promoted offline harm nor created an environment of discrimination against men. The Board also decided that the posts aligned with Meta’s value of “Voice” because they sought to raise awareness. Additionally, the Board concluded that the removal of the posts was inconsistent with Meta’s human rights responsibilities, as the measure did not meet the requirements of legality, necessity, or proportionality. The Board recommended that Meta modify its policies to include an exception allowing content that raises awareness about gender-based violence.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


An Instagram user in Sweden posted a video with Swedish audio, and its transcription, depicting a woman explaining her experience in a violent relationship, without providing any graphic details of the violence she endured. The video was accompanied by a caption that highlighted victim-blaming in gender-based violence and provided a helpline for victims. The caption also mentioned the International Day for the Elimination of Violence against Women to show support for women, and further said, “men murder, rape and abuse women mentally and physically – all the time, every day.” The post was viewed 10,000 times.

After a Meta classifier flagged the content as potentially violating Meta’s Hate Speech policy, two reviewers removed the post and gave the user a strike. When the user appealed, the decision was automatically sent to a High Impact False Positive Override (HIPO) channel, which is tasked with identifying wrongfully removed content. The content was sent to the same two human reviewers who had initially examined it, and the decision was upheld. In total, those two reviewers examined the content seven times and found it violating each time. After the Oversight Board selected this case, Meta determined the removal was an error and restored the content. The strike was removed as well.

While assessing the first post, the Board received another appeal from the same user. The second appeal concerned an Instagram video of a woman speaking Swedish, saying that although she was a man-hater, she did not hate all men and that her feelings of hatred stemmed from fear of violence. She further compared men who committed violence against women to venomous snakes. In the video’s caption, the user asked men who were “allies” to help women in their struggles. The post was viewed 150,000 times.

Meta removed this post as well, arguing that it violated its Hate Speech policy, and applied a strike to the account that prevented it from creating live videos. The user appealed this decision. Meta upheld the removal, which led the user to appeal to the Board for the second time.

Decision Overview

The main issue before the Oversight Board was whether Meta’s removal of two posts discussing gender-based violence against women—using harsh language against men—abided by the company’s Hate Speech policy, values, and responsibilities under international human rights law.

In their first appeal, the user submitted that the removal of their post hindered an important discussion about domestic violence and that their intention was solely to show women who faced domestic violence that they were not alone. In the second appeal, the user highlighted that the goal of their post was to discuss the problem of violence against women, as perpetrated by men, and not to spread hatred against them.

Meta argued that the first decision was an error since the post did not violate its Hate Speech policy, as the full context of the post made the sentence “men murder, rape and abuse women mentally and physically – all the time, every day” a “qualified behavioral statement”. Meta reached this revised conclusion by considering several factors, such as the references to the International Day for the Elimination of Violence Against Women, to helplines, and to experiences of gender-based violence. All these factors showed that the user intended to raise awareness, which should have allowed the content to stay on Instagram.

Meta further clarified that “its policies generally do not grant reviewers discretion to consider intent” [p. 10] to ensure consistent and fair enforcement of its rules, and avoid bias, especially in cases of hate speech.

Regarding the second post, the company reasoned that it contained an expression of hatred towards men which violated its Hate Speech policy. Meta noted that the content might have violated other elements of the policy; however, the removal decision was made based on the expression where the user described themself as a man-hater.

The Board asked Meta 14 questions in writing about the criteria, internal guidelines, and automated processes for distinguishing qualified and unqualified behavioral statements; how the accumulation of strikes impacts users on Instagram; internal escalation guidelines for at-scale reviewers; and how at-scale reviewers evaluate context, intent, and the accuracy of statements. Meta answered all the questions.


  • Compliance with Meta’s content policies and values

1. Content Rules

The Board concluded that the first post did not violate any of Meta’s content policies. The Board further stated that while the statement “men murder, rape and abuse women mentally and physically – all the time, every day” might be understood in many ways, it constituted a qualified behavioral statement as the post was not a generalization about all men and helped reassure women who were victims of gender-based violence. As noted by the Board, qualified behavioral statements “use statistics, reference individuals, or describe direct experience […] unqualified behavioral statements [on the contrary] ‘explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.’” [p. 8]

The Board’s argument was further supported by the fact that the user referred to the International Day for the Elimination of Violence Against Women and included information about local support organizations. Within this context, the Board concluded that the first post was a non-violating qualified statement.

Some Board members also relied on the global context of violence against women in their analysis. They decided that the content raised awareness about a worldwide issue and was not a generalization about men. However, other members disagreed with considering the global context, worrying that such broad and contested considerations might lead to controversial interpretations of the meaning of hate speech. The majority of the Board did not rely on the societal phenomenon of violence against women, or the debate around its roots, in deciding that the statement amounted to a qualified behavioral statement.

The Board found that the second post did not violate Meta’s content policies either. Upon examining the post as a whole, the Board concluded that it was not an expression of contempt. While the user described themself as a “man-hater”, they made it clear that they did not hate all men, that their hate stemmed from fear, and that it was a way to condemn violence against women rather than an expression of hatred towards men. The Board considered the venomous snake analogy another indicator: as the user explained, even though most snakes are harmless, the fear of venomous snakes rubs off on all snakes. Some Board members disagreed and considered the second post an expression of contempt that violated Meta’s policies; a subset of those members believed the post should not be restored at all.

The Board considered that the second post was more complex than the first one. The first post, according to the Board, could have been easily recognized as a qualified behavioral statement while the second post required a nuanced analysis of the content as a whole. Ultimately, the Board concluded that the post, in light of its context, condemned violence against women. Thus, the majority of the Board decided to restore it.

The Board further agreed that the posts did not promote offline violence; hence, it could not be said that they violated the Hate Speech policy rationale. The Board found that the posts fell within Meta’s value of “Voice”, as they aimed to decrease offline violence against women. Therefore, the Board held that removing them was inconsistent with Meta’s values.

2. Enforcement action

Upon analyzing Meta’s decisions, the Board observed that the company’s appeal process, in which the same two reviewers assessed the same content seven different times, meant that the reviewers were reviewing their own decisions rather than having the content examined by fresh eyes. The Board expressed concern about the effectiveness of the appeal process and of HIPO reviews under this approach, recalling the Wampum belt case to reiterate its concerns regarding the accuracy of Meta’s review and appeal system.

The Board further stressed that there was too much pressure on at-scale reviewers to assess complex content, requiring nuanced analysis, in a short time—often mere seconds. The Board underscored its concern about the limited resources available to moderators and their capacity to prevent mistakes, by referencing the cases of Wampum belt and Two buttons meme.

3. Transparency

The Board welcomed Meta’s implementation of earlier Board recommendations to clarify and modify its strikes and penalty systems. However, it highlighted that Meta did not provide information about the consequences of Instagram strikes in its Transparency Center. While Meta shared some information about the penalties imposed on Instagram accounts that accumulate strikes in an Instagram Help Center article, the Board deemed that article less accessible and not comprehensive, as it did not mention all the penalties.


  • Compliance with Meta’s human rights responsibilities

As part of its analysis, the Board considered whether Meta’s actions fulfilled its human rights responsibilities, especially towards the right to freedom of expression. The Board relied on the United Nations Guiding Principles on Business and Human Rights to understand Meta’s responsibilities, and on the International Covenant on Civil and Political Rights (ICCPR) to analyze whether those responsibilities were fulfilled.

The Board emphasized the importance of the internet as a tool for women to express themselves and talk about their struggles, including gender-based violence. Moreover, the Board stressed that unlawfully restricting speech that raises awareness about gender-based violence would hinder the eradication of violence against women.

To assess whether Meta lawfully restricted the user’s freedom of expression, the Board applied the three-part test provided for in Article 19 of the ICCPR, which analyzes the legality, legitimate aim, and necessity and proportionality of restrictions.

1. Legality (clarity and accessibility of the rules)

The principle of legality, the Board noted, requires rules limiting expression to be clear and publicly accessible to provide guidance for those who apply them. The Board found that Meta’s approach when enforcing its Hate Speech policy failed to fulfill the principle of legality. Even though the Hate Speech Policy allows expressions raising awareness of gender-based violence, the Board determined that there was no guidance in the public-facing policy and internal guideline documents ensuring that such posts would not be removed mistakenly.

Regarding the first post, the Board noted that Tier 1 hate speech rules on qualification were applied. As the Board mentioned, once it notified Meta of the case, the company used contextual analysis to reverse its original decision; however, such analysis was not available to the at-scale reviewers, preventing them from reaching the right decision even though the content contained clear cues about its aims regarding gender-based violence.

Meta informed the Board that its internal guidelines instruct reviewers to default to removing behavioral statements when it is unclear whether a statement is qualified, since intent is challenging to determine. Considering this, the Board underscored its concern that reviewers would remove non-violating content that raised awareness about gender-based violence.

Tier 2 hate speech rules on expressions of contempt were applied to the second post, the Board noted. The Board recognized that the public-facing policy for Tier 2 was clearer than that for Tier 1. However, it remained concerned about Meta’s approach to expressions that condemn or raise awareness: there was no guidance guaranteeing an allowance for awareness-raising content, given Meta’s position that additional language present in the same video should not be used to analyze statements of contempt.

2. Legitimate aim

According to the Board, Article 19 of the ICCPR provides an exhaustive list of legitimate aims that justify limitations to freedom of expression, which include the protection of the rights of others. The Board recognized that the Hate Speech policy aims to protect Meta’s users from harm caused by hate speech, such as offline violence or discrimination.

3. Necessity and proportionality

As the Board explained, limitations to freedom of expression must be appropriate and the least intrusive measures to achieve their protective functions. As such, social media platforms should consider a wide range of ways to deal with problematic content before deleting it.

The Board applied the Rabat Plan of Action to analyze the content at hand. The Rabat Plan is used to assess “the necessity and proportionality of removing hate speech” [p. 17] by studying the context of the expression, the identity and intent of the speaker, the content itself, the extent of the expression, and the likelihood of harm, including its imminence. In light of this test, the Board found that the content did not pose any risk of imminent harm and that its removal was therefore unnecessary to protect men from harm.

Furthermore, the Board determined that the posts were of public interest and non-violent, as they directly condemned, and raised awareness about, gender-based violence. The Board stressed that the first post was a factual statement that men committed gender-based violence, while the second post expressed a personal opinion prompted by the global phenomenon of violence against women. Even the Board members who found that the second post violated the Hate Speech policy agreed that it should remain on Instagram, as it did not pose likely or imminent harm, making its removal unnecessary.

The Board said that these cases highlighted how Meta’s enforcement approach to gender-based hate speech can result in the disproportionate removal of content raising awareness about and condemning violence against women. The Board recommended that Meta consider the context and prevalence of gender-based violence in its policy and enforcement choices, so as to allow content that raises awareness and does not promote offline violence or create an environment of intimidation.


  • Policy Recommendations

1. Content policy

The Board recommended that Meta include an exception in its Hate Speech policy to allow content that condemns or raises awareness of gender-based violence.

2. Enforcement

The Board recommended that Meta update its internal guidelines for reviewers to include clear parameters on what does and does not constitute a qualified behavioral statement. The Board said this was important since the current guidance made it impossible for reviewers to reach the correct decision when evaluating content condemning and raising awareness about gender-based violence.

The Board also recommended that Meta improve the accuracy of secondary reviews “by sending secondary review jobs to different reviewers than those who previously assessed the content.” [p. 19]

3. Transparency

The Board recommended that Meta update its Transparency Center page with information on all penalties imposed when accounts accumulate strikes on Instagram, as it does for Facebook, to provide greater transparency and foreseeability to Instagram users.

Decision Direction

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Expands Expression

The Oversight Board’s decision expanded freedom of expression by allowing speech condemning or raising awareness about gender-based violence, even if it includes language that is harsh and offensive to some groups, as long as it does not promote violence or create an environment of discrimination against them. In its analysis of the content at hand, the Board applied the Rabat Plan of Action and concluded that the statements did not incite violence, hatred, or discrimination against men; thus, they fell within the scope of freedom of expression. Through this decision, the Board fosters a safer environment for women to express themselves and raise awareness about their struggles.

Global Perspective

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.


Case Significance

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2.1.3 of the Oversight Board Bylaws, “Where Meta determines that the content in a particular case was incorrectly actioned by Meta and reverses its original decision, the case selection committee may choose to select the case for summary decision […]”. In addition to that, Article 2 of the Oversight Board Charter stipulates that, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.”

Furthermore, Article 4 of the same Charter provides that, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”

