Oversight Board Case of Image of Gender-Based Violence

Closed • Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    August 1, 2023
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2023-006-FB-UA
  • Region & Country
    Iraq, Middle East and North Africa
  • Judicial Body
    Oversight Board
  • Type of Law
    International Human Rights Law, Meta's content policies
  • Themes
    Facebook Community Standards, Objectionable Content, Hate Speech, Safety, Bullying and Harassment
  • Tags
    gender-based violence, Facebook, Oversight Board Content Policy Recommendation, Discrimination, Domestic Violence, Satire/Parody


Case Analysis

Case Summary and Outcome

On August 1, 2023, the Oversight Board overturned Meta’s original decision to leave up a photo on Facebook depicting an identifiable Syrian activist with visible injuries, accompanied by a caption that mocked gender-based violence and implied that women who are abused bring it on themselves. A user on the platform reported the post three times, but the reports were closed without human review. After the user appealed to the Board, Meta removed the post, concluding that it violated the Bullying and Harassment policy. The Board found that Meta’s original decision was incompatible with that policy, which prohibits content mocking serious physical injuries or medical conditions. The Board also expressed concern that Meta’s current policies are inadequate for dealing with content that normalizes gender-based violence.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

In May 2021, a Facebook user in Iraq posted a photo of a woman with visible marks from a physical attack. The caption, in Arabic, stated that the woman was hit by her husband after she wrote him a letter that he misunderstood because of a spelling mistake. According to the post, the man thought his wife asked him for a “donkey” when she wanted a “veil” (the Arabic words for donkey (“حمار”) and veil (“خمار”) look similar). The caption indicated that the woman got what she deserved and mocked the situation with smiling and laughing emojis throughout the post.

According to several sources, the woman in the photo was a Syrian activist who was imprisoned by the regime of Bashar Al-Assad “and later beaten by individuals believed to be affiliated with the regime.” [p. 4] The caption did not name her but her face was clearly visible. The post included a hashtag primarily used, according to experts consulted by the Board, by Syrian pages and groups to support women. The post had around 20,000 views and under 1,000 reactions.

In February 2023, a Facebook user reported the content three times, arguing that it violated the Violence and Incitement policy. The reports were closed without human review and the content remained on the platform. According to Meta, the company “considers a series of signals to determine how to prioritize content for human review, which includes the virality of the content and how severe the company considers the violation type. If content is not reviewed within 48 hours, the report is automatically closed.” [p. 5] In this case, the reports were not reviewed within 48 hours and were therefore closed without human review.

In Iraq, where the user posted the content, over 1.32 million people, mostly women and girls, are at risk of suffering different forms of gender-based violence, according to the World Health Organization. The country has no legislation to combat domestic violence, and the current penal code allows husbands to punish their wives.

Furthermore, over 7.3 million people in Syria, where the depicted activist is from, mostly women and girls, require services related to gender-based violence. According to the United Nations, the Syrian regime “has targeted women associated with the opposition and subjected them to torture and sexual abuse” [p. 5] since the beginning of the conflict. Moreover, the UN has reported a widespread climate of impunity regarding gender-based violence in the country.

The user who reported the content appealed to the Oversight Board against the decision to leave it up on Facebook. After the Board selected the case, Meta removed the post, concluding that it breached the Bullying and Harassment policy.


Decision Overview

The Oversight Board analyzed whether Meta’s original decision to leave up a photo on Facebook of an identifiable Syrian activist with visible marks from a physical attack, and a caption mocking gender-based violence, complied with Meta’s policies and obligations under international human rights law.

Despite being notified of the Board’s decision to review the case, neither the user who posted the photo nor the user who reported it submitted a statement.

For its part, Meta explained that the content should have been removed because it violated the Bullying and Harassment policy. According to Meta’s regional team, the woman in the photo was “a known Syrian activist who had been jailed for her activism.” [p. 8] The company held that the photograph in the post depicts the activist after she was “beaten by individuals affiliated with the Bashar Al-Assad regime.” [p. 8] Meta argued that the content breached the aforementioned policy as it mocked the injuries sustained by the woman and implied she brought those injuries and abuse upon herself. Meta also found that the made-up story in the caption suggested the woman lacked intelligence.

Meta explained that it updated the Bullying and Harassment policy after the content was uploaded. Nonetheless, the company held that the update was only to streamline the policy “and did not change the protections afforded limited scope public figures, like the woman identified in the case content.” [p. 8] Meta further clarified that “[t]he relevant line under which [the company] removed this content was initially in Tier 4 of the policy, but as a result of the update it is now part of Tier 1.” [p. 9]


Compliance with Meta’s content policies

1. Content rules

The Board considered that the post violated Meta’s Bullying and Harassment policy—both before and after the update—because it mocked the serious injuries sustained by the woman in the photo. The Board noted that Meta’s internal guidance for reviewers defines “mocking” as “an attempt to make a joke about, laugh at, or degrade someone or something.” The content met this definition, the Board held, because the caption, framed as a joke, implied that the woman deserved to be attacked over a spelling mistake.

The Board then noted that the post was open to multiple interpretations: the woman could have been attacked because of her activism, as a victim of domestic abuse, or for both reasons. Regardless, the gendered nature of the mocking led the Board to conclude that the post violated the Bullying and Harassment policy, taking into account that the “depicted person was identifiable.” [p. 10]

2. Enforcement action

The Board underlined several concerns about the enforcement of Meta’s policies. First, the Board noted that the contested content was never assessed by a human reviewer despite being reported multiple times. For the Board, this could indicate that violations of the Bullying and Harassment policy are not prioritized for review. Second, the Board highlighted the challenges of moderating content in Arabic. As the Board explained—following the Wampum Belt case—“Meta relies on a combination of human moderators and machine learning tools referred to as classifiers to enforce its Community Standards. In this case, Meta informed the Board that the company has a classifier targeting Bullying and Harassment for ‘General Language Arabic.’” [p. 10]

At this point, the Board recalled an independent human rights due diligence report by BSR, commissioned by Meta in response to the Board’s recommendations in the earlier Shared Al Jazeera Post case. According to that report, the Board said, Meta’s enforcement problems in Arabic may stem from inadequate sensitivity to its different dialects. The Board was also concerned about “the lack of transparency on auditing of the classifiers enforcing this policy.” [p. 11]


Compliance with Meta’s human rights responsibilities

The Board analyzed Meta’s original decision through the lens of the three-part test enshrined in Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Under that test, restrictions on freedom of expression must be prescribed by law (legality), pursue a legitimate aim, and be necessary and proportionate.

1. Legality 

The Board held that restrictions on freedom of expression “should be accessible and clear enough in scope, meaning and effect to provide guidance to users and content reviewers as to what content is and is not permitted on the platform.” [p. 11] Deficiencies in this respect, the Board held, could lead to arbitrary and inconsistent enforcement of the rules, as described in General Comment No. 34 and UN report A/HRC/38/35.

Considering this, the Board noted and welcomed Meta’s changes to the Bullying and Harassment Community Standard since they harmonized the terminology used in the policy and internal guidance. Before the update, “the Community Standard prohibited content mocking ‘serious physical injury’ while the internal guidance prohibited mocking a ‘medical condition.’ According to Meta, ‘medical condition’ is the broader term.” [p. 12] Nonetheless, the Board was concerned “that it may not be clear to users that ‘medical condition’ includes ‘serious physical injury’ and recommend[ed] that Meta makes this clear to its users.” [p. 12]

2. Legitimate aim

The Board considered that Meta’s Bullying and Harassment policy pursued a legitimate aim, as recognized by the ICCPR. In this case, the policy was “directed towards the legitimate aim of respecting the rights of others, including the right to equality and non-discrimination, and to freedom of expression.” [p. 12] As highlighted by the Board, the policy seeks to prevent the harms resulting from bullying, harassment, and discrimination based on sex and gender, and to promote access to Meta’s platforms for those who have been targeted. These aims, the Board said, referring to the Joint Declaration on Freedom of Expression and Gender Justice, seek to foster online spaces that are safe for all women, free from discrimination, violence, hatred, and disinformation.

3. Necessity and proportionality

The principle of necessity and proportionality, the Board said, requires that restrictions on freedom of expression be appropriate to fulfill their protective function and be the least intrusive means of achieving it. Analyzing the case at hand, the Board found that removing the content was a necessary and proportionate measure to protect users from online bullying and harassment. For the Board, the Bullying and Harassment policy protects women human rights defenders from online violence—which, according to various UN reports, disproportionately affects them and often pushes them to leave online spaces or abandon their profession altogether. Such harassment, the Board noted, can also escalate into real-life physical assaults (A/HRC/38/47 and A/HRC/40/60). The Board referenced a UN Women study in which 70% of women activists in Arab states reported feeling unsafe due to online violence.

The Board considered that the contested post mocked the woman in the picture through a gendered joke, since it implied she deserved to be attacked. Hence, its removal was necessary “as less restrictive means would not prevent her image with a joke meant to belittle her from being disseminated.” [p. 13] The Board also argued that the content normalized gender-based violence and that the hashtag used in the post showed that the user intended to reach a broad group of women.

The Board then underscored its concerns about Meta’s existing policies, which it found insufficiently effective at moderating content that normalizes gender-based violence by praising it or implying it was deserved. As the Board noted, the content in this case was analyzed under the Bullying and Harassment policy, which cannot always be used to limit the harm of gender-based violence—for example, in cases where “the woman depicted was not identifiable, or if the same caption had accompanied a picture of a fictional character.” [p. 14] Meta explained that the contested content did not violate its Hate Speech policy because it did not target anybody on the basis of protected characteristics. Against this backdrop, the Board opined that the existing policies and their enforcement do not cover all the relevant content, leaving a gap that allows discriminatory content to remain on the platforms. The Board then referred to CEDAW’s General Recommendation No. 35 to conclude that “social media companies [need] to strengthen self-regulatory mechanisms ‘addressing gender-based violence against women that takes place through their services and platforms.’” [p. 14]

Following the cases of Depiction of Zwarte Piet, Knin cartoon, and South African Slurs, the Board reiterated its stance that “certain content […] which may be discriminatory can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm.” [p. 15] It also stated that the accumulation of such harmful content helps to create a negative environment that fosters discrimination and increases the likelihood of violence. With this in mind, the Board highlighted the role social media platforms play in perpetuating gender-based violence and Meta’s responsibility to address these issues in a way that promotes freedom of expression for women and eliminates violence.

Thus, the Board argued that the content in this case, alongside similar content, “normalize[d] gender-based violence by denigrating women and trivializing, excusing, or encouraging both public aggressions and domestic abuse.” This, the Board held, encourages or defends “the use of violence, and the harm to women’s rights and the perpetuation of an environment of impunity [contributing] to a heightened risk of offline violence, self-censorship, and suppression of the participation of women in public life.” [p. 15]

Considering these arguments, the Board overturned Meta’s original decision to leave the content up, as it did not comply with Meta’s policies and human rights responsibilities.

Recommendations

  1. The Board recommended that Meta clarify to its users, in the public-facing Bullying and Harassment Community Standard, that the term “medical condition” includes “serious physical injury.”
  2. The Board recommended that Meta undertake “a policy development process to establish a policy aimed at addressing content that normalizes gender-based violence through praise, justification, celebration or mocking of gender-based violence.” [p. 16]

Decision Direction

Quick Info

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

After a thorough analysis of the systemic violence that women suffer online, and after balancing the conflicting interests at stake, the Board concluded that content normalizing gender-based violence should be removed from Meta’s platforms. At first glance, this approach could be seen as one that contracts expression. However, it also has the potential to foster a safer online environment for women, whose voices are often suppressed or who self-censor out of fear of violence. The Board also considered that restricting freedom of expression in this case sought to protect women from discrimination and violence—legitimate interests that justify such restrictions.

Global Perspective

Quick Info

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • ICCPR, art. 19

    The Board referred to this provision to assess Meta’s responsibilities towards human rights through the lens of freedom of expression.

  • UNHR Comm., General Comment No. 34 (CCPR/C/GC/34)

    While employing the three-part test to assess whether the content should be removed, the Board referred to the General Comment for guidance.

  • United Nations Guiding Principles on Business and Human Rights (2011)

    The Board referred to this instrument to highlight Facebook’s human rights responsibilities.

  • CEDAW, General Recommendation No. 35 on gender-based violence against women (2017)

    The Board referred to this document to highlight that social media companies should address gender-based violence against women that takes place through their services and platforms.

  • Rapporteur on freedom of opinion and expression, A/HRC/38/35 (2018)

    The Board recalled this report to underscore that norms restricting freedom of expression should be accessible and clear in scope and meaning.

  • UN, Report of the Special Rapporteur on violence against women, its causes and consequences, on online violence against women and girls from a human rights perspective, A/HRC/38/47 (2018)

    The Board referenced this report to highlight that women are disproportionately affected by online harassment and violence.

  • UN, Report of the Special Rapporteur on the situation of human rights defenders, A/HRC/40/60 (2019)

    The Board referenced this report to argue that online harassment increases the likelihood of real-life violence.

  • OSB, Knin Cartoon, 2022-001-FB-UA (2022)

    The Board referred to this case to reiterate its stance that hateful content can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm.

  • OSB, South African slurs, 2021-011-FB-UA (2021)

    The Board referred to this case to reiterate its stance that hateful content can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm.

  • OSB, Depiction of Zwarte Piet, 2021-002-FB-UA (2021)

    The Board referred to this case to reiterate its stance that hateful content can be removed due to its cumulative effect, without the need to show that each piece of content can cause direct and imminent physical harm.

Case Significance

Quick Info

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” Additionally, under Article 4 of the Charter, “The board’s resolution of each case will be binding and Meta will implement it promptly, unless implementation of a resolution could violate the law. In instances where Meta identifies that identical content with parallel context — which the board has already decided upon — remains on Meta, it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision or a policy advisory opinion includes recommendations, Meta will take further action by analyzing the operational procedures required to implement the recommendations, considering those recommendations in the formal policy development process of Meta, and transparently communicating about actions taken as a result.”
