Oversight Board Case of India Sexual Harassment Video

Closed • Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    December 14, 2022
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2022-012-IG-MR
  • Region & Country
    India, Asia and Asia Pacific
  • Judicial Body
    Oversight Board
  • Type of Law
    International/Regional Human Rights Law, Meta's content policies
  • Themes
    Facebook Community Standards, Objectionable Content, Adult Nudity and Sexual Activity
  • Tags
    Meta Newsworthiness allowance, Oversight Board Content Policy Recommendation, Oversight Board Enforcement Recommendation, Oversight Board on Meta Interstitials, Oversight Board Policy Advisory Statement, Oversight Board Transparency Recommendation, Facebook, Sexual Harassment, Gender-based Violence, Discrimination


Case Analysis

Case Summary and Outcome

The Oversight Board upheld Meta’s decision to restore an Instagram video showing a group of men sexually assaulting an Adivasi woman in India, shared by an account highlighting Dalit perspectives. The video, which contained no nudity and did not reveal the victim’s identity, was initially removed under the Adult Sexual Exploitation policy but later reinstated following internal escalation under the “newsworthiness allowance,” with an 18+ warning screen. The Board considered this consistent with international freedom of expression standards, stressing the strong public interest in exposing the systemic sexual violence and discrimination faced by Dalit and Adivasi women, while recognizing the potential harm such content may cause to victims and viewers. Applying the necessity and proportionality test, it concluded the video should remain online with safeguards to protect the victim’s dignity and shield children and survivors of abuse. Nonetheless, the Board criticized the vagueness and unreliable application of the “newsworthiness allowance.” It recommended that Meta clarify the policy and add a specific exception within the Adult Sexual Exploitation policy to distinguish between banned depictions of sexual violence and permitted content raising awareness of gender-based violence and discrimination against marginalized groups. The new exception should apply to content depicting non-consensual sexual touching only where the victim is not identifiable, the content is intended to raise awareness, is not sensationalized, and contains no nudity. It further advised Meta to update its internal guidelines so reviewers know when to escalate such cases for consideration under this exception.

*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

In March 2022, an Instagram account describing itself as a news platform sharing the perspectives of Dalit people (a community at the lowest rung of India’s caste system) posted a video showing a group of men sexually harassing an Adivasi woman in India, whose face was not visible. The term Adivasi refers to a heterogeneous group of indigenous peoples in India. Both Dalit and Adivasi women are frequent targets of sexual violence. The posting account had approximately 30,000 followers, primarily based in India, and the video had gone viral before the account shared it.

After a user reported the content, human moderators reviewed and removed it for violating Meta’s Adult Sexual Exploitation policy, which prohibits depictions of sexual violence. As a result, the account received one standard strike, one severe strike for egregious violations, and a 30-day restriction on starting live videos.

Later the same day, a member of Meta’s Global Operations team escalated the case for review by policy and safety experts. Following this review, the post was granted a newsworthiness allowance. The strikes were removed, and the content was reinstated behind a warning screen that flagged the video as violent or graphic and restricted access for users under 18. The newsworthiness allowance enables content that violates policies to remain online if it is deemed newsworthy and in the public interest.

Meta referred the case to the Oversight Board, noting the difficulty of balancing the value of exposing and condemning sexual exploitation with the potential harm of allowing sexually abusive material to remain on its platforms.


Decision Overview

On December 14, 2022, the Oversight Board issued its decision. The central issue was whether Meta’s decision to restore a video depicting sexual harassment behind a warning screen after granting it a newsworthiness allowance was compatible with Meta’s Adult Sexual Exploitation policy, its values, and its human rights responsibilities to uphold freedom of expression and non-discrimination.

The user who posted the video was given an opportunity to submit a statement to the Board but did not do so.

In its submissions to the Board, Meta argued that the caption and background of the posting account suggested an intent to condemn and raise awareness about violence against marginalized groups, though the Adult Sexual Exploitation policy contains no exception for this. The company justified applying the newsworthiness allowance by emphasizing the post’s public interest value, given that it was shared by an account amplifying underrepresented voices to raise awareness of gender-based violence against Adivasi women—whose perspectives have long been repressed in India.

Meta also argued that the risk of harm was low since the video did not include explicit nudity or sexual activity, was not sensationalized, and the victim’s identity was protected. It explained that recognition as a news organization is determined not by self-description but by subject matter experts and regional specialists using market knowledge and prior classifications, and claimed its approach aligned with its values and human rights responsibilities.

1. Compliance with Meta’s Content Policies

Content rules and enforcement

The Board agreed with Meta that the video violated the Adult Sexual Exploitation policy, as it depicted sexual harassment. A majority upheld Meta’s decision to restore the content under the newsworthiness allowance, recognizing the strong public interest in raising awareness about violence against a marginalized group. At the same time, the Board stressed that the newsworthiness allowance was applied inconsistently and lacked a clear, scalable process for assessing similar cases.

The Board acknowledged that keeping such content on Instagram promoted awareness but also underscored risks: content depicting sexual assault could cause serious harm, particularly if victims were identifiable. It held that content where a victim is identifiable should not remain online unless posted with the victim’s consent.

On the facts of this case, the Board was divided. The majority concluded that the victim could not reasonably be identified: her face was not visible, the video quality was poor, and the caption revealed no personal information. A minority, however, argued that viewers with local knowledge of the incident, which had been reported by local news outlets, might still be able to identify her. While the majority acknowledged this concern, it considered that local awareness of the incident did not make the victim identifiable from the post itself.

Transparency

The Board considered that Meta had disclosed more information about how it applied the newsworthiness allowance following the Colombia Protests and Sudan Graphic Video decisions. However, it held that Meta had not developed or shared criteria for reviewers to escalate content that could qualify for the newsworthiness allowance, despite the Board’s previous recommendations. The Board reiterated that Meta should establish and publish clear escalation criteria for applying the newsworthiness allowance.

2. Compliance with Meta’s values

Meta’s values of “Voice,” “Privacy,” “Safety,” and “Dignity” pointed in different directions in this case. Sharing the video to raise awareness of the abuse faced by Adivasi communities supported the value of “Voice” and could help advance their “Safety” and “Dignity.” At the same time, however, the “Dignity” and “Safety” of the victim and other Adivasi women were at risk of being undermined if publicity around sexual violence contributed to normalizing such conduct.

A majority of the Board concluded that keeping the video on Instagram aligned with Meta’s values, as the content was not explicit and the victim could not be identified. A minority, however, argued that even a low risk of identification remained and therefore the video should have been removed to protect the victim’s privacy, dignity, and safety.

3. Compliance with Meta’s human rights responsibilities

A majority of the Board held that keeping the video online was consistent with Meta’s human rights responsibilities under the UN Guiding Principles on Business and Human Rights, including respect for Article 19 of the International Covenant on Civil and Political Rights (ICCPR) on the right to freedom of expression. The Board also applied other standards, such as the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), which protects the exercise of the right to freedom of expression without discrimination. The Board highlighted that the content was aimed at raising awareness of violence against Adivasi women, a type of expression that deserves a high level of protection.

The Board applied the three-part test laid out in Article 19(3) of the ICCPR to assess whether Meta’s decision to restore the content behind a warning screen was justified.

Legality (clarity and accessibility of rules)

The Board emphasized that rules restricting expression must be clear and accessible, meaning that Meta’s users and reviewers should be able to understand what content is permitted on the platforms. On this point, the Board concluded that Meta had not met this requirement, as the newsworthiness allowance remained vague and left broad discretion to those enforcing it. The Board reiterated that vague standards risk arbitrary application and fail to properly balance the rights at stake.

In earlier cases, such as Breast Cancer Symptoms and Nudity and Ayahuasca Brew, the Board had already expressed concern about users’ lack of clarity regarding which policies apply on Instagram and under what circumstances Facebook policies may also apply. The same concern was raised in this case.

The Board underlined the urgent need for Meta to provide more information on the standards, internal guidance, and escalation processes governing the newsworthiness allowance so that users can understand whether, and in what circumstances, it may apply to their content.

Legitimate aim

The Board emphasized that the legitimate aim in this case was the protection of the rights of others. It noted that Meta’s Adult Sexual Exploitation policy is designed to prevent harassment and to safeguard the rights to life, privacy, and physical and mental health, as well as to combat gender-based violence and discrimination.

The Board concluded that the company acted in pursuit of this aim by applying a warning screen, which both shielded victims of sexual harassment from potentially triggering material and restricted children from accessing harmful content.

Necessity and proportionality

Under the requirement of necessity and proportionality, restrictions on freedom of expression must be appropriate to achieve their protective function, must be the least intrusive means of doing so, and must be proportionate to the interest being protected. A majority of the Board held that removing the content was neither necessary nor proportionate, and that applying a warning screen met these requirements. A minority, however, considered that the removal of the content was both necessary and proportionate.

The Board further considered that to meet its human rights responsibilities, Meta should introduce a clear exception within its Adult Sexual Exploitation policy to better balance competing rights. Such an exception would not replace the newsworthiness allowance but would operate alongside it, providing reviewers with clearer guidance on weighing public interest against the risk of harm—even in situations where the victim might be identifiable.

Decision to leave the content up

The Board acknowledged the potential harms posed by similar content, including the risk of re-victimization, social stigmatization, and other forms of abuse or harassment when victims are identifiable. While it agreed that these harms are severe when identification is possible, it found that the likelihood of harm is significantly reduced if the victim cannot be identified. A majority considered the probability of identification in this case to be low and concluded that the warning screen adequately protected the victim by limiting the content’s visibility and the comments and reshares it might attract.

The Board assessed broader risks, such as the role of social media in amplifying caste-based hate speech in India and reinforcing existing power structures. It noted that content depicting violence against women could normalize abusive conduct and foster more permissive attitudes toward such behavior. At the same time, the majority highlighted that activists and news organizations depend on social media to raise awareness of violence against marginalized groups, particularly in contexts where press freedom is under threat. In India, Dalit and Adivasi women face severe discrimination and violence, yet public records underreport such crimes, and independent journalism is increasingly constrained, making social media a vital channel for advocacy.

The Board ultimately diverged on the question of victim identification: the majority concluded that the risk was minimal and supported keeping the content online with a warning screen, while the minority considered the risk of identification too great and favored removal of the post despite its public interest value.

Warning screen and age restriction

The Board said that applying a warning screen was a “lesser restriction” than removal. The majority considered it the least intrusive measure to mitigate potential harms while safeguarding freedom of expression. Referring to its decision in the Sudan Graphic Video case, the Board emphasized that warning screens allow users to make an informed choice about viewing content while protecting the dignity of individuals depicted in it. The minority, however, argued that warning screens were insufficient given the severity of the harms and that removal was required.

The Board also highlighted that warning screens served the additional purpose of imposing an age restriction to protect minors, which forms part of Meta’s human rights obligations. The majority agreed with Meta that age restrictions struck an appropriate balance by shielding children while allowing content of public interest to remain accessible.

Design of policy and enforcement processes

While the majority supported Meta’s decision to restore the content, the Board expressed concern over the ineffectiveness of the newsworthiness allowance. It unanimously held that Meta should adopt clear policies and defined criteria for permitting depictions of sexual harassment, distinguishing between content intended to raise awareness and content that incites violence or discrimination.

To the Board, the newsworthiness allowance was an ineffective tool for moderating content at scale: it had been applied only 68 times between 1 June 2021 and 1 June 2022, with few instances involving the Adult Sexual Exploitation policy. The case also revealed flaws in the escalation process. The content was escalated not by an at-scale content reviewer, whose role it is to assess content in the first instance, but by a member of the Global Operations team, and because the escalation rules were vague, reviewers lacked guidance on when and how to escalate content. In addition, the newsworthiness allowance was overly broad, applying across all of Meta’s policies without criteria tailored to the specific harms posed by content violating the Adult Sexual Exploitation policy.

Hence, the Board recommended that Meta introduce a specific exception within the Adult Sexual Exploitation policy for content raising awareness of sexual harassment against marginalized groups. Such an exception, supported by updated guidance, would enable at-scale reviewers to escalate relevant cases to experts, who could then decide whether the newsworthiness allowance should apply. This approach, the Board held, would be more effective than relying solely on the vague and inconsistently applied newsworthiness allowance.

Non-discrimination

The Board stressed Meta’s obligation to uphold equality and avoid discrimination on its platforms. It acknowledged the difficulty of balancing content that raises awareness of violence against marginalized groups with the need to protect the privacy and security of individuals from those same groups. While the Board recognized that serious individual harm could, in some cases, outweigh the benefits of awareness-raising, the majority found that the low probability of identifying the victim in this case significantly reduced the risk of harm and justified keeping the content online with a warning screen. The minority, however, considered the risk of identification too high and maintained that the content should have been removed.

In light of these arguments, the Oversight Board upheld Meta’s decision to keep the video on Instagram and endorsed the use of a warning screen to protect viewers under 18 and to shield survivors of abuse from potentially traumatizing content.

4. Policy advisory statement

Policy

The Board recommended that Meta add an exception to its Adult Sexual Exploitation policy to permit depictions of non-consensual sexual touching when the content is shared to raise awareness, the victim is not identifiable, the material does not include nudity, and it is not presented in a sensationalized manner, thereby minimizing risks of harm to the victim. This exception should be applied only at the escalation stage.

Enforcement

The Board recommended that Meta update its internal guidance to clarify when content falling under the Adult Sexual Exploitation policy should be escalated for review in light of this new exception.


Decision Direction

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

This decision has a mixed outcome on expression. On the one hand, the Board expanded expression by upholding Meta’s decision to restore the content, recognizing that the victim of sexual violence depicted in it was not readily identifiable and that the post served a clear public interest. This broadened the scope of protected expression by allowing speech that would ordinarily be prohibited under Meta’s policies. On the other hand, the application of a warning screen limited the reach of the content. The Board considered this measure to be the least intrusive option, enabling the post to remain on Instagram while protecting minors from harmful material and shielding victims from potentially triggering depictions of sexual harassment.

Global Perspective

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • International Covenant on Civil and Political Rights (ICCPR), Article 19
  • International Convention on the Elimination of All Forms of Racial Discrimination (ICERD)
  • United Nations Guiding Principles on Business and Human Rights

Case Significance

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”


