Oversight Board Case of United States Posts Discussing Abortion

Case Status: Closed · Decision Direction: Expands Expression

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    September 6, 2023
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decisions
  • Case Number
    2023-011-IG-UA, 2023-012-FB-UA, 2023-013-FB-UA
  • Region & Country
    United States, North America
  • Judicial Body
    Oversight Board
  • Type of Law
    International/Regional Human Rights Law, Meta's content policies
  • Themes
    Political Expression, Facebook Community Standards, Violence and Criminal Behavior, Violence and Incitement, Referral to Facebook Community Standards
  • Tags
    Reproductive Rights/Abortion, Threatening Statements, Satire/Parody, Facebook, Oversight Board Enforcement Recommendation, Instagram


Case Analysis

Case Summary and Outcome

The Oversight Board overturned Meta’s original decisions to remove three posts expressing different views on abortion and abortion policy in the United States, shared in the aftermath of the Dobbs v. Jackson Women’s Health Organization Supreme Court ruling. The posts were removed for allegedly violating Meta’s Violence and Incitement policy—specifically its prohibition on death threats. However, after the appeals were filed, Meta reversed its decisions, acknowledging the satirical and rhetorical nature of the content. The Board raised concerns about Meta’s over-enforcement of this policy, warning that misclassifying rhetorical language as violent threats could unduly suppress political expression and stifle open debate on abortion.

The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

In March 2023, three United States–based users posted content related to abortion on Meta platforms, reflecting differing perspectives on the topic. The posts were shared in the aftermath of the Dobbs v. Jackson Women’s Health Organization decision, in which the United States Supreme Court overturned Roe v. Wade and held that the U.S. Constitution does not protect a right to abortion.

In the first case (Facebook group case), a user posted an image in a public group with approximately 1,000 members. The group identifies as supporting traditional values and opposing the “liberal left.” The image depicted outstretched hands accompanied by the text: “Pro-Abortion Logic. We don’t want you to be poor, starved or unwanted. So we’ll just kill you instead,” with the caption “Psychopaths.”

In the second case (Instagram news article case), a user posted a screenshot of a news article reporting on a lawmaker’s proposal to introduce the death penalty for women who obtain abortions. The caption read: “So pro-life, we’ll kill you dead if you get an abortion.”

In the third case (Facebook news article case), a user shared a link to a similar article and commented: “So it’s wrong to kill, so we are going to kill you?” Each of the posts received fewer than 1,000 interactions.

Meta’s automated systems flagged all three posts as potentially violating and referred them for human review. Human reviewers determined that each post violated the Violence and Incitement policy—specifically the prohibition on death threats. In the first two cases, the original removal decisions were upheld on appeal. In the third case, one reviewer found the post non-violating, but a subsequent reviewer found it violating. In total, seven human reviewers were involved across the three cases—four based in the Asia Pacific region and three in Central and South Asia.

All three users appealed to the Oversight Board (OSB). After being notified of the appeals, Meta re-reviewed the cases and determined the removals were mistakes, as none of the posts contained death threats. The posts were subsequently restored by the company.


Decision Overview

The Oversight Board issued its decision on September 6, 2023. The main issue it analyzed was whether the removal of the posts, which discussed abortion in inflammatory terms, was compatible with Meta’s Violence and Incitement Community Standard and its human rights responsibilities. The OSB selected these cases to explore the challenges of moderating violent rhetoric used figuratively, particularly in debates around abortion. While the removals were clearly enforcement errors, since both Meta and the Board agreed the posts did not contain threats or violate any policy, the OSB remained concerned that Meta’s approach may disproportionately impact legitimate discourse on abortion. The cases were chosen to assess whether such errors point to a broader, systemic issue.

All three affected users submitted statements as part of their appeals to the Oversight Board. The user in the Facebook group case stated they were not inciting violence but criticizing what they saw as the flawed logic of pro-abortion groups. The Instagram user explained their post was sarcastic and intended to echo anti-abortion rhetoric. They also alleged that Meta failed to enforce its Violence and Incitement policy consistently, particularly when it comes to protecting LGBTQIA+ individuals from credible death threats. The user in the Facebook news article case argued that censoring discussions about women’s rights harms public discourse, emphasized that they did not advocate violence, and noted that content creators often resort to euphemisms like “de-life” or “unalive” to avoid content moderation.

Meta explained that although human reviewers initially found the three posts to violate its Violence and Incitement policy, the company later determined—after the users appealed to the Oversight Board—that none of the posts contained threats and should have remained on the platform. In the Facebook group case, Meta concluded the user was not threatening anyone but critiquing what they perceived as the logic behind pro-abortion views. In the Instagram news article case, Meta found no threat of violence when the post was read holistically. Similarly, in the Facebook news article case, Meta determined that the post, which used satire to criticize proposed legislation, did not contain any violent threats. Meta acknowledged it did not know why six of the seven reviewers had incorrectly assessed the content, as it does not require at-scale reviewers to record the rationale for their decisions. A root cause analysis conducted for each case concluded the errors were due to human review mistakes, not flaws in protocol.

1. Compliance with Meta’s content policies

The OSB considered that none of the posts violated the Violence and Incitement policy, which prohibits “threats that could lead to death targeting people.” In its view, none of the posts threatened or incited violence; rather, they caricatured the views the authors opposed. The non-violent nature of the posts became even clearer when they were read in full and in context. On this point, the Board said that Meta must ensure its systems can distinguish between actual threats and the rhetorical use of violent language. In discussions about abortion policy, harmful content may include genuine threats directed at activists, vulnerable women, medical professionals, and judges. While such threats must be addressed, the OSB emphasized that political discussions may also contain violent language used in non-literal or satirical ways.

The Board highlighted that wrongful removals of non-violating content (false positives) can chill legitimate expression, while failing to remove actual threats (false negatives) can endanger the safety and participation of targeted individuals. The current cases illustrate the harm of false positives, as the removals disrupted political debate on one of the most divisive issues in the United States at the time.

The OSB acknowledged the difficulty in identifying rhetorical or coded uses of violent language. False negatives may result from threats disguised in ambiguous or context-specific terms that platforms struggle to interpret. The Board referred to its analysis in the Knin cartoon decision, where it raised concerns about the company’s failure to remove implicit threats. It identified this as an ongoing issue that Meta must address.

The Board reaffirmed its position, stated in several previous decisions (Iran Protest Slogan, Russian Poem, UK Drill Music, Protest in India against France), that rhetorical or figurative uses of violent words do not necessarily convey a threat or incite violence. In such cases, proper contextual analysis is key, and an overly literal reading may result in enforcement errors.

Meta acknowledged that the removal decisions were incorrect, as none of the posts violated its policies. The posts have since been restored, and the OSB agreed that this action was appropriate.

2. Compliance with Meta’s human rights responsibilities

The Board emphasized that Article 19 of the International Covenant on Civil and Political Rights (ICCPR) protects political expression, and that Meta has a responsibility to safeguard such discourse on its platforms. The OSB applied the three-part test laid out in Article 19(3) of the ICCPR to assess whether Meta’s actions were consistent with its human rights responsibilities:

1. Legality

The Board considered that the Violence and Incitement policy satisfies the legality requirement, as it is sufficiently clear and does not prohibit non-threatening, rhetorical uses of violent language.

2. Legitimate Aim

The OSB held that the policy serves a legitimate aim since it seeks to protect the rights of others to life, public safety, and participation in public affairs. It prohibits content that poses a genuine risk of physical harm.

3. Necessity and Proportionality

The Board noted that the Violence and Incitement policy can only fulfill Meta’s human rights responsibilities if it addresses a pressing social need and is applied in the least intrusive manner possible to achieve its objectives. It expressed concern that rhetorical uses of violent language are subject to disproportionately high rates of error. Meta stated that the removals in these cases resulted from human error, not from deficiencies in the policy or enforcement systems. The OSB argued that properly trained reviewers—especially those with sufficient language proficiency and access to clear guidance—should be able to avoid errors in straightforward cases such as these. However, due to limited available data, the Board could not assess whether these cases reflected a systemic problem.

The OSB noted a trade-off between false positives and false negatives: increased efforts to detect harmful speech may inadvertently lead to more wrongful removals of rhetorical content. Meta acknowledged the impact of over-enforcement on freedom of expression but maintained that its approach is justified by the need to protect user safety.

The Board agreed that any changes to the policy or enforcement process must be carefully scrutinized. Increasing tolerance for rhetorical uses of violent language must be accompanied by a clear understanding of the potential for veiled threats. In the context of abortion-related discourse, Meta must be especially attentive to how implicit threats can affect women, judges, and public figures.

The OSB held that the initial removals in these three cases were unnecessary and disproportionate. However, it did not find sufficient evidence to determine that the overall policy or its enforcement framework was disproportionate. The Board considered several possible causes for the errors, including simple human mistakes, challenges faced by reviewers located outside the U.S. in understanding the local political context, and the lack of specific guidance from Meta on moderating abortion-related content under this policy.

4. A future of continuous improvement and oversight

The OSB said it expected Meta to demonstrate ongoing improvement in the accuracy of the enforcement of its rules. It recommended that Meta continue to develop its automated tools to better align them with its human rights obligations. The goal should be to reduce false positives without increasing false negatives. The Board acknowledged the inherent challenge automated systems face when interpreting nuance, sarcasm, satire, and context.

The OSB recognized that Meta has improved the sensitivity of its enforcement systems, for example, by reducing over-enforcement in cases where users jokingly threaten friends. It expects Meta to continue refining both the policy and its enforcement mechanisms. The Board also expects Meta to share sufficient data to allow for future assessments of whether content moderation is meeting the requirements of necessity and proportionality.

Finally, the OSB stressed that the present cases raise concerns about the disproportionate removal of political speech when violent language is used rhetorically. It urged Meta to continue refining its approach to strike a better balance between protecting safety and preserving freedom of expression.

The Oversight Board overturned Meta’s original decisions to take down the content in all three cases.

3. Recommendations

The Board recommended that Meta provide the data it uses to assess the accuracy of its policy enforcement, in order to enable the OSB to evaluate whether such enforcement meets the standards of necessity and proportionality. It also expected this data to be sufficiently comprehensive to allow for independent validation of Meta’s explanations regarding enforcement errors.


Decision Direction

Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Expands Expression

By overturning Meta’s original decisions, the Oversight Board expanded expression by protecting political debate on abortion from undue interference caused by the over-enforcement of the Violence and Incitement policy. The Board reaffirmed that political speech, especially on deeply contested issues like reproductive rights, lies at the core of protected expression under international human rights standards. In this case, the posts did not contain threats or incitement to violence but employed rhetorical and satirical language to express views on abortion policy in the wake of a major U.S. Supreme Court decision. The OSB emphasized that misclassifying such expressions as harmful can chill legitimate discourse, especially when automated systems or moderators apply the policy too literally. By recognizing the importance of context and satire, the decision reinforces the need for careful and proportionate enforcement to avoid suppressing lawful political speech.

Global Perspective

Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • International Covenant on Civil and Political Rights, Article 19

Case Significance

Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”

