Facebook Community Standards, Violence and Incitement
The Case of the Manipulated Image of a Peruvian Human Rights Defender
Peru
Closed Mixed Outcome
The Oversight Board (OSB) overturned Meta’s decision to leave up a Facebook post containing a likely AI-manipulated image of a prominent Peruvian human rights defender, altered to show blood dripping from her face alongside a caption accusing NGOs of inciting violence and financial wrongdoing. The post was shared during anti-government protests by a member of La Resistencia—a group known for intimidating civil society actors. Meta determined that the content did not constitute a “veiled threat” under its Violence and Incitement Policy and decided to keep it online. A Facebook user appealed the company’s decision to the Board. The OSB determined that both the image and the caption, taken in context, posed a credible risk of offline harm and that no action short of removal would sufficiently protect the targeted individual. Citing Meta’s failure to recognize the threat and expressing concern about the underenforcement of veiled threats, the OSB recommended clarifying the Violence and Incitement Policy to cover coded threats across all forms of expression and conducting an annual accuracy assessment focused on threats to human rights defenders and the over-removal of political speech.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
In July 2024, a Facebook user in Peru shared what appeared to be an AI-manipulated image showing a headshot of a well-known human rights leader with blood dripping from her face. The accompanying Spanish-language caption accused non-governmental organizations (NGOs) of financial misconduct involving foreign funding and alleged that they were inciting violent protests. The post circulated during a time of widespread demonstrations against the government in Lima. It garnered approximately 1,000 views and fewer than 100 reactions.
Three days after the post appeared, a user reported it to Meta, arguing it violated the company’s content policies. A human reviewer found no breach of the rules, and the post remained online. When the user appealed Meta’s decision, the appeal was automatically closed without further review. The same user then escalated the case to the Oversight Board (OSB).
After the user’s appeal to the Board and before it selected the case, the post was also flagged through Meta’s Trusted Partner Program—a global network of NGOs, humanitarian agencies, and human rights researchers who report content and provide feedback on Meta’s enforcement. Following this report, Meta’s internal escalation teams reviewed the account behind the post and determined it violated the Terms of Service “because the user had multiple accounts under the same or similar name.” [p. 4] The account was subsequently disabled, rendering the content inaccessible. Despite this, when the OSB selected the case, Meta re-reviewed the post and reaffirmed its original conclusion that the content did not violate its policies.
The Board emphasized the broader context in Peru. Since 2016, the country has faced acute political instability, with six presidents and three legislatures in under a decade. Tensions peaked in December 2022 after President Pedro Castillo was ousted and arrested, sparking mass protests rooted in longstanding issues of marginalization and inequality. UN bodies and human rights groups have documented abuses in response to the protests, including the use of excessive force, arbitrary detentions, and shrinking civic space. Women human rights defenders have been particularly targeted, often subjected to gender-based harassment with little access to justice or protection.
Simultaneously, legislative and political efforts have further restricted civil society. In 2024, Peruvian lawmakers proposed reforms to limit international funding for NGOs and their ability to pursue legal action on human rights violations. Congress passed these reforms in March 2025. These developments have been accompanied by online harassment campaigns, especially from right-wing groups such as La Resistencia. Since its formation in 2018, the group has used disinformation, threats, and intimidation to target journalists and human rights defenders—frequently smearing them as terrorists. The post at issue in this case originated from a prominent member of this group.
The main issue the Oversight Board had to assess was whether Meta’s decision to leave up a post containing a likely AI-manipulated image of a prominent Peruvian human rights defender—altered to show blood dripping from her face alongside a caption accusing NGOs of inciting violence and financial wrongdoing—was compatible with Meta’s Violence and Incitement Policy and with its human rights responsibilities.
The user who appealed the decision argued that the post was a covert death threat aimed at a human rights defender. They urged the Board to read it against the backdrop of persistent harassment and physical assaults on defenders in Peru and linked its timing to the July 2024 protests. They also said that the user who posted the content was affiliated with a group known for stoking violence, and that rhetoric of this kind has previously spilled over from the online space into real-world attacks.
As stated by the OSB, Meta’s Violence and Incitement Policy aims to prevent offline harm by removing content that incites or threatens violence, including visual or coded threats. The policy includes provisions for removing veiled threats when both a threat signal (e.g., retaliatory or coded language) and a context signal (e.g., a report from a target or a risk of imminent harm) are present. In this case, Meta determined that the post—featuring a digitally altered image of a human rights defender with blood on her face and a caption accusing NGOs of wrongdoing—did not meet the threshold for a violation. While Meta acknowledged the image was “a closer call,” it found no clear depiction of violence or injury and interpreted the blood imagery as a metaphorical “political critique” rather than a threat. Because the account had been disabled, Meta did not escalate the case for deeper review under its veiled threats framework, citing resource constraints. The company emphasized its engagement with human rights defenders and its ongoing efforts to improve safety on its platforms, but also noted it did not track data on content reviewed under its veiled threats provision.
The Oversight Board received 65 eligible public comments, with the vast majority (60) coming from Latin America and the Caribbean, and the remainder from Europe and North America. The comments addressed a range of issues, including Peru’s broader political and social context, the risks faced by human rights defenders (particularly women), legislative efforts affecting NGO operations, and the spread of online narratives that label civil society actors as “terrorists.” Many also discussed the activities of La Resistencia and the challenges of moderating content that may contain veiled threats.
In January 2025, the OSB held a stakeholder consultation with experts from advocacy groups, academia, intergovernmental bodies, and civil society to discuss online threats against human rights defenders. The discussion also covered efforts to push for stronger platform protections and the use of Meta’s Trusted Partner program to flag content with potential for offline harm.
1. Compliance with Meta’s Content Policies
The Board unanimously held that the contested content violated the company’s Violence and Incitement Policy. It considered that the combination of the image of the bloodied human rights defender and the accompanying text met Meta’s definition of a prohibited threat. The OSB also agreed that the post qualified as a “veiled threat” under the policy, “which categorizes potentially ambiguous posts as threats if they have both a ‘threat signal’ and a ‘context signal’ that together make up an implied or disguised threat.” [p. 10]
Regarding the “threat signal” requirement, the Board considered that the post met Meta’s threshold and constituted a veiled call to violence against a human rights defender. The image showed a digitally altered headshot of the defender with blood dripping from her face, paired with text accusing NGOs of inciting violence and corruption. The OSB strongly disagreed with Meta’s interpretation of the image as “political critique,” noting that the defender’s expression appeared calm only because the original image was a smiling professional photo. The altered image clearly conveyed injury, despite the lack of visible wounds. The Board emphasized that Meta’s moderators should have recognized the target and had access to technical tools to assess the image properly.
As for the “context signal” requirement, the Board agreed that it was met, finding the post could lead to imminent violence. It noted that in Peru, similar accusations have led to intimidation of and attacks on human rights defenders by the group La Resistencia. The OSB reached this conclusion by considering contextual information—reports from the Office of the High Commissioner for Human Rights (OHCHR) and the Committee to Protect Journalists documenting threats such as “your days are numbered” and “you will die” directed at media and civil society actors. In this case, a Trusted Partner report flagged the post as potentially contributing to violence. Meta had also been alerted to these risks by human rights defenders through reports, litigation, and stakeholder engagement.
The Board went on to examine the enforcement actions Meta took in relation to the post. It found it concerning that the operational distinction between threats that require context to enforce and those that do not was resulting in underenforcement, with more veiled threats remaining on Meta’s platforms.
To highlight this point, the OSB recalled cases such as UK Drill Music, Protest in India against France, and Knin Cartoon—where it expressed concerns over the lack of contextual analysis to assess whether content constituted “veiled threats to violence.” According to the Board, such analysis, regrettably, is only undertaken by Meta upon escalation. Moreover, the company’s current guidance for at-scale reviewers significantly limits the possibility of contextual analysis. The OSB noted that at-scale moderators are not instructed or empowered to identify content that violates the company’s escalation-only policies, such as the rule prohibiting “veiled threats” under the Violence and Incitement Policy at issue in this case (see Sudan’s Rapid Support Forces Video Captive). This means that the human reviewer in this case could not have exercised discretion or judgment in evaluating the content when it was initially reported, nor escalated the post to teams empowered to enforce the context-sensitive policy line.
Subsequently, the Board noted that since Meta does not record how many pieces of content are reviewed for veiled threats, it was unable to gauge either their frequency or the extent of possible underenforcement. Yet, even if such threats against human rights defenders occur infrequently, their consequences are severe—undermining defenders’ ability to continue their work, instilling fear, and sometimes leading to physical harm. Thus, the OSB recommended that Meta carry out regular, high-quality evaluations of its handling of these cases to identify areas for improvement. This should include building a clearer picture of how common veiled threats are across its platforms and assessing how effectively its systems detect and act on them. Such analysis could also generate more detailed indicators—for example, the rate of threats aimed at human rights defenders and targeted evaluation tools. As part of these efforts, Meta could pilot an automated system capable of flagging suspected veiled threats for review by specialized escalation teams.
Lastly, the Board noted that Trusted Partners were vital for identifying veiled threats and providing context for accurate enforcement, especially as Meta shifts from automated systems to user reports. The Board urged Meta to properly support and resource the program to ensure effective use of this expertise.
2. Compliance with Meta’s Human Rights Responsibilities
The Board then assessed the contested content and the possibility of removing it in light of Meta’s Corporate Human Rights Policy and international human rights standards—particularly Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects the right to freedom of expression without discrimination.
Legality (Clarity and Accessibility of the Rules)
The OSB held that Meta’s policies on threats of violence and veiled threats were sufficiently clear in this case but recommended improvements to meet the principle of legality under international human rights law, which requires accessible and precise rules that guide both users and reviewers. Specifically, the Board urged Meta to clarify that “coded statements” can include visual and verbal threats, not just written ones, and to ensure that policies treat text and imagery holistically. It also suggested revising the term “veiled threat,” which may downplay the seriousness of the harm, noting that while the post in this case required contextual understanding, its threatening message was clear.
Legitimate Aim
Any restriction on freedom of expression must pursue a legitimate aim under the ICCPR, such as protecting the rights of others. Meta’s Violence and Incitement Community Standard met this requirement by aiming to prevent offline violence and protect physical safety. This aligns with the protection of rights under Articles 6, 9, 19, and 21 of the ICCPR (rights to life, security of person, freedom of expression, and freedom of assembly, respectively).
Necessity and Proportionality
To comply with Article 19(3) of the ICCPR, restrictions on expression must be necessary and proportionate—meaning they must be the least intrusive means to achieve a legitimate protective aim. In this case, the Oversight Board applied the Rabat Plan of Action’s six-factor test to assess the risk of violence posed by the post, referring to previous decisions where it applied the same test (Iran Protests Slogan and Call for Women’s Protest in Cuba). Applying this framework, the OSB analyzed whether the content crossed the threshold into prohibited incitement. It assessed: (1) the context in which the speech occurred, (2) the speaker’s position and influence, (3) the intent to incite harm, (4) the content and form of the message, (5) the extent and reach of its dissemination, and (6) the likelihood and imminence of resulting harm. All factors are weighed together to distinguish protected expression from unlawful incitement.
Referring to the case at hand, the Board concluded that the post, which included a manipulated image of a human rights defender and a text echoing narratives used to incite violence in Peru, constituted a credible threat. Given the context of past attacks incited by similar accusations and the prominent role of the poster within a group known for intimidating defenders, the OSB concluded that no less restrictive measure than removal could adequately protect the rights to life and security of the targeted individual.
The Board further emphasized that threats—even when implicit or requiring contextual understanding—create a chilling effect on the freedom of expression and activities of human rights defenders, especially women, who face disproportionate harassment. In Peru, these dangers are compounded by political repression and legislative efforts to restrict NGO activities. The OSB noted that public figures and institutions have failed to provide adequate protection to defenders, and reports document increasing violence, including killings. When allowed to remain online, posts like the one in question normalize hostility toward civil society actors and worsen an already precarious environment for defenders. As such, removal was deemed by the Board as both necessary and proportionate to safeguard human rights and prevent further harm.
Moreover, the Board noted it had received reports that the content, despite being inaccessible following the deactivation of the user’s account, had been reposted from different accounts associated with the user. Following this decision, Meta should ensure that identical content is removed, unless it is shared in a condemning or awareness-raising context.
The Oversight Board overturned Meta’s original decision to leave up the content.
Policy Advisory Statement
The OSB recommended that Meta improve the clarity of its Violence and Incitement Community Standard by explicitly stating that “coded statements” constituting veiled threats—whether conveyed through written, visual, or verbal forms—are prohibited. This clarification is essential to ensure that text and imagery are interpreted together and consistently when evaluating threats.
To strengthen enforcement actions, the Board also urged Meta to conduct an annual accuracy assessment focused on detecting veiled threats, particularly those targeting human rights defenders. This should include analyses of false negatives (threats missed by enforcement systems) and false positives (political speech incorrectly removed), with a view to improving the identification of high-risk, low-prevalence threats.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
In this decision, the Oversight Board found that removal of the content was justified and necessary to protect the safety of human rights defenders, given the credible risk of offline violence posed by the post’s threatening imagery and caption. While the decision restricts expression, the OSB concluded that this was a proportionate and legitimate limitation in light of Meta’s own Violence and Incitement Policy and the company’s responsibilities under international human rights law. The Board emphasized the need to treat the image—depicting a human rights defender with blood digitally added to her face—and the caption—accusing NGOs of inciting violence—as a combined threat, especially within the wider context of escalating repression in Peru. It expressed concern over Meta’s initial failure to act, highlighting that this case exemplifies broader shortcomings in Meta’s enforcement of its veiled threats policy, which it found inconsistent with the company’s stated human rights commitments and stakeholder engagement efforts. The Board called for stronger safeguards and accountability to ensure effective protection for human rights defenders facing both online and offline threats.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”