The Oversight Board overturned Meta’s decision to remove a user’s reply on Threads that included the phrase “drop dead” in criticism of the Japanese Prime Minister. The user posted in response to a screenshot of a news article about a political funding scandal involving the Prime Minister’s Liberal Democratic Party members, who were accused of failing to report fundraising revenues. Meta removed the content under its Violence and Incitement Policy, considering the post’s strong language a violent threat. However, the Board found that in the context of Japanese social media, such language represented a common form of non-literal political expression aimed at exposing alleged political corruption and was unlikely to incite harm. The Board reiterated its concern that Meta’s Violence and Incitement Policy fails to clearly distinguish between figurative language and actual threats of violence. Although two moderators fluent in Japanese and familiar with the local sociopolitical context reviewed the content, they still misapplied the policy. The Board concluded that the removal was unnecessary and inconsistent with Meta’s human rights responsibilities, and emphasized the need for clearer guidance to help reviewers assess local language and context while aligning internal guidelines with the policy rationale.
* The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook, Instagram and Threads should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
In January 2024, a user publicly replied to a Threads post that contained a screenshot of a news article quoting Japanese Prime Minister Fumio Kishida. The article referred to a political funding scandal involving Kishida’s Liberal Democratic Party members, who were accused of failing to report fundraising revenues. Kishida stated that the allegedly unreported revenues “remained intact and were not a slush fund.” The main Threads post contained an image of the Prime Minister and a caption accusing him of tax evasion.
In the reply, the user criticized the Prime Minister’s explanation, calling for accountability before Japan’s legislative body. The reply consisted of the interjection “hah,” followed by several Japanese hashtags that included the phrase “死ね” (“shi-ne,” translated as “drop dead/die”). These hashtags referred to the Prime Minister as a tax evader and included derogatory language, such as #dietaxevasionglasses and #diefilthshitglasses.
Both the post and the reply were published while Prime Minister Kishida was addressing the funding scandal in parliament. At that time, prosecutors had recently indicted politicians from the Liberal Democratic Party for failing to report fundraising earnings, though Kishida himself was not indicted. He had served as Prime Minister since October 2021 and announced that he would not seek re-election in the Liberal Democratic Party’s leadership election, which occurred on September 27, 2024.
The user’s comment received no replies or likes. The post was reported once under Meta’s Bullying and Harassment Policy for allegedly containing “calls for death” towards a public figure. Due to moderation delays, a human reviewer assessed the content approximately three weeks later and removed it from Threads under Meta’s Violence and Incitement Policy. The affected user appealed this decision to Meta, but a second human reviewer upheld the removal. The user then appealed to the Oversight Board (OSB). After the Board decided to hear the case, Meta reviewed its decision, determined that the content removal was an error, and restored the content to Threads.
In reaching its decision, the OSB took into account the broader political and cultural context of Japan. Research commissioned by the Board showed a prevailing sentiment of disapproval and criticism on Threads directed at the Prime Minister over the tax fraud allegations, while other posts used the phrase “死ね” (meaning “drop dead” or “die”), expressing similar disapproval. In addition, experts consulted by the Board noted that Japanese citizens commonly use social media to express political dissent.
According to linguistic experts consulted by the OSB, words such as “死ね” (drop dead/die) are typically used figuratively on Japanese social media to express intense frustration or anger, not as literal threats. While political violence in Japan is rare, recent high-profile incidents—the assassination of former Prime Minister Shinzo Abe in 2022 and the 2023 bombing attempt during a campaign speech by Prime Minister Kishida—have heightened public sensitivity.
In 2017, the UN Special Rapporteur on Freedom of Expression raised concerns about freedom of expression in Japan. These concerns related to the use of direct and indirect pressure by government officials on media, the limited capacity to comment on historical events, and the additional restrictions on information access based on assertions of national security.
The primary issue before the Oversight Board was whether Meta’s removal of a user’s reply on a Threads post, which referred to Japanese Prime Minister Fumio Kishida with hashtags using the phrase “drop dead/die,” complied with Meta’s content policies and human rights responsibilities.
The user argued that their comment was merely political critique, criticizing the Liberal Democratic Party government for allegedly enabling tax evasion. Additionally, the user alleged that Meta’s removal infringed on freedom of speech in Japan by suppressing legitimate criticism of a public figure.
Meta informed the Board that it initially removed the user’s comment because it contained the phrase “死ね” (“drop dead/die”) in hashtags, which it later determined was a figurative political expression rather than a credible death threat. The company explained that it generally cannot distinguish at scale between statements containing actual threats of death and figurative language intended to make a political point, which is why it initially removed the content. Meta notified the Board that Prime Minister Kishida is considered a public figure under the company’s Violence and Incitement and Bullying and Harassment policies, while the user replying to the post was not considered a public figure. The company also informed the Board that Prime Minister Kishida is also considered a “high-risk person” under the Violence and Incitement Community Standard.
Under its Violence and Incitement Policy, Meta prohibits “threats of violence that could lead to death (or other forms of high-severity violence).” Although the hashtags translated as “#dietaxevasionglasses” and “#diefilthshitglasses” both contained the word “die,” Meta distinguishes between the phrases “die” and “death to,” where the latter is explicitly prohibited. Meta acknowledged that such distinctions can be difficult, especially across languages. However, even if “die” were treated as a call for death, Meta concluded that in this case the speech was a non-literal threat that did not violate the spirit of the policy. The spirit of the policy allowance permits enforcement flexibility where a literal application would contradict the policy’s intent. Meta deemed the threat non-literal because the other words in the hashtags and the reply itself referred to political accountability during hearings before the Japanese legislature. Thus, the call for a political leader to be held accountable before a legislature indicated that the death threat was figurative rather than literal. On these grounds, Meta determined that the content did not infringe the Violence and Incitement Policy. Regarding the Bullying and Harassment Policy, Meta concluded the content did not infringe it since the contested post did not “purposefully expose” Prime Minister Kishida even if the threat were literal: the user neither tagged him nor replied to his post, nor was the comment posted directly on his page.
1. Compliance with Meta’s Content Policies
The Board assessed the removal under the two cited policies:
Violence and Incitement Community Standard
Under the Violence and Incitement Community Standard, Meta prohibits threats of violence that could lead to death or other high-severity harm, particularly when directed at “high-risk individuals.” In reaching its original decision, Meta applied this policy to the user’s comment, interpreting it as a credible threat. However, the Board found that the content did not violate the company’s policy as the phrase was used in a non-literal context and did not represent a credible threat. Linguistic experts consulted during the case clarified that the phrase was figurative and commonly used in Japanese political discourse as a statement of dislike and disapproval rather than to incite violence.
Bullying and Harassment Community Standard
Under the Bullying and Harassment Community Standard, Meta prohibits content that purposefully targets and exposes public figures to harassment. The OSB concluded that the content also did not breach this policy, as the user did not directly address the Prime Minister, post the comment on his page, or tag him. Thus, the content failed to meet the policy’s threshold for harassment, as it was not deemed to have “purposefully exposed” a public figure.
2. Compliance with Meta’s Human Rights Responsibilities
The Board found that removing the content was inconsistent with Meta’s human rights obligations, particularly regarding freedom of expression as outlined in Article 19 of the International Covenant on Civil and Political Rights (ICCPR). This article protects public debate concerning public figures in the political domain and public institutions, mandating that any restrictions on expression imposed by a State must satisfy the requirements of legality, legitimate aim, and necessity and proportionality. The OSB used this “three-part test” to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights, which Meta itself has committed to in its Corporate Human Rights Policy. The Board emphasized that while companies may not bear the same obligations as governments, they must still protect their users’ rights to free expression. In addition, the Board reiterated the significance of political speech directed at heads of state, recognizing that such leaders are subject to legitimate criticism and political opposition, even when speech could be considered offensive.
Legality (Clarity and Accessibility of the Rules)
The OSB held that Meta’s Violence and Incitement Policy does not meet the legality requirement under Article 19 of the ICCPR, which requires restrictions on expression to be provided by law, accessible, and formulated with sufficient precision and clarity to allow users to regulate their conduct accordingly. Meta’s policy prohibits “threats of violence that could lead to death (or other high-severity harm),” particularly when directed at “high-risk individuals.” However, the Board found that the terms “threat,” “high-risk person,” and expressions like “death to” or “die” were insufficiently defined and unclear. Specifically, the policy does not clearly distinguish between non-literal political speech and credible threats, which further complicates enforcement decisions made by at-scale human reviewers.
Although the policy allows for contextual evaluation, at-scale human moderators lack discretion to assess intent and thus end up over-removing protected non-literal speech, the OSB opined. In other words, the Board emphasized that such reviewers cannot assess intent or credibility, which leads to potential over-enforcement. It reiterated the need for Meta to clarify that rhetorical threats are generally allowed except when they are directed at high-risk persons, particularly political leaders.
Moreover, to the OSB, the policy definitions of “high-risk individual” and “public figure” were insufficiently clear, resulting in uncertainty about the treatment of various categories. Although Meta raised concerns that publishing such definitions would allow for the circumvention of the policy, the Board recommended a balanced approach: publicly provide a general definition of high-risk individuals alongside an indicative list that includes heads of state, political candidates, journalists, and individuals who have previously been targeted—building on earlier guidance from the Iran Protest Slogan case.
Legitimate Aim
Any restriction on freedom of expression must pursue one or more of the legitimate aims listed in Article 19 of the ICCPR. The OSB noted that the Violence and Incitement Policy aims to prevent potential offline harm by removing content that poses “a genuine risk of physical harm or direct threats to public safety.” [p. 15] It thus pursues the legitimate purpose of safeguarding the right to life and the right to security of person, the Board said.
Necessity and Proportionality
Under Article 19(3) of the ICCPR, any restrictions on expression must be necessary and proportionate. To assess whether Meta’s restriction in this case met this criterion, the OSB applied the Rabat Plan of Action’s six-part test. It argued that Meta’s initial decision to remove the content, pursuant to its Violence and Incitement Policy, was unnecessary, as it was not the least intrusive measure to protect Prime Minister Kishida’s safety.
The Board used the Rabat Plan of Action’s six test criteria—context, speaker, intent, content, extent of speech, and likelihood of harm—to assess the content under this prong of the three-part test. These factors provide valuable guidance in assessing the credibility of threats and their real potential to incite violence. To the OSB, the content in question was posted during a politically tense time, amid a tax fraud scandal and allegations of tax evasion concerning Prime Minister Kishida. Experts clarified that despite rising political criticism in Japan, there was no direct correlation between online threats and recent violence against Japanese politicians. Moreover, the Board noted that the user was an ordinary individual with a small following, and that the comment received little to no engagement, so its risk of harm was minimal. The OSB also underscored that the user’s comment denounced corruption and did not call for harm or violence. While the hashtags included the phrase “die,” they were embedded in a broader context of political criticism.
Although Meta ultimately restored the content upon escalation, the initial removal reflected systemic issues in at-scale enforcement, the OSB concluded. It emphasized that Meta’s automated tools and general reviewer guidelines failed to account for local context or rhetorical nuance, resulting in over-enforcement and disproportionate restrictions on political speech. In this case, despite the comment’s non-literal critique of a public figure during a political scandal, Meta’s at-scale enforcement failed to reflect its own Violence and Incitement Policy, which allows for figurative language when context makes it clear that no real threat is intended.
The Board acknowledged that threats of violence are difficult to assess, as they can be highly context-dependent, particularly at a global scale. While escalation-only policies—where content is assessed by subject matter experts—offer better threat evaluations, Meta lacks the expert capacity to scale such reviews. Moreover, escalation relies on external triggers like reports from Trusted Partners or the media, as seen in the Sudan’s Rapid Support Forces Video Captive case. Without sufficient data on content prevalence, the OSB was unable to assess to what extent such under-enforcement occurs.
In previous high-risk scenarios, like the ones outlined in the Colombia Protests and Iran Protest Slogan cases, Meta implemented tailored protections such as the Crisis Policy Protocol (CPP) and the Integrity Product Operations Center (IPOC). However, no mechanisms of this sort were implemented in this case. Meta told the Board that even a major event, like the assassination of former Prime Minister Abe, did not automatically trigger enhanced enforcement unless larger risk signals were involved. Rather, enforcement relied on default policies, which undermined proportionality.
The OSB considered that the current reviewer guidelines rigidly instruct the removal of all “death to” statements targeting high-risk individuals without allowing for rhetorical or non-serious uses. In addition, the Board reiterated its concerns, drawing on cases like Iranian Woman Confronted on Street, Iran Protest Slogan, and Reporting on Pakistani Parliament Speech, where it affirmed that such inflexibility, especially when reflected in training data for automated tools, would lead to widespread over-enforcement of figurative threats as well as the suppression of lawful political speech. While those cases involved protests or elections, the core issue was the same: restriction of political speech without a credible threat of violence. Accordingly, the OSB stressed that Meta must protect such speech and avoid unjustified barriers to democratic discourse.
In light of the aforementioned arguments, the Oversight Board overturned Meta’s original decision to remove the content.
3. Policy Advisory Statement
The Board recommended that Meta clarify its Violence and Incitement Policy by providing a general definition of “high-risk persons” with illustrative examples, ensuring that it reflects a balance between protecting public figures and allowing political expression. Moreover, Meta should clearly distinguish high-risk persons from “public figures”—by hyperlinking to its Bullying and Harassment definition of public figures in the Violence and Incitement Policy, and in any other community standards where public figures are referenced. The OSB also recommended updating internal guidelines for at-scale reviewers about calls for death using the specific phrase “death to” when directed against high-risk persons. Such updates should account for local contexts where the phrase is used casually, not as a real threat.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
The Oversight Board’s decision to overturn Meta’s removal of the user’s comment aligns with international standards on freedom of expression, specifically Article 19 of the ICCPR. In the context of Japan’s heightened political sensitivity, following the 2022 assassination of former Prime Minister Shinzo Abe and the 2023 attempted attack on Prime Minister Kishida, public discourse has grown increasingly tense. Through its decision in this case, the Board reaffirmed the significance of political speech directed at heads of state, recognizing that such leaders are subject to legitimate criticism and political opposition, even if the speech could be considered offensive. It recognized the reply as a form of non-literal political critique rather than a credible threat. This approach upholds proportionality in content moderation, thereby reinforcing the fundamental principle of freedom of expression in democratic societies.
Case significance refers to how influential the case is and how its significance changes over time.
Standard I: The decision establishes a binding or persuasive precedent within its jurisdiction:
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”