Global Freedom of Expression

The Case of Video After Nigeria Church Attack

Closed Mixed Outcome

Key Details

  • Mode of Expression
    Electronic / Internet-based Communication
  • Date of Decision
    December 20, 2022
  • Outcome
    Oversight Board Decision, Overturned Meta’s initial decision
  • Case Number
    2022-011-IG-UA
  • Region & Country
    Nigeria, Africa
  • Judicial Body
    Oversight Board
  • Type of Law
    International Human Rights Law, Meta's content policies
  • Themes
    Facebook Community Standards, Objectionable Content, Violent and graphic content
  • Tags
    Meta Newsworthiness allowance, Oversight Board Content Policy Recommendation, Oversight Board Policy Advisory Statement, Oversight Board Enforcement Recommendation, Glorification of terrorism


Case Analysis

Case Summary and Outcome

The Oversight Board overturned Meta’s decision to remove an Instagram video showing the aftermath of a terrorist attack on a church in Nigeria, in which at least 40 people were killed and many more injured. Meta removed the content on the grounds that it violated the Violent and Graphic Content policy, the Bullying and Harassment policy, and the Dangerous Individuals and Organizations policy. The Board decided that the video should be restored to the platform to respect freedom of expression and allow people to “discuss the events that some states may seek to suppress” by raising awareness and documenting human rights abuses. Nonetheless, the Board considered that the video required a “disturbing content” warning screen to protect the privacy of the victims whose faces were visible. The Board also recommended that Meta provide users with a reasoned notification when a warning screen is applied to their content, and that it review and clarify the language of the Violent and Graphic Content policy to ensure it aligns with the moderators’ internal guidance.

*The Oversight Board is a separate entity from Meta that provides independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. Its decisions, except summary decisions, are binding unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.


Facts

On 5 June 2022, a terrorist attack occurred in a Catholic church in Owo, southwestern Nigeria, killing at least 40 people, with nearly 90 others injured. A few hours after the incident, a user posted a caption-less video on Instagram that showed the aftermath of the attack, with motionless and bloodied bodies on the church floor, some with their faces visible, and with the sounds of people screaming and crying in the background.

The video was identified by one of Meta’s Violent and Graphic Content Media Matching Service banks, which “automatically flags content previously identified by human reviewers as a violation to the company’s rules.” [p. 5] The bank then referred the video to an automated content moderation tool, a “classifier,” which assesses how “likely content is to violate a Meta policy.” The classifier allowed the video to stay on Instagram but determined that it “likely contained imagery of violent deaths.” It therefore automatically applied a “disturbing content” warning screen, in line with Meta’s Violent and Graphic Content policy. Meta did not notify the user that the warning screen had been added. [p. 5]
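The flow described in the previous paragraph can be pictured roughly as follows. This is a minimal, hypothetical sketch in Python: the function names, thresholds, and scores are illustrative assumptions and do not reflect Meta’s actual systems, which are not public.

# Hypothetical sketch of the moderation flow described above; all names,
# thresholds, and scores are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ModerationOutcome:
    remove: bool
    warning_screen: bool
    reason: str


def matches_bank(fingerprint: str, bank: set) -> bool:
    """A Media Matching Service bank flags content whose fingerprint matches
    material previously identified by human reviewers as violating."""
    return fingerprint in bank


def classify(video) -> dict:
    """A classifier estimates how likely content is to violate a policy.
    The scores returned here are placeholders."""
    return {"violent_death_imagery": 0.9, "visible_innards": 0.1}


def moderate(fingerprint: str, video, bank: set) -> ModerationOutcome:
    if not matches_bank(fingerprint, bank):
        return ModerationOutcome(remove=False, warning_screen=False, reason="no match")
    scores = classify(video)
    # More severe imagery (e.g. visible innards) could lead to removal
    # under the Violent and Graphic Content policy.
    if scores["visible_innards"] > 0.8:
        return ModerationOutcome(remove=True, warning_screen=False,
                                 reason="violent and graphic content")
    # Likely imagery of violent deaths receives a "disturbing content"
    # warning screen rather than removal.
    if scores["violent_death_imagery"] > 0.8:
        return ModerationOutcome(remove=False, warning_screen=True,
                                 reason="likely imagery of violent death")
    return ModerationOutcome(remove=False, warning_screen=False, reason="no action")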

Three users reported the content within the following 48 hours, but the reports were not reviewed and “were closed”. According to Meta’s later response to the Board, “the reports had mistakenly been assigned to a low-priority queue.” [p. 6]

After Meta’s policy team became aware of the attack, they added videos of the incident to an “escalations bank”—another of the Media Matching Service banks, which identify and automatically remove any content that matches material deemed violating by Meta’s specialist internal teams. The videos added to the escalations bank showed visible human innards.

Three days after the attack, Meta added a new video to the escalations bank, allowing Meta’s systems to compare it to content already on the platform. That video was nearly identical to the one posted on June 5. While this review was taking place, the user added to the post in question an English-language caption stating that “the church was attacked by gunmen” and that “multiple people were killed”, calling the incident “a sad day.” The caption also included multiple hashtags: some referred to firearms collectors, “firearm paraphernalia” and military simulations; others alluded to the sound of gunfire; and some referred to the live-action game “airsoft,” in which “teams compete with mock weapons.” [p. 6]

After the user added that caption, the escalations bank matched the user’s post to the newly added video and removed the content from the platform. The user appealed this decision. A human reviewer upheld it, which prompted the user to appeal before the Oversight Board.

After the Board selected this case, Meta reviewed the video that had been added to the escalations bank and determined that it did not violate any of its policies because it did not contain any “visible innards” or a “sadistic caption,” and removed it from the bank. Nevertheless, Meta upheld its decision to remove the content in this case because it determined that the hashtags included in the caption violated several of its policies, namely the Violent and Graphic Content, Bullying and Harassment, and Dangerous Individuals and Organizations policies. [p. 6]


Decision Overview

The main issue the Oversight Board analyzed was whether Meta’s decision to remove an Instagram post showing the aftermath of a terrorist attack on a Nigerian church was consistent with the company’s content policies, as well as its values and human rights responsibilities.

The user who appealed to the Board submitted that they shared the video to “raise awareness of the attack” and to “let the world know what was happening in Nigeria.” [p. 10]

In its submissions to the Board, Meta stated that the contested content violated the Violent and Graphic Content policy, the Bullying and Harassment policy, and the Dangerous Individuals and Organizations policy.

Regarding the Violent and Graphic Content policy, Meta explained that the policy requires that any “imagery” that includes the violent death of people be covered with a warning screen. While adult users have the option of “clicking through” to view the content, minors are not given that option. However, when such content is “accompanied by sadistic remarks,” it is removed to stop people from using the platform to “glorify violence or celebrate the suffering of others.” Meta explained that, had it not been for the caption, the video in the present case would have been allowed to stay on Instagram behind a “content warning screen,” as it did not show “visible innards.” Other videos of the same incident, by contrast, were removed under the aforementioned policy without having to meet the requirement of including sadistic remarks. [p. 10]

Meta said that its internal guidance for moderators defines sadistic remarks as “those that are enjoying or deriving pleasure from the suffering/humiliation of a human or animal.” The company provided examples of remarks that qualify as “sadistic,” which are categorized into those that exhibit “enjoyment of suffering” and “humorous responses”. Meta also said that such remarks can be expressed through hashtags and emojis. [p. 11]

Meta further explained that the allusion to the sound of gunfire in the present case was a “humorous response” to violence that “made light of the 5 June terror attack.” Meta also stated that the hashtags, referring to gunfire, firearm collectors, and firearm paraphernalia, could be “read as glorifying violence and minimizing the suffering of the victims” by “invoking humor and speaking positively about the weapons and gear used to perpetrate their death.” As for the hashtag referring to military simulations, it “compared the attack to a simulation, minimizing the actual tragedy and real-world harm experienced by the victims and their community.” Referring to the “airsoft” hashtags, Meta considered they compared the attack to “a game in a way that glorifies violence as something done for pleasure.” [p. 11]

Moreover, Meta submitted that the user’s statements in the caption that “they do not support violence” and that the incident was “a sad day” were not a clear indication that they were sharing the video to raise awareness of the attack. According to the company, even if the user intended to raise awareness, the use of “sadistic hashtags” would still result in the removal of the post. To support this, Meta submitted that the present case differed from the Sudan graphic video case (2022-002-FB-MR), in which the user made their intention to raise awareness very clear in the hashtags while sharing disturbing content. [p. 11]

Secondly, Meta submitted that the post violated the Bullying and Harassment policy, which “prohibits content that mocks the death of private individuals”. Meta considered the hashtags referencing the sound of gunfire in the present case a “humorous response to the violence shown in the video.” [p. 11]

Thirdly, Meta decided that the content violated the Dangerous Individuals and Organizations policy. For the company, the removed content “mock[ed] the victims of the attack and [spoke] positively about the weapons used,” and therefore was seen as a form of “praise” prohibited by the policy—especially since Meta had labeled the 5 June attack as a “multiple-victim violence event”. Therefore, any content “deemed to praise, substantively support or represent” that incident was prohibited under the aforementioned policy. [p. 12]

Meta also submitted to the Board that in this case the user did not receive a notification of the warning screen “because of a technical error”. However, Meta revealed after further questioning that, while Facebook users generally receive a reasoned notification of a warning screen, Instagram users do not.

Finally, Meta insisted that its actions in this case were necessary to strike an appropriate balance between its values and were consistent with international human rights law, since the policy on sadistic remarks was clear and accessible, sought to protect the rights of others, public order, and national security, and less severe actions would not “adequately address the risk of harm.” [p. 12]

Compliance with Community Standards

The Board analyzed the three content policies that were included in Meta’s submissions to assess whether removing the content was appropriate.

1. Violent and Graphic Content

As the Board noted, the Violent and Graphic Content policy provides that Meta “removes content that contains sadistic remarks towards imagery depicting the suffering of humans and animals.” It also states that Meta “allows graphic content with some limitations to help people condemn and raise awareness about important issues such as human rights abuses, armed conflicts or acts of terrorism.” The policy further requires warning screens to “alert people that content may be disturbing”, for example on content that portrays violent deaths. Directly below the policy rationale, the policy states that users cannot post “sadistic remarks towards imagery that is deleted or put behind a warning screen,” yet it gives no further explanation or examples of what constitutes sadistic remarks. [p. 13-14]

The Board’s majority decided that the hashtags in the caption of the removed content were not sadistic since they were “not used in a way that shows the user is enjoying or deriving pleasure from the suffering of others.” It also held that the hashtags should not be considered as commentary on the video—contrary to the Sudan graphic video case, in which the hashtags clearly conveyed the intention to share the graphic video to document human rights abuses. [p. 14]

The Board explained that the user’s hashtags, referring to “airsoft,” firearms, and military simulations, did not “show that the user was enjoying or deriving pleasure from the suffering of others” under the Violent and Graphic Content policy. According to the Board’s independent research, the airsoft hashtags are widely used among airsoft and firearm enthusiasts and are more directly associated with enthusiasm for the game. The Board further explained that Meta should have recognized that the hashtags were used to raise awareness of the attack among the people the user usually communicated with, as well as others, and that the hashtags were unrelated to the content of the video and the caption posted directly below it. [p. 14]

The majority of the Board also stated that it was likewise clear that the caption added by the user did not “indicate that they were enjoying or deriving pleasure from the attack,” as the user expressly stated that the attack represented a “sad day,” and that “they do not support violence.” Comments on the post made by other users, the Board noted, further indicated that the user’s followers understood the user’s intent to raise awareness. The user’s responses to those comments also showed “further sympathy with victims.” [p. 15]

While the Board accepted Meta’s argument that a user’s statement that they do not support violence should not always be “accepted at face value,” as some users may include such statements only to evade moderation, the Board decided that it should not be necessary for a user to “expressly state condemnation” when posting about terrorist acts, and that expecting them to do so “could severely limit expression in regions where such groups are active” (see the Board’s decision in the Mention of the Taliban in news reporting case). [p. 15]

2. Bullying and harassment policy

According to tier 4 of the Bullying and Harassment policy, content that “mocks the death or serious injury of private individuals” is prohibited. [p. 16]

The Board’s majority decided that the post and hashtags could not be considered “mockery” under this policy, as their purpose was not “an attempt at humor” but “an attempt to associate with others.” For the Board, this was evident in other users’ comments on the post and in the user’s responses to them (see the Board decision in the Nazi quote case). The Board held that, while it is important to take the perspectives of survivors and victims’ families into consideration, the “responses to this content indicate that those perspectives do not necessarily weigh against keeping content on the platform.” [p. 16-17]

3. Dangerous individuals and organizations policy

Tier 1 of the Dangerous Individuals and Organizations policy prohibits content that “praises, substantively supports or represents multiple-victim violence.” The Board’s majority found that the use of hashtags in the caption should not have been understood as glorifying violence, nor praising the attack, for the same reasons the content was not deemed sadistic. [p. 17]

Compliance with Meta’s values

The Board’s majority decided that removing the content was incompatible with Meta’s value of “Voice.” However, the Board as a whole agreed that the contested content “implicate[d] the dignity and privacy of the victims of the 5 June attack” and their families, as some of the victims’ faces were visible in the video. [p. 18]

The majority of the Board highlighted that the value of “Voice” had to be protected, particularly because the content sought to “draw attention to serious human rights violations, including attacks on churches in Nigeria.” The majority also stated that the user’s hashtags “[did] not contradict the user’s sympathy” towards the victims and that their use was “consistent with the user’s efforts to raise awareness”, without being “sadistic”. Hence, the Board considered that the restoration of the content, with a warning screen, was consistent with Meta’s values of “Voice”, “Privacy”, “Dignity” and “Safety.” [p. 19]

Compliance with Meta’s human rights responsibilities

To analyze whether Meta’s actions complied with its obligations under International Human Rights Law, the Board applied the three-part test provided by Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Under that test, restrictions on freedom of expression are valid only if they are prescribed by law (legality), pursue a legitimate aim, and are necessary and proportionate.

1. Legality

Referring to UN General Comment No. 34 (paras. 25-26) and The Santa Clara Principles on Transparency and Accountability in Content Moderation, the Board highlighted that Meta’s rules should be understandable, accessible, and clear, and should include “detailed guidance and examples of permissible and impermissible content” so that individuals know what they can and cannot do. [p. 20]

Upon analyzing each content policy, the Board unanimously held that the Violent and Graphic Content policy was not sufficiently clear as to how users can “raise awareness of graphic violence” in a manner consistent with the policy, or as to what content falls under Meta’s definition of “sadistic.”

Citing its recommendation in the Sudan graphic video case, the Board suggested that Meta amend the policy to “specifically allow imagery of people and dead bodies to be shared to raise awareness or document human rights abuses.” [p. 21]

While the Board did not find any legality concerns in the Bullying and Harassment policy, it held that, regarding Tier 1 of the Dangerous Individuals and Organizations standard, Meta “does not appear to have a consistent policy regarding when it publicly announces events that it has designated,” which in many cases affects users’ ability to know or understand why their content was removed. [p. 22]

2. Legitimate Aim

The Board held that each of the three policies at issue pursued the legitimate aim of protecting the rights of others, and cited previous cases where it reached that conclusion (Sudan graphic video, Pro-Navalny protests in Russia, and Mention of the Taliban in news reporting).

3. Necessity and proportionality

The Board’s majority held that removing the content was neither necessary nor proportionate, and that “it should be restored with a ‘disturbing content’ warning screen.” It decided that the warning screen was necessary to protect the victims’ and their families’ privacy since “the victims’ faces were visible and the location of the attack was known.” Furthermore, the Board considered that there was no “dismemberment” or “visible innards” in the contested content, which, if present, would justify removing the content or “[giving] it a newsworthiness allowance” to stay on the platform. Thus, the Board concluded that a warning screen was a proportionate measure to balance free expression and the rights of others, even if it would “reduce both reach and engagement with the content.” [p. 23]

The Board’s majority also held that the caption “as a whole, including the hashtags,” could not be considered sadistic, and “it would need to have more clearly demonstrated sadism, mockery or glorification of the violence for removal of the content to be considered necessary and proportionate.” [p. 24]

Policy Advisory Statement

The Board recommended that Meta review the wording of its Violent and Graphic Content policy to guarantee that it matches the moderators’ internal guidance. It also said that the company should notify Instagram users when a warning screen is added to their content, specifying the policy rationale justifying such action.

Dissenting Opinion

A minority of the Board considered that Meta’s decision to remove the video from the platform was correct since, in its opinion, the “shooting-related hashtags” were sadistic and “could traumatize” the victims or their families, and a warning screen would not lessen this effect. The minority also determined that Meta was right to be cautious in light of the terrorist attacks in Nigeria, especially in cases where the victims could be identified. [p. 15]


Decision Direction


Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.

Mixed Outcome

This decision has a mixed outcome. While the Board underscored the right to freedom of expression in contexts where content seeks to raise awareness and document human rights abuses, it weighed this right against the privacy of the victims and their families. In doing so, the Board decided the content should be restored with a warning screen, a less severe measure that justifiably restricts the content’s reach in accordance with international human rights standards.

Global Perspective


Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.

Table of Authorities

Related International and/or regional laws

  • ICCPR, art. 19

    The Board referred to this norm to highlight the importance of the right to freedom of opinion and expression, and how to legitimately restrict it.

  • UNHR Comm., General Comment No. 34 (CCPR/C/GC/34)

The Board analyzed Meta’s human rights responsibilities regarding the right to freedom of expression under Article 19 of the ICCPR and its General Comment.

  • UN Special Rapporteur on freedom of opinion and expression, A/HRC/38/35 (2018)

    The Board referred to this document to explain the legality requirement of the three-part test.

  • UN Special Rapporteur on freedom of opinion and expression, A/74/486 (2019)

    The Board referred to this document to explain the legality requirement of the three-part test.

  • ICCPR, art. 17

    The Board referred to this Article to highlight Meta’s human rights responsibilities regarding the right to privacy.

  • The Santa Clara Principles on Transparency and Accountability in Content Moderation (2018)

    The Board referenced these Principles to analyze whether Meta’s rules were understandable, accessible and clear.

  • OSB, Mention of the Taliban in News Reporting, 2022-005-FB-UA (2022)

    The Board referred to this case to discuss how users can comment about terrorist acts.

  • OSB, Russian Poem, 2022-008-FB-UA (2022)

    The Board highlighted this case to explain that the Graphic Content policy was unclear.

  • OSB, Sudan graphic video, 2022-002-FB-MR (2022)

    The Board highlighted this case to explain that the Graphic Content policy was unclear.

  • OSB, Colombian Police Cartoon, 2022-004-FB-UA (2022)

    The Board referred to this case to recommend that Meta improve its procedures when adding content to Media Matching Service banks.

  • OSB, Pro-Navalny protests in Russia, 2021-004-FB-UA (2021)

    The Board referred to this case to discuss the legitimate aim of the Bullying and Harassment Community Standard.

  • OSB, Nazi quote, 2020-005-FB-UA (2021)

    The Board cited this decision to highlight that comments left on content by its author, friends, and followers can indicate the poster’s likely intent.

Case Significance


Case significance refers to how influential the case is and how its significance changes over time.

The decision establishes a binding or persuasive precedent within its jurisdiction.

According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”

