Facebook Community Standards, Objectionable Content, Sexual Solicitation, Adult Nudity and Sexual Activity, Instagram Community Guidelines, Referral to Facebook Community Standards
Oversight Board Case of Reclaiming Arabic Words
United States
Closed. Expands Expression
On June 13, 2022, the Oversight Board overturned Meta’s original decision to remove an Instagram post that, according to the user, showed pictures of Arabic words which could be used in a derogatory way toward men with “effeminate mannerisms”. Meta initially removed the content for violating its Hate Speech policy but restored it after the user appealed. After being reported by another user, Meta removed the content again for violating its Hate Speech policy. According to Meta, before the Board selected this case, it submitted the content for an additional internal review which determined that it did not violate the company’s Hate Speech policy. Meta then restored the content to Instagram.
The Board considered that, while the post contained slur terms, the content was covered by an exception for speech "used self-referentially or in an empowering way" and by an exception allowing the quoting of hate speech to "condemn it or raise awareness." As a result, the Board found that the company's initial decision to remove the content was an error that was not in line with Meta's Hate Speech policy.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
In November 2021, a public Instagram account that identified itself as a space for discussing queer narratives in Arabic culture posted pictures on a carousel. The caption, which the user wrote in both Arabic and English, explained that each image depicted a different word that could be used in a derogatory way toward men with "effeminate mannerisms" in the Arabic-speaking world, including the terms "zamel," "foufou," and "tante/tanta." In the caption, the user stated that they did not "condone or encourage the use of these words" [p. 4] and explained that they had previously been abused with the slurs and that the post intended "to reclaim [the] power of such hurtful terms" [p. 2].
The content was viewed approximately 9,000 times, receiving around 30 comments and about 2,000 reactions. Within three hours of publication, two users reported it for “adult nudity or sexual activity” and “sexual solicitation,” respectively. Each report was dealt with separately by different human moderators. The moderator who reviewed the first report took no action. However, the moderator who reviewed the second report removed the content for violating Meta’s Hate Speech policy.
Consequently, the user appealed the removal of their post, and a third moderator restored the content to the platform. After Meta restored the content, another user reported it as "hate speech," and a fourth moderator reviewed the post and again removed it.
The user appealed a second time, and after a fifth review, another moderator upheld the decision to remove the content. Meta then notified the user of that decision, and consequently, they submitted an appeal to the Oversight Board.
The main issue for the Board to analyze was whether Meta's removal of the content, which contained slur terms, complied with its Hate Speech policy and whether the removal was in line with the company's values and its human rights responsibilities.
In their statement to the Board, the user explained that their account was a place "to celebrate queer Arab culture" [p. 7]. They added that although it is a "safe space," it has increasingly been targeted by homophobic trolls who write abusive comments and mass-report content.
They noted that their intent in posting the content was to celebrate "effeminate men and boys" in Arab society, who were often belittled with the derogatory language highlighted in the post. They clarified that, through the post, they had attempted to reclaim the derogatory words as a form of resistance and empowerment, and argued that they had made clear that they did not condone or encourage the use of the words in the pictures as slurs. The user likewise stated that they considered that their content complied with Meta's content policies, which expressly permit the use of otherwise banned terms when they are used self-referentially or in an empowering way.
In its rationale, Meta explained that it had initially removed the content since it contained "a derogatory term for gay people" [p. 8], a prohibited word on the slur list in its Hate Speech policy. However, it stated that it ultimately reversed its decision and restored the content since it fell within Meta's exceptions for "content that condemns a slur or hate speech, discusses the use of slurs including reports of instances when they have been used, or debates about whether they are acceptable to use" [p. 8]. Meta acknowledged that the context indicated that the user was drawing attention to the hurtful nature of the word and that the content was therefore non-violating.
Compliance with Meta’s content policies
After the Board examined the arguments presented by the parties, it explained that it had selected the case because the over-moderation of speech by users from persecuted minority groups is a serious and widespread threat to their freedom of expression. It then proceeded to analyze whether the content fell within an exception to the Hate Speech policy. The Board recalled its reasoning in the Wampum Belt and Two Buttons Meme decisions, where it had noted that it is not necessary for a user to explicitly state their intention in a post for it to meet the requirements of an exception to the Hate Speech policy; it is enough that the context of the post makes clear that the user is deploying hate speech terminology in a way the policy allows. In the present case, moreover, the Board noted that the user had expressly stated that they did not "condone or encourage" the offensive use of the slur terms in question and had instead attempted to resist and challenge the dominant narrative and reclaim the power of such hurtful terms.
Furthermore, the Board noted that the user's statement of intent and the surrounding context made clear that the content fell within the exception. Nevertheless, it highlighted that Meta had removed the content on three occasions.
Compliance with Meta’s values
The Board found that Meta's initial decision to remove the content was inconsistent with Meta's values of "Voice" and "Dignity" and did not serve the value of "Safety." It further held that while preventing the use of slurs to abuse people on its platforms was consistent with Meta's values, the company had not been consistent in applying the exceptions set out in the policy to expression from marginalized groups.
Compliance with Meta’s human rights responsibilities
The Board then proceeded to analyze if Meta’s initial decision to remove the content was consistent with its human rights responsibilities as a business by employing the three-part test in Article 19 of the International Covenant on Civil and Political Rights (ICCPR).
I. Legality (clarity and accessibility of the rules)
In the Board's view, the structure of the Hate Speech policy was not sufficiently clear. It identified multiple areas of opacity in the policy, including whether slurs designated for particular geographies were removed from the platform only when posted or viewed in those geographies or regardless of where they were posted or viewed. It further highlighted that information on the processes and criteria for developing the slur list and designating markets, especially regarding how linguistic and geographic markets were distinguished, was unavailable to users. Thus, it deemed that, without this information, users would find it difficult to assess which words could be considered slurs based solely on the policy's definition of slurs, which relied on subjective concepts such as inherent offensiveness and insulting nature.
II. Legitimate aim
The Board then explained that the policy at issue pursued a legitimate aim, namely the protection of the rights of others, including the rights to equality and to protection against violence and discrimination based on sexual orientation and gender identity.
III. Necessity and proportionality
Regarding the necessity and proportionality of the measure taken by the company, the Board considered that it was unnecessary to remove the content in this case, as the removal was a clear error that was not in line with the exceptions in Meta's Hate Speech policy. Further, it found that the removal was not the least intrusive instrument to achieve the legitimate aim because, in each review resulting in removal, Meta had taken down the entire carousel of ten photos for alleged policy violations in only one of them.
Given the importance of reclaiming derogatory terms for LGBTQIA+ people in countering discrimination, the Board considered that Meta should be particularly sensitive to the possibility of wrongful removal of the content in this case and of similar content on Facebook and Instagram. It recalled its conclusion in the Wampum Belt decision, where it had stated that it was essential to evaluate the performance of Meta's enforcement of Facebook's Hate Speech policy taking into account its effects on particularly marginalized groups.
In the Board's view, social media is often one of the only means for LGBTQIA+ people to express themselves in countries that penalize their free expression. It further noted that over-moderation of speech by users from persecuted minority groups seriously threatens their freedom of expression. Moreover, it considered that the errors in the present case, in which three separate moderators determined that the content violated the Hate Speech policy, indicated that Meta's guidance to moderators on assessing references to derogatory terms might be insufficient. Finally, the Board expressed its concern that reviewers may not have sufficient resources, in terms of capacity or training, to prevent the kind of mistake seen in this case.
In sum, the Board found that, while slur terms were used, the content was not hate speech because it fell within the exception in the Hate Speech policy for slur words "used self-referentially or in an empowering way," as well as the exception for quoting hate speech to "condemn it or raise awareness." Furthermore, it considered that the company's decision to remove the content was inconsistent with Meta's values and human rights responsibilities. Accordingly, the Board overturned Meta's original decision to remove the content.
Policy advisory statement:
To help moderators better assess when to apply exceptions for content containing slurs, the Board recommended that Meta translate its internal guidance into Modern Standard Arabic. It also stated that Meta should be more transparent about how it creates, enforces, and audits its market-specific lists of slur terms.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
The Oversight Board's decision expands expression by establishing that the content in question fell within an exception to Meta's Hate Speech policy, as it reported, condemned, and discussed the negative use of homophobic slurs by others and used them in an expressly positive context.
Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.
The Board analyzed Facebook's human rights responsibilities through this precept on freedom of expression and employed the three-part test established in this Article to assess whether Facebook's restriction of expression was justified; the Board referred to the General Comment for guidance.
The Board referred to this Article to highlight Facebook's human rights responsibilities as a business, particularly with regard to the right to non-discrimination.
While employing the three-part test to assess whether Facebook's restriction of expression was justified, the Board referred to the General Comment for guidance.
By referring to this case, the Board noted that, regarding artistic expression from Indigenous persons, it was not sufficient for the company to evaluate the performance of its enforcement of Facebook's Hate Speech policy as a whole. Instead, the Board stressed that Meta should take into account the effects on particularly marginalized groups.
Regarding the development of the slur list, the Board reiterated that in this case it had recommended that Meta be more transparent about the procedures and criteria for developing the list.
The Board recalled this case to emphasize the importance of context in assessing whether content falls into the exceptions to the Hate Speech policy.
The Board remarked that in this case, it had found it was not necessary for a user to explicitly state their intention in a post in order for it to meet the requirements of an exception to the Hate Speech policy.
The Board referred to this case to highlight that it had previously recommended Meta clarify to Instagram users that Facebook’s Community Standards apply to Instagram in the same way they apply to Facebook, with some exceptions.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”