Facebook Community Standards, Objectionable Content, Adult Nudity and Sexual Activity, Safety, Bullying and Harassment
Oversight Board Case of Explicit AI Images of Female Public Figures
India, United States
Closed, Mixed Outcome
The Oversight Board found that AI-generated nude images of female public figures violated Meta’s Bullying and Harassment policy and that their removal was consistent with the company’s human rights responsibilities. In a split outcome, the Board overturned Meta’s original decision to leave up an AI-generated nude image of an Indian public figure, highlighting flaws in the automated appeals system that closed the case without human review. It also upheld Meta’s removal of a similar image depicting an American public figure. The Board emphasized that such content causes severe harm, particularly to women and girls, who are disproportionately targeted, and concluded that removal is the only effective and proportionate response to protect individuals’ rights to privacy, dignity, and safety. It recommended that Meta clarify and consolidate its policies on manipulated sexual content, adopt clearer language reflecting the non-consensual nature of this abuse, and improve reporting mechanisms for victims, especially those without public visibility.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
The Oversight Board (OSB) assessed two cases involving AI-generated nude images of female public figures. In the first case, a user posted an AI-manipulated image of a nude woman whose face resembled that of an Indian public figure. The image appeared to be based on a photo of the woman in beachwear that was also included in the same Instagram post. The account that shared the image claimed to publish only AI-generated images of Indian women and used hashtags suggesting the image was artificially generated.
A user reported the post to Meta as pornography, but the report was automatically closed after 48 hours without human review. The user then appealed Meta’s decision to leave the content up, but the appeal was also automatically closed. The user subsequently brought the case to the Oversight Board. After the Board selected the case for review, Meta reassessed the content and removed it for violating its Bullying and Harassment policy. Meta also disabled the account that posted the content and added the image to a Media Matching Service (MMS) bank, which helps detect and remove previously identified violating content.
The second case involved an AI-generated image of a nude female American public figure being groped. The image was posted in a Facebook group dedicated to AI creations, and the group’s name was included in the caption. Meta removed the image for violating its Bullying and Harassment policy, classifying it as “derogatory sexualized photoshop or drawings.” A similar image had previously been posted and removed after Meta’s subject matter experts determined it also violated the same policy. This image was likewise added to the MMS bank. The user who posted the image appealed its removal, but the appeal was automatically closed. The user then brought the case to the Board, requesting that the content be restored.
To contextualize the case, the Oversight Board highlighted the growing threat and severe harm associated with deepfake intimate imagery—often AI-generated, sexualized images that falsely depict real people, mostly women. These manipulated visuals are part of a rapidly escalating global trend, with deepfake pornography accounting for 98% of all deepfake content online and disproportionately targeting women (99% of cases). Public comments to the Board (e.g., Witness and Women in International Security) noted that such content can carry particularly damaging consequences in conservative cultural contexts and can be created with increasing ease, making nearly anyone with an online presence a potential target.
The emotional and psychological toll of image-based sexual abuse, including deepfakes, is profound. Victims report a range of harmful effects, including shame, paranoia, and loss of dignity. Research indicates that the trauma inflicted by deepfake sexual content may rival that of real non-consensual sexual images. Cases from multiple countries—including Pakistan, where an 18-year-old woman was reportedly killed over a doctored image—illustrate the lethal consequences such content can have, especially in communities with rigid social norms.
Despite some national efforts to regulate deepfakes, such as in India and the United States, the Board acknowledged that legal systems often move too slowly to effectively contain the spread of such content. Civil society organizations such as India’s Breakthrough Trust also warned of systemic secondary victimization, in which survivors face blame rather than support from legal institutions. As a result, many public comments emphasized that social media platforms must act as a first line of defense, investing in prompt and robust content moderation systems.
While Meta has made notable progress in addressing non-consensual intimate image sharing (NCII) through advanced detection and removal technologies, experts and public commenters stressed that similar attention must be given to deepfake intimate imagery. Although NCII involves real content and deepfakes are fabricated, both forms constitute image-based sexual abuse and demand proactive, consistent, and culturally informed responses from digital platforms.
On 25 July 2024, the Oversight Board issued a decision on the matter. The main issue it analyzed was whether Meta’s original decision not to remove an AI-generated nude image of an Indian female public figure in the first case, and its decision to remove a similar AI-generated image of an American public figure in the second case, were consistent with its content policies and human rights obligations.
In the first case, the reporting user raised concerns about AI-generated explicit images of celebrities on Instagram, particularly given the platform’s accessibility to teenagers. The content creator did not submit a statement. In the second case, the poster claimed the image was intended to entertain and not to harass or degrade the public figure.
Meta explained that both cases were assessed under its Bullying and Harassment and Adult Nudity and Sexual Activity policies. It concluded that both images violated the Bullying and Harassment policy, but only the second image—depicting a person being groped—violated the Adult Nudity and Sexual Activity policy. According to Meta, violations are assessed on a case-by-case basis using context cues and external signals from third-party fact-checkers and Trusted Partners.
Meta considered that the image in the first case was clearly AI-generated and sexually suggestive, with hashtags confirming its synthetic origin. The second image was flagged as AI-generated due to visual inconsistencies and its posting in a Facebook group dedicated to AI creations. The second image also breached Meta’s rules prohibiting depictions of breast-squeezing, whereas the first did not meet the threshold for nudity violations. Both were ultimately removed.
Meta justified the removals as necessary to uphold its values of safety, privacy, dignity, and voice, concluding that the content had minimal artistic value and primarily served sexual or exploitative purposes. In May 2024, Meta updated its Adult Nudity and Sexual Activity policy to treat photorealistic imagery as real unless there is evidence to the contrary. This includes realistic AI-generated depictions of nudity involving real people and aims to better address non-consensual and underage content. Under the updated standard, both images would still warrant removal.
Meta further explained that its Media Matching Service (MMS) bank hashes violating content to facilitate automatic removal of known material. In the second case, the image had already been banked. In the first case, it had not been added until the Board specifically inquired, highlighting an inconsistency. Meta cited the absence of media signals in the first case and noted that not all violating content is immediately banked due to concerns about over-enforcement.
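The decision does not describe how MMS banks work internally. As a rough, hypothetical sketch of the general hash-and-match idea (not Meta’s actual implementation), previously identified violating images are fingerprinted and stored, and new uploads are compared against that bank; production systems typically rely on perceptual hashes that survive re-encoding and small edits, whereas the simple example below uses an exact cryptographic hash purely for illustration.

```python
import hashlib

# Hypothetical sketch of a media-matching bank. Meta's MMS is not public;
# this only illustrates the general workflow of banking hashes of known
# violating images and checking new uploads against them. A real system
# would use a perceptual hash (e.g., PDQ) so re-encoded copies still match.

class MediaMatchingBank:
    def __init__(self):
        self._hashes: set[str] = set()

    @staticmethod
    def _fingerprint(image_bytes: bytes) -> str:
        # Placeholder fingerprint: exact-match SHA-256 over the raw bytes.
        return hashlib.sha256(image_bytes).hexdigest()

    def add(self, image_bytes: bytes) -> None:
        """Bank an image that reviewers have already found violating."""
        self._hashes.add(self._fingerprint(image_bytes))

    def matches(self, image_bytes: bytes) -> bool:
        """Return True if an upload matches previously banked content."""
        return self._fingerprint(image_bytes) in self._hashes


# Once an image is banked, identical re-uploads can be flagged automatically,
# without requiring a new user report.
bank = MediaMatchingBank()
violating_image = b"raw bytes of an image already found violating"
bank.add(violating_image)
assert bank.matches(violating_image)
```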
(1) Compliance with Meta’s content policies
The Board analyzed whether the two posts violated the Bullying and Harassment policy and the Adult Nudity and Sexual Activity policy. It concluded that both images violated Meta’s prohibition on “derogatory sexualized photoshop” under the Bullying and Harassment policy. Both had been altered to display the face of a real person alongside a nude or nearly nude body, with clear contextual clues indicating the content was AI-generated. The post from the first case included hashtags indicating AI generation, and the second case was posted in a Facebook group dedicated to AI imagery.
Additionally, the OSB agreed with Meta that only the second case violated the Adult Nudity and Sexual Activity policy, as it depicted the woman’s breast being squeezed. According to the Board, the image in the first case did not violate this policy as it did not fulfill Meta’s definition of a close-up of nude buttocks.
(2) Compliance with Meta’s human rights responsibilities
The Board applied the three-part test to evaluate whether Meta fulfilled its obligations regarding users’ freedom of expression in accordance with Article 19 of the International Covenant on Civil and Political Rights (ICCPR).
Legality (clarity and accessibility of the rules)
The principle of legality requires that rules limiting expression be clear, accessible, and precise, so that individuals can understand them and regulate their behavior accordingly, and that they not grant unchecked discretion to those enforcing them.
The OSB found that while the term “derogatory sexualized photoshop” should have been clear to the users posting the content in the two cases, it is generally unclear to most users. Meta explained that this term refers to manipulated images that are sexualized in an unwanted and derogatory way, such as combining a real person’s head with a nude or nearly nude body. The Board suggested that the term “non-consensual” would be a clearer descriptor than “derogatory,” to emphasize the unwanted nature of these manipulations.
Additionally, it held that the term “photoshop” was outdated and too narrow, as it does not cover the wide range of media manipulation techniques, especially those using generative AI. Thus, the OSB advised Meta to update its language to encompass newer, broader editing methods in a way that is clear to both users and moderators.
The Board also recommended that these prohibitions be included in the Adult Sexual Exploitation Community Standard rather than the Bullying and Harassment policy, as users may not intuitively associate such images with bullying. It stressed that the clarity of the rules is especially important when the same content could be permissible if it were consensually created and shared, as in the first case involving the Indian public figure.
The RATI Foundation for Social Change, an Indian NGO supporting victims of sexual violence, said in a public comment that it was unaware of the “derogatory sexualized photoshop” prohibition under the Bullying and Harassment policy. Instead, it reported such content under policies like Adult Nudity and Sexual Activity, Child Exploitation, and Adult Sexual Exploitation.
The OSB underscored that placing this prohibition under the Bullying and Harassment policy assumes users post AI-generated explicit images to harass, which may not always be true. External research commissioned by the Board showed that users post such content for various reasons, including building an audience, monetizing, or redirecting users to off-platform sites, without the intent to harass. It also highlighted third-party research that revealed motivations like “fun,” “flirting,” or trading images, which suggested the current policy’s focus on harassment is confusing for both posters and reporters.
The Board recommended moving these prohibitions to the Adult Sexual Exploitation policy, which focuses on non-consensual image sharing. In its view, this shift would better address the issue by centering the lack of consent and the harm caused by such content. The OSB also suggested renaming the policy to something clearer, such as “Non-Consensual Sexual Content,” to better reflect its scope.
Legitimate aim
The Board considered that Meta’s prohibition on deepfake intimate imagery aims to protect several rights: the right to physical and mental health, as such content is extremely harmful to victims; freedom from discrimination, as evidence shows that this content disproportionately targets women and girls; and the right to privacy by preventing the unauthorized creation and dissemination of personal images.
Necessity and proportionality
Under Article 19(3) of the ICCPR, restrictions on expression must be necessary and proportionate. This means they must be suitable for achieving their protective purpose while being the least intrusive means available to accomplish that goal.
The Board concluded that prohibiting and removing deepfake intimate imagery were necessary and proportionate measures to protect the rights of those affected. The non-consensual sexualization of a person’s image causes severe harm to the person depicted, undermining their rights to privacy and to protection from mental and physical harm.
Given the severity of these harms, the OSB concluded that content removal was the only effective solution, as less intrusive measures, such as labeling, would be inadequate. In the Altered Video of President Biden decision, the Board had recommended labeling manipulated content to prevent users from being misled. In this case, however, the OSB found that labeling would not address the harm caused by the creation, sharing, and viewing of these images. Additionally, it highlighted that this content disproportionately targets women and girls, making it a discriminatory issue.
The Board also reviewed Meta’s use of MMS banks, noting that while the image in the second case had been added to one, the image in the first case was only added after the Board raised the issue. The OSB expressed concerns that many victims of non-consensual deepfake intimate imagery may not have a public profile and face the burden of continuously reporting these images, which is both traumatizing and resource-intensive. On this point, it said that Meta’s reliance on media reports to signal non-consensual imagery is helpful for public figures but insufficient for private individuals who lack media coverage. The Board stressed that Meta should also rely on other signals to detect non-consensual depictions of private individuals.
Regarding the proportionality of Meta’s actions, the OSB considered whether all users who shared these images should receive penalties, not just the original poster. The RATI Foundation pointed out in its public comment that offenders often use multiple accounts, and only one might be penalized, allowing the others to continue posting. The Board underlined that while MMS banks can help address this issue, their utility is limited to images already in the database, a significant constraint given that generative AI can produce a constant stream of “new” images. Furthermore, the OSB acknowledged that applying strikes to all users who share such content could strengthen enforcement but might lead to unjust penalties for those unaware that the images are non-consensual. Meta’s MMS bank was initially configured not to apply strikes in order to avoid over-enforcement, but the Board suggested reconsidering this approach, given that users can now appeal these decisions.
Finally, the OSB analyzed whether Meta should treat non-consensual intimate imagery sharing and deepfake intimate imagery separately within its policies. Meta argued that the two categories differ since non-consensual intimate imagery requires explicit signals of non-consent, whereas derogatory sexualized photoshopping does not. However, the Board suggested that AI-generated or manipulated sexual content could itself be seen as a signal of non-consent, regardless of whether the content was produced commercially or privately.
The OSB held that Meta’s policies already have significant overlap, which may confuse users. At the time of this decision, Meta’s definition of intimate imagery under the Adult Sexual Exploitation policy included private sexual conversations and imagery from private settings, including manipulated imagery with nudity or sexual activity. The Board discussed the possibility of presuming that all AI-generated sexual images are non-consensual, which aligns with Meta’s current handling of derogatory sexualized “photoshop”. It considered that while this could occasionally lead to the removal of consensual content, as demonstrated by the Breast Cancer Symptoms and Nudity and Gender Identity and Nudity decisions, Meta already assumes AI-created sexualized content is unwanted.
Ultimately, the OSB concluded that merging non-consensual intimate imagery and deepfake imagery policies could leverage Meta’s success in combating non-consensual intimate image sharing. However, the Board noted that adopting the presumption that all intimate imagery is non-consensual risked over-enforcement of non-violating content. Hence, it concluded that this approach was not feasible without risking the widespread removal of legitimate content due to Meta’s reliance on automated tools.
Right to Remedy
The OSB expressed concern over the auto-closing of appeals in the first case, where both the original report and the subsequent appeal regarding deepfake intimate imagery were closed without review. Meta explained to the Board that content reports, excluding those concerning Child Sexual Abuse Material, are eligible for auto-closing if technology does not detect a high likelihood of a violation and the report is not reviewed within 48 hours.
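Meta has not published its triage logic; the following is a minimal, hypothetical sketch of the auto-closing rule as the Board describes it (a report is closed without human review if automation does not flag a high likelihood of violation and no reviewer reaches it within 48 hours). The score threshold and parameter names are illustrative assumptions, not Meta’s actual configuration.

```python
from datetime import datetime, timedelta, timezone

# Illustrative reconstruction of the auto-closing rule described in the
# decision, not Meta's actual code. The 48-hour window and the CSAM
# exclusion come from the decision; the threshold value is an assumption.
AUTO_CLOSE_WINDOW = timedelta(hours=48)
HIGH_LIKELIHOOD_THRESHOLD = 0.9  # assumed value, for illustration only

def should_auto_close(report_created_at: datetime,
                      violation_score: float,
                      is_csam: bool,
                      now: datetime) -> bool:
    """Return True if an unreviewed report would be closed without human review."""
    if is_csam:
        # Reports of Child Sexual Abuse Material are excluded from auto-closing.
        return False
    if violation_score >= HIGH_LIKELIHOOD_THRESHOLD:
        # A detected high likelihood of violation keeps the report queued for review.
        return False
    return now - report_created_at >= AUTO_CLOSE_WINDOW

# Example: a deepfake report with a low automated score, still unreviewed
# after 49 hours, would be auto-closed under this rule.
created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(should_auto_close(created, violation_score=0.2, is_csam=False,
                        now=created + timedelta(hours=49)))  # True
```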
The OSB underlined that many users might be unaware that their appeals may never receive human review, leaving victims of deepfake intimate imagery without recourse. While recognizing the challenges of large-scale content moderation and the necessity of automated systems, the Board stressed that using auto-closing in cases involving deepfakes or non-consensual intimate imagery could severely undermine victims’ privacy and safety.
Policy advisory statement
The OSB recommended that Meta move the prohibition on “derogatory sexualized photoshop” to the Adult Sexual Exploitation policy to provide greater clarity for users and consolidate its policies on non-consensual content. Additionally, it suggested the term “derogatory” be replaced with “non-consensual” to better reflect the nature of the content. Similarly, it recommended updating the term “photoshop” to a more general term covering a broader range of media manipulation techniques. Furthermore, the Board proposed that Meta include a new signal for lack of consent in its Adult Sexual Exploitation Policy, specifically identifying AI-generated or manipulated content as a violation, regardless of whether it was created commercially or in a private setting.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
The Oversight Board’s decision reflects a mixed outcome for freedom of expression. On one hand, the decision contracts expression by upholding restrictions on the creation and sharing of AI-generated nude imagery on Meta’s platforms. This limits users’ ability to disseminate certain forms of synthetic or manipulated content, even in cases where the intent may not be clearly malicious. In doing so, it narrows the scope of permissible expression in the digital space, particularly in relation to AI-generated media.
However, the restriction is justified under international human rights law, as it serves to protect the rights of others—most notably the rights to privacy, dignity, and protection from psychological and reputational harm. The Board emphasized that the harms caused by non-consensual AI-generated sexualized imagery are serious, widespread, and often irreversible, particularly when such content circulates online without the subject’s consent.
Importantly, the decision advances protections for women and girls, who are disproportionately targeted by deepfake and other synthetic intimate imagery. By affirming the removal of such content and urging stronger, clearer policies based on consent rather than presumed intent, the OSB recognizes the gendered impact of this abuse and reinforces Meta’s human rights responsibilities to prevent discrimination and promote online safety.
Overall, while the decision limits certain expressions involving AI-generated content, it does so in a targeted, proportionate manner aimed at safeguarding the fundamental rights of individuals affected by intimate image abuse, striking a careful balance between freedom of expression and the protection of vulnerable groups.
Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”