Freedom of Association and Assembly / Protests, Political Expression, Facebook Community Standards, Objectionable Content, Hate Speech
Oversight Board Case of Call for Women's Protest in Cuba
Cuba
Case Status: Closed
Decision Direction: Expands Expression
On October 3, 2023, the Oversight Board overturned Meta's decision to remove a video, posted by a Cuban news platform on Instagram, of a woman criticizing men by comparing them to animals and encouraging women to join her in street protests against the government. The post was made from a verified Instagram account that described itself as critical of the Cuban government and was viewed more than 90,000 times. It was automatically flagged by Meta's systems and sent to human review for potentially violating Meta's Hate Speech policy. The content was removed after several moderators deemed that it violated the policy. The Board found that the woman's statements in the video were qualified behavioral statements, allowed under Meta's Hate Speech policy. Under Meta's policies, qualified behavioral statements "use statistics, reference individuals, or describe direct experience." The Board further highlighted the importance of understanding external context when issuing content moderation decisions, especially in countries where freedom of expression and peaceful assembly are suppressed or met with violence. In light of Cuba's political context, the message of the content (a call to protest State repression), and the role social media platforms play in the exercise of activism, the Board considered that removing the content was the wrong decision.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
In July 2022, a Cuban news platform posted a video on its verified Instagram account of a woman calling other women to protest alongside her. In the video, the woman called Cuban men "rats" and "mares" carrying urinal pots for not standing up to the government's repression. Furthermore, "the text overlying the video connect[ed] political change to women's protests". [p. 1] The post's caption was in Spanish and included quotes from the video and hashtags referring to the Cuban "dictatorship" and "regime." Moreover, it called for international attention to the humanitarian situation through the hashtag #SOSCuba. The video gathered 90,000 plays and fewer than 1,000 shares.
The post was shared around the anniversary of nationwide protests that occurred in July 2021 in response to the deprivation of economic, social, and cultural rights. They were the largest demonstrations in Cuba's recent history. Between July 2021 and July 2022, state repression intensified in the country. A few days before the post was uploaded, the Cuban police killed a young man; the woman in the video referred to the incident by saying, "we cannot keep allowing the killing of our sons." The post was thus shared in a very tense social context.
Seven days after the video was posted, it was identified by an automated hostile speech classifier as content that potentially violated Meta's policies and was sent for human review. A moderator analyzed the content and found that it violated the company's Hate Speech policy because it dehumanized men based on their sex. The content was automatically escalated for secondary review due to the account's cross-check status. Meta's cross-check system is a review mechanism designed to apply additional scrutiny to content moderation decisions concerning high-profile users, such as celebrities, politicians, news outlets, and other influential figures.
On escalation, two moderators reviewed the content. Both considered that the post violated Meta’s Hate Speech policy. Subsequently, the content was removed on February 24, 2023—seven months after it was initially posted—and a standard strike was applied to the user’s account without any feature limit. The delay in the review process was due to a backlog in Meta’s review queues.
On the same day of the content's removal, the user appealed Meta's decision. "The content was again reviewed by a moderator who, on February 26, 2023, upheld the original decision to remove it. The content was not escalated to policy or subject matter experts for additional review at this time." [p. 6] The user then appealed to the Oversight Board (OSB).
On October 3, 2023, the Oversight Board issued a decision on the matter. The main issue before the Board was whether the removal of a video, posted by a news outlet on Instagram, featuring a woman comparing men to rats and mares who carry urinal pots, and asking women to join her in protests against the government, was consistent with Meta’s content policies, values, and human rights obligations.
In its appeal to the Board, the user urged social media companies to better understand the situation in Cuba. According to the news platform, the video referred to the July 2021 protests, and the woman in it was calling on Cuban men to solve the crisis.
On the other hand, Meta submitted that the post was removed under Tier 1 of the Hate Speech policy which prohibits content comparing one group to “animals that are culturally perceived as intellectually or physically inferior.” The company admitted it did not understand the cultural meaning of the phrase “mares loaded with chamber pots or toilets.” Nonetheless, Meta considered the phrase violated the policy because it compared men to animals carrying human feces. It explained that comparing men to rats and toilet-laden horses dehumanized men based on their sex and could exclude them from the discussion.
Meta explained that it considered applying a "spirit of the policy" allowance in this case, which allows content to remain on the platform when the strict enforcement of a policy produces results that are inconsistent with the policy's rationale or objectives. The company refrained from doing so because it deemed the content violated both the letter and the spirit of the policy. Meta further argued that it treats all groups defined by protected characteristics equally and removes attacks against any such group even if the attacked group is more culturally and socially powerful. The company referred to this as the "protected characteristic-agnostic" approach. Meta also explained that when it moderates posts, it analyzes only the content itself and disregards any wider social or political context.
Compliance with Meta’s content policies and values
1. Content Rules and Values
The Board considered that the contested content was a qualified behavioral statement that did not constitute hate speech. Under Meta's policies, qualified behavioral statements "use statistics, reference individuals, or describe direct experience." According to the OSB, comparing men to "rats" or "mares loaded with urinal pots" could be interpreted as a statement that violated the aforementioned policy if read out of context. However, the Board noted that the post, when read in whole and in context, did not aim to dehumanize men or incite violence against them. The OSB argued that the statements in the video were qualified because they called attention to the behavior of Cuban men in the country's political context. This, the Board considered, was evident from the use of the #SOSCuba hashtag. Thus, the OSB held that the post did not violate the Hate Speech Community Standard and that removing it was inconsistent with said policy.
Next, the OSB referred to the public comments and experts' submissions, according to which the words "rats" and "mares" are used in Cuba to imply cowardice. The Board also highlighted the context of the content, considering it was uploaded during a wave of strong state repression. In light of the post's context, its hashtags, other statements uttered in the video (such as "we cannot keep allowing the killing of our sons"), and the woman's call to protest, the OSB held that the content expressed an opinion about the behavior of Cuban men during an ongoing struggle in the country.
A minority of the Board questioned the "protected characteristic-agnostic" approach, especially in cases where its enforcement restricts the expression of historically marginalized groups. This minority suggested that a proportionate Hate Speech policy should acknowledge societal and cultural power dynamics to avoid stifling under-represented groups.
The Board agreed that the content fostered the value of “Voice” and that removing it was inconsistent with Meta’s values. As the OSB held, the approach adopted in this case echoed the one used in the Violence Against Women decision, where the Board concluded that the contested content should be read as a whole and categorized it as a qualified behavioral statement.
2. Enforcement action
The Board criticized the seven-month delay in this case. In its words, "the delay ultimately meant the content remained on the platform while waiting for the final stage of cross-check secondary review. The content remaining on the platform is an outcome in line with the Board's analysis of the application of the Hate Speech Community Standard. However, that outcome was not in accordance with Meta's understanding that the content was harmful." [p. 14]
Moreover, the Board expressed concerns about how contextual information was analyzed in content moderation decisions that benefitted from additional human review. Referring to the Knin Cartoon case, the OSB acknowledged how challenging it is to assess hate speech at scale while accounting for context, especially in cases of implicit or explicit dehumanizing speech of the kind that has previously led to atrocities. On this point, it reiterated that content moderation, when enforced to address the cumulative harms of hate speech, is consistent with Meta's human rights responsibilities even if some content does not directly incite violence or discrimination.
Subsequently, the OSB highlighted that Meta has established policy exceptions in order to avoid stifling public debate on relevant issues, such as political speech on historical events. This, the Board held, is why Meta's reviewers must be able to distinguish between qualified and unqualified behavioral statements, and why Meta's automated systems and reviewers must be able to factor contextual information into their decisions rather than rely solely on the post's content. It additionally noted that reviewers at an escalation level were expected to deliver better results since they have more access to context. In the case at hand, the Board held, this did not happen: even after an escalated review, Meta failed to reach the right decision.
To the OSB, Meta’s decision in this case was concerning and raised questions regarding whether contextual information was sufficiently considered during the cross-check escalation process. The Board considered that Meta should improve how context is analyzed in its workflow to prevent future similar content from being removed.
Compliance with Meta’s human rights responsibilities
Upon analyzing Meta's human rights responsibilities, the Board explained that freedom of expression, including political speech, is protected by Article 19 of the International Covenant on Civil and Political Rights (ICCPR or the Covenant). Similarly, the OSB held that the right to peaceful assembly is protected by Article 21 of the Covenant and extends to online activities. The Board noted that Meta's fulfillment of its obligations towards these rights was crucial given the extreme restrictions they face in Cuba. Furthermore, the Board underlined the heightened protection the contested content enjoyed due to its nature: a woman's call for protest to defend rights at a significant political moment. It also highlighted the ongoing popular discontent towards the Cuban government for using repressive measures against the population.
To determine whether Meta's restriction of the contested post abided by international standards on human rights, the OSB applied the three-part test established in Article 19 of the ICCPR. Under it, limitations on rights such as freedom of expression must be clear (legality), pursue a legitimate aim, and be necessary and proportionate.
1. Legality (clarity and accessibility of the rules)
As the Board explained, the principle of legality requires rules on content moderation (or any other limits to freedom of expression) to be understandable and accessible to users. Additionally, those who enforce those rules should not be given unfettered discretion to restrict freedom of expression. In practice, this means that Meta's users should have access to the rules, and content reviewers must have clear guidance on how to apply them.
Next, the OSB explained that the Hate Speech policy prohibits attacks against groups based on protected characteristics. Dehumanizing speech, for example, is considered an attack under the policy. This type of speech includes comparisons, generalizations, or unqualified behavioral statements about animals culturally perceived as inferior. To Meta, unqualified behavioral statements explicitly attribute a behavior to all, or a majority of, the members of a group defined by a protected characteristic. However, the Board said, the same policy permits qualified behavioral statements.
As Meta admitted to the Board in the Violence Against Women decision, it can be hard for reviewers to distinguish between qualified and unqualified behavioral statements if they do not consider the context. The company also mentioned that, given the complexity of determining intent, it instructs its reviewers to default to removing content when the user has not made clear whether a statement is qualified. Considering this, the Board said that the guidance provided to reviewers limits their ability to analyze content even when there are clear cues within the context itself pointing toward the conclusion that a behavioral statement is qualified. The OSB noted that in the case at hand it was clear that the content was a statement "discussing specific historical and conflict events" which reflected "the critical judgment of the woman in the video when she refers to the behavior of Cuban men in the specific context of the historic Cuban protests of 2021 and the wave of repression that followed in 2022." [p. 17] The inclusion of the #SOSCuba hashtag made this even clearer.
Considering this, the OSB argued that content reviewers should have the opportunity and resources to conduct contextual analysis when enforcing policies. It concluded that the language of the Hate Speech policy, and its internal guidance, was vague, confusing, and contradictory, which made it challenging for reviewers to reach the right conclusion.
2. Legitimate aim
For a restriction on freedom of expression to be valid, it must pursue one of the legitimate aims outlined by the ICCPR, such as safeguarding the “rights of others.” The Board, following the Knin Cartoon decision, held that the Hate Speech policy does fulfill this part of the test as it “aims to protect people from the harm caused by hate speech.” [p. 18]
3. Necessity and proportionality
Citing General Comment No. 34, the OSB mentioned that "the principle of necessity and proportionality provides that any restrictions on freedom of expression 'must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; [and] they must be proportionate to the interest to be protected.'" [p. 18] While the Board disagreed with Meta's categorization of the content as hate speech, it recognized the challenges of moderating hate speech, especially when it involves statements comparing people to animals.
To the Board, Meta's decision not to apply a policy exception could be explained by the company's failure to consider context. According to the OSB, "in applying an overly literal reading of the content, Meta overlooked important context; disregarded a relevant carve-out from its own policy; and adopted a decision that was neither necessary nor proportionate to achieve the legitimate aim of the Hate Speech policy." [p. 19]
The Board then applied the Rabat Plan of Action to analyze the contested content, focusing on the post's social and political context and on the content itself. The OSB began by noting the strong wave of repression that followed the 2021 protests, and the death of a young Cuban man at the hands of the police, as key events that catalyzed calls for protest against the government. Regarding the content, the OSB considered that the woman in the video was expressing her opinion about the behavior of Cuban men during the aforementioned events and calling on women to protest to defend the lives of "our sons."
Taking this into account, the Board concluded that the post did not incite violence against men, as it did not attribute a behavior to them or dehumanize them based on their gender; instead, it used harsh language to urge Cuban men to join the protests. Moreover, the OSB highlighted the significant negative impact that the removal had on the woman in the video, on the user who shared it, and on political debate: "Meta's decision to remove the post is likely to have had a disproportionate impact on the woman in the video who overcame many difficulties that exist in Cuba, including access to the internet and the risks of speaking out against the government. Additionally, the removal is likely to have placed an unnecessary burden on the user – the news platform – which has had to overcome barriers to disseminate information about what is happening in Cuba." [p. 19] Furthermore, the strike applied to the user's account could have "aggravated the situation and potentially resulted in the account's suspension." [p. 19]
On this point, the Board reiterated that contextual analysis was crucial for content moderation, especially when dealing with political speech. Referring to the Colombia Protests decision, the OSB said, for example, that controversial statements including homophobic slurs could be of public interest if uttered during protests. The Board also argued, citing the Iran Protest Slogan decision, that Meta’s over-removal of political expression created risks to human rights. Finally, quoting the Pro-Navalny Protests in Russia decision, the OSB highlighted the importance of external contextual analysis in the enforcement of Meta’s policies.
The Board also took notice of the persecution women have suffered in Cuba in response to the July 2021 protests, as they have been subjected to political and sexual violence.
Considering this, the OSB urged Meta to exercise more care when moderating content in regions where political speech and peaceful assembly are suppressed or met with violence. Social media platforms in Cuba, the Board argued, offer a limited but significant avenue to criticize the government and exercise activism.
In light of these arguments, the Oversight Board overturned Meta’s decision to remove the post.
Recommendations
The Board decided not to issue any new recommendations and instead reiterated Recommendations No. 1, No. 3, and No. 8 from its cross-check policy advisory opinion. Recommendation No. 1 urged Meta to safeguard expression by implementing a list-based over-enforcement prevention program, separate from any program that protects expression Meta considers a commercial priority. The company should also guarantee that content from human rights advocates and other such parties receives further review by Meta. Recommendation No. 3 urged Meta to integrate linguistic and contextual knowledge to improve its content moderation assessments. Finally, Recommendation No. 8 considered that Meta should use specialized staff, informed by local input, to create over-enforcement prevention lists.
Additionally, the Board recalled Recommendation No. 2 from the Violence Against Women decision. Under it, Meta should update its guidance on what constitutes a qualified behavioral statement, as the current guidance makes it almost impossible for reviewers to reach the correct decision.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
By emphasizing the crucial role that Meta's platforms play as avenues for activism in countries where dissident speech is often met with repression, and by acknowledging the importance of safeguarding political expression, even when conveyed through harsh language, the OSB fosters a protective online environment that favors free expression in complex political contexts. Moreover, the Board's emphasis on contextual analysis in content moderation offers a more sophisticated and nuanced approach to determining what should, and what should not, be categorized as hate speech.
Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.
The Board referred to Article 19 of the ICCPR to assess Meta's responsibilities towards human rights regarding freedom of expression.
The Board mentioned Article 21 of the ICCPR to highlight how the adequate protection of freedom of expression safeguards the right to peaceful assembly.
The Board referred to the UN Guiding Principles on Business and Human Rights to highlight Meta's human rights responsibilities as a business.
While employing the three-part test, the Board referred to General Comment No. 34 for guidance.
The Board referred to the Violence Against Women decision to discuss what can be considered a qualified behavioral statement and the challenges this poses for content moderation reviewers. Additionally, the Board recalled Recommendation No. 2 from that decision.
The Board referenced the Colombia Protests decision to underscore the importance of safeguarding political speech related to protests.
The Board mentioned the Knin Cartoon case to reiterate the challenges content moderation reviewers face when moderating content that compares a group to animals.
The Board referenced this case to mention that Meta can adopt content moderation practices that are inconsistent with States’ human rights obligations, if they are necessary and proportionate.
The Board referred to this case to argue that, in certain circumstances, moderating content that does not directly incite violence or discrimination may be consistent with Meta’s human rights responsibilities.
The Board referenced this case to highlight the importance of external context analysis in content moderation.
The Board referenced this case to highlight the importance of external context analysis in content moderation.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”