Content Moderation, Content Regulation / Censorship, Digital Rights, Political Expression, Integrity And Authenticity, Misinformation, Account Integrity and Authentic Identity
Oversight Board Case of Altered Video of President Biden
United States
Case Status: Closed
Decision Direction: Expands Expression
The Oversight Board upheld Meta’s decision to leave up a Facebook post featuring an altered video of then-President of the United States, Joe Biden. The video was edited to loop a scene, making it appear Biden was inappropriately touching his adult granddaughter, and was captioned to call him a “sick pedophile.” The Board found the post did not violate Meta’s Manipulated Media policy because the policy only restricts videos made with AI that show people saying words they did not say. This video, the Board noted, was not AI-generated and depicted an action, not speech. The Board also found the looping edit was obvious and unlikely to mislead an average user. However, it expressed significant concern that the policy was too narrow and incoherent and failed to address the potential harms of such content, especially in light of the many elections taking place in 2024.
*The Oversight Board is a separate entity from Meta and will provide its independent judgment on both individual cases and questions of policy. Both the Board and its administration are funded by an independent trust. The Board has the authority to decide whether Facebook and Instagram should allow or remove content. These decisions are binding, unless implementing them could violate the law. The Board can also choose to issue recommendations on the company’s content policies.
In May 2023, following the U.S. midterm elections, a Facebook user posted a seven-second video clip based on real footage from October 2022 of then-President Biden voting with his granddaughter. In the original video, Biden places an “I Voted” sticker on his granddaughter’s chest and kisses her on the cheek. The altered video loops the moment his hand touches her chest to make it appear he is touching her inappropriately. The post’s caption called then-President Biden a “sick pedophile” and described his voters as “mentally unwell.”
Although the altered video went viral and was circulated via Facebook with different captions, this specific post had fewer than 30 views and was not shared. A user reported it for violating the company’s Hate Speech policy, but Meta’s automated systems closed the report without review. A human reviewer later upheld the decision to leave the post up after the user appealed to Meta.
The same user subsequently appealed to the Oversight Board (OSB) for further review.
On 5 February 2024, the Oversight Board issued a decision on the matter. The main issue it had to decide was whether Meta’s decision to leave up a post, featuring an altered video of then-President Joe Biden touching his granddaughter in a manner that could be perceived as inappropriate, was consistent with its content policies and its human rights responsibilities.
The user who reported the content submitted that the contested post was a “blatantly manipulated video to suggest that Biden is a pedophile.”
The user who originally posted the content did not provide a statement to the Board.
For its part, Meta submitted that its decision to leave the content on the platform was correct and complied with its Community Standards. The company explained that the video did not violate its Manipulated Media policy, as the policy only applies to videos that are altered by artificial intelligence (AI) to make it appear a person said words they did not say. Since this video was not created with AI and depicted an action rather than speech, it fell outside the policy’s scope. Furthermore, Meta stated that the accompanying caption, which described then-President Biden as a “sick pedophile,” did not violate its Bullying and Harassment policy. The company justified this by stating its policy permits criminal allegations and expressions of contempt against public figures, as such speech can be part of political discourse. Meta also said that the post was not selected for review by its third-party fact-checking partners because its systems prioritize viral content, and this particular post had very low visibility.
(1) Compliance with Meta’s Content Policies
The Board agreed with Meta that the content did not violate the Manipulated Media policy. As underscored by the OSB, the policy has two strict, defining criteria: (1) it only applies to videos that have been created or altered by AI, and (2) it only prohibits media that makes a person appear to be saying words they did not say. The contested content, altered with a simple loop edit, failed both tests: It was not AI-generated, and it depicted an action, not speech. The Board also noted that Meta considers the potential to mislead an “average user” about its authenticity a key characteristic of violating manipulated media. The majority of the OSB considered that the looping effect was an “obvious alteration” that users could easily identify as edited.
The Board’s majority also held that the caption calling the President a “sick pedophile” was permitted under the Bullying and Harassment policy. Meta’s policy explicitly carves out an exception for “criminal allegations against adults, even if they contain expressions of contempt or disgust.” The majority viewed the phrase as a criminal allegation (“pedophile”) coupled with an expression of contempt (“sick”) directed at a public figure, which Meta argues can be part of political discourse.
For the OSB, it was reasonable that Meta’s third-party fact-checking program did not review this specific post due to its extremely low distribution and lack of virality, as fact-checking resources are prioritized based on a content’s potential reach.
(2) Compliance with Meta’s Human Rights Responsibilities
The Board analyzed Meta’s decision under the international human rights framework—specifically Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which protects freedom of expression, including political speech. As set out in Article 19(3) of the ICCPR, restrictions on this right must meet a three-part test: they must be provided by law, pursue a legitimate aim, and be necessary and proportionate.
The OSB reiterated the high value of political speech, noting that according to General Comment No. 34, “the value of expression is particularly high when it discusses political issues, candidates and elected representatives. This includes expression that is ‘deeply offensive,’ insults public figures and opinions that may be erroneous.” [p. 16] The Board emphasized that these forms of expression are “essential for the enjoyment of the right to take part in the conduct of public affairs and the right to vote.” [p.17]
However, the Board also acknowledged a key distinction, referencing its prior decision in the Knin Cartoon case: while States are bound by international human rights law, Meta, as a company, has different human rights responsibilities. This distinction gives Meta latitude to legitimately remove certain content under standards less strict than those that bind States.
a. Legality (Clarity and Accessibility of the Rules)
The OSB considered that Meta’s Manipulated Media policy raised legality concerns. This requirement demands that restrictions on free expression be clear and accessible, so users can understand the limits placed on their rights. The Board underscored that this policy was published in two different places (as a standalone policy and within the Misinformation Standard) with differing rationales and operative language, creating confusion. One version contained a critical typographical error, stating it would remove content that “would likely mislead an average person to believe … that the video is the product of artificial intelligence or machine learning,” which is the opposite of its intent. [p.14] It also noted, as highlighted in the Armenian Prisoners of War Video and India Sexual Harassment Video decisions, that the aforementioned policy states it requires “additional information and/or context to enforce,” a condition typically handled by specialized escalation teams and not frontline moderators—although this is not clarified to users.
b. Legitimate Aim
The OSB considered that Meta failed to clearly define the harm the policy is meant to prevent. This lack of clarity is exacerbated by the fact that the policy’s rationale is presented differently across two official sources, creating a confusing and incoherent framework for users. To the Board, the stated rationales (that manipulated media “could mislead” or “can go viral quickly”) were insufficient. The Misinformation Community Standard (Section IV) justifies the rule by citing expert advice that “false beliefs regarding manipulated media cannot be corrected through further discourse,” [p. 8] whereas the standalone Manipulated Media policy offers only the circular and unpersuasive rationale that such content could mislead. The OSB clarified that preventing people from being misled is not, in itself, a legitimate reason to restrict freedom of expression, especially in the political context where contested claims are inherent. The Board concluded that a legitimate aim would be protecting specific rights, such as the right to vote and to take part in the conduct of public affairs (Article 25, ICCPR), from undue interference, although Meta has not explicitly anchored its policy to this aim.
c. Necessity and Proportionality
Citing the Human Rights Committee’s General Comment No. 34, the OSB noted that the necessity and proportionality principles provide “that any restrictions on freedom of expression must be appropriate to achieve their protective function; [and] they must be the least intrusive instrument amongst those which might achieve their protective function.” [p.20] It further noted that “the removal of content would not meet the test of necessity if the protection could be achieved in other ways that do not restrict freedom of expression.” [p. 20]
Considering this, the Board concluded that Meta’s primary reliance on the removal of content for violations of the Manipulated Media policy was a disproportionate measure in most cases. Hence, it called for the implementation of “less restrictive means” that would better balance the mitigation of possible harm with the protection of freedom of expression. Specifically, the OSB recommended the consistent application of labels attached directly to the media itself (e.g., at the bottom of a video) to inform users that the content has been significantly altered.
The Board held that this measure should be applied automatically to all identical instances of manipulated media across platforms, “independently from the context in which it is posted, across the platform and without reliance on third-party fact-checkers.” [p.17] This approach is likely to ensure scalable and consistent enforcement, mitigating harm to electoral processes and individual reputations without resorting to censorship.
The OSB emphasized that such labeling would decrease excessive removals and promote user trust, thereby meeting the necessity requirement under human rights principles. This recommendation stands in direct contrast to Meta’s current use of third-party fact-checkers—a process the Board criticized because content demoted based on fact-checker ratings occurs without notifying users or providing them with appeal mechanisms, raising significant concerns about transparency and due process.
The OSB further highlighted that Meta’s Manipulated Media policy relies on arbitrary and illogical technical distinctions that prevent it from effectively addressing the actual harms caused by altered content. The policy’s narrow focus on AI-manipulated videos that make people appear to say words they did not say fails to account for the broader risks posed by other types of media manipulation. On this point, the Board emphasized that depicting individuals doing things they did not do can be just as misleading and damaging as falsifying speech. This distinction between action and speech lacks a coherent rationale, particularly given that advancements in editing tools have made all forms of media manipulation increasingly accessible and convincing.
Moreover, the OSB criticized the policy’s exclusion of non-AI-altered content and audio-only media. So-called “cheap fakes,” which are edited using simple and widely available tools, are currently more prevalent than AI-generated “deepfakes” and can be equally deceptive. The Board also highlighted that audio-only content poses a unique risk, as it often includes fewer inauthenticity cues that might otherwise reveal manipulation, making it potentially as, or more, misleading than videos.
Finally, the OSB urged Meta to expand the Manipulated Media policy to cover audio and audiovisual content that shows people doing things they did not do, regardless of the method of creation. However, the Board cautioned against hastily including photographs in this expanded framework, noting that past research, such as that cited in the COVID-19 misinformation advisory opinion, suggests that the effectiveness of labeling may diminish over time due to overexposure. To ensure consistent and scalable enforcement, it recommended that Meta prioritize labeling for video and audio content while conducting further research into the risks and challenges associated with manipulated photographs. This approach would allow the company to address the most pressing harms without overwhelming its enforcement systems or diluting the impact of its interventions, the Board said.
Consequently, the Oversight Board upheld Meta’s decision to leave up the post.
Policy Advisory Statement
The OSB urged Meta to reconsider the scope of its Manipulated Media policy so that it covers audio and audiovisual content, content showing people doing things they did not do, and manipulated media regardless of how it was created; to clearly define, in a single unified policy, the harms the rule seeks to prevent, such as interference with the right to vote and to take part in the conduct of public affairs; and to stop removing manipulated media that violates no other Community Standard, instead attaching a label indicating that the content has been significantly altered and may mislead.
Dissenting or Concurring Opinions
A minority of Board members dissented, arguing that the content should have been removed. They believed that a maliciously altered video presented as false evidence of a serious crime like pedophilia “falls under the spirit of the policy as it still could mislead an average user.” [p. 15] Regarding the caption, the minority argued that when paired with a deliberately falsified video, the statement “sick pedophile” transcended a mere criminal allegation and became a malicious personal attack that should violate the Bullying and Harassment policy. They concluded that such content constituted unprotected speech that directly harmed electoral integrity and justified removal.
Decision Direction indicates whether the decision expands or contracts expression based on an analysis of the case.
This decision expands expression by reinforcing the principle that restrictive content policies must be narrowly tailored and based on a clear, legitimate aim. The Board’s ruling protects political expression, even when it is offensive, by holding Meta to a high standard for justifying removals. Its reasoning aligns with international human rights standards by mandating that restrictions must be clear (legality), serve a vital objective like electoral integrity (legitimate aim), and use the least intrusive means possible (proportionality), favoring labels over censorship. The OSB also advised Meta to reconsider its Manipulated Media policy—which overlooks harmful non-AI manipulated media—by updating it coherently and sufficiently to address the flow of online disinformation. The Board’s decision seeks to foster a more robust and informed public discourse, which is essential for democratic processes.
Global Perspective demonstrates how the court’s decision was influenced by standards from one or many regions.
Case significance refers to how influential the case is and how its significance changes over time.
According to Article 2 of the Oversight Board Charter, “For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar.” In addition, Article 4 of the Oversight Board Charter establishes, “The board’s resolution of each case will be binding and Facebook (now Meta) will implement it promptly, unless implementation of a resolution could violate the law. In instances where Facebook identifies that identical content with parallel context – which the board has already decided upon – remains on Facebook (now Meta), it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well. When a decision includes policy guidance or a policy advisory opinion, Facebook (now Meta) will take further action by analyzing the operational procedures required to implement the guidance, considering it in the formal policy development process of Facebook (now Meta), and transparently communicating about actions taken as a result.”