Global Freedom of Expression

3 to 4 October

REGULATING THE ONLINE PUBLIC SPHERE: From Decentralized Networks to Public Regulation

  • Day 1: 9:50am-4:30pm ET; Day 2: 10:00am-4:30pm ET
  • Online Event

The stakes could not be higher in the search for a way to regulate the global public sphere, with the survival of modern democratic governance hanging in the balance. The power the big platforms wield over public discourse has distorted the marketplace of ideas to the point where there is broad international consensus that some form of state intervention is necessary. In the United States, that realization is particularly contentious, as it is at odds with US tradition and the First Amendment, which favor competition over regulation not only of the economy but also of ideas. As frustrations rise over how to address these issues, and as competing visions of regulation emerge, from the Digital Markets Act in Europe to the Access Act in the United States to the Texas Social Media Law, alternative technical solutions, rather than political ones, look increasingly worthy of consideration.

In this context, Columbia Global Freedom of Expression and Justitia present the “Regulating the Online Public Sphere: From Decentralized Networks to Public Regulation” Conference, which will take place on October 3 & 4, 2022. On the first day, speakers will discuss new models of decentralized networks and, on the second day, the different regional approaches to public and private regulation of content moderation on the Internet. The conference is an initiative of the Future of Free Speech project.

Participants can register to receive the links for the event and join virtually. The conference will be livestreamed on the GFoE YouTube channel.

[Agenda PDF Linked Here]

Rapporteurs’ Report Summarizing Panel Discussions

Day 1: October 3 Livestream

Day 2: October 4 Livestream

Agenda

Day 1 – Monday, October 3, 2022

10:00-10:10am Welcome Remarks by Columbia University
10:10-10:20am Presentation of the First Day
Jacob Mchangama, Founder and Executive Director, Justitia and Future of Free Speech Project
10:20-11:50am Session I: Mapping the Decentralized Ecosystem

Moderator: Mike Masnick, CEO, The Copia Institute
Speakers:
Daphne Keller, Director of Program on Platform Regulation, Stanford Cyber Policy Center
Golda Velez, Co-Founder, Cooperation.org (Community builder for BlueSky)

Alex Feerst, CEO, Murmuration Labs – Law, Policy, Trust & Safety
Alan Z. Rozenshtein, Associate Professor of Law, University of Minnesota Law School
11:50-12:00pm Break
12:00-1:30pm Session II: How to Get There from Here? Regulatory Requirements and Necessary Standards

Moderator:
Kate Klonick, Associate Professor of Law, St John’s University
Speakers:
Barbora Bukovská, Senior Director for Law and Policy, ARTICLE 19
Cory Doctorow, Special Advisor, Electronic Frontier Foundation
Andrew McLaughlin, Co-founder, Higher Ground Labs; Board Chair, Access Now
Zoe Darmé, Senior Manager, Search, Government Affairs and Public Policy at Google

2:30-2:35pm Greetings from UNESCO & The Way Forward in Multilateral Regulatory Policy
Guilherme Canela Godoi, Chief of the section of Freedom of Expression and Safety of Journalists, UNESCO
2:35-4:15pm Session III: Business Viability: Decentralizing Power and Opening Up Competition

Moderator:
Farzaneh Badiei, Head of Outreach and Engagement, Digital Trust and Safety Partnership
Speakers:
Michael Lwin, Managing Director & Co-Founder, Koe Koe Tech
Alison McCauley, Chief Advocacy Officer, Unfinished Labs
Dave McGibbon, Founder & CEO, Passbase

Day 2 – Tuesday, October 4, 2022

10:00-10:10am Opening Remarks & Presentation of the Second Day
Catalina Botero, Columbia Global Freedom of Expression; Co-Chair, Oversight Board of Meta
10:10-11:30am Session IV, Part 1: Regulating Online Speech: Between the First Amendment and the Digital Services Act

Moderator: Pamela San Martin, Board Member, Oversight Board of Meta
Speakers:
David Kaye, Clinical Professor of Law, University of California, Irvine
Matthias C. Kettemann, Professor of Innovation, University of Innsbruck; Research Program Leader, Leibniz-Institute for Media Research | Hans-Bredow-Institut
Agustina Del Campo, Director, Centro de Estudios en Libertad de Expresión y Acceso a la Información (CELE)
11:30-11:35am Break
11:35-12:50pm Session IV, Part 2: Regulating Online Speech: Between the First Amendment and the Digital Services Act

Moderator: Jamal Greene, Dwight Professor of Law, Columbia Law School; Co-Chair, Oversight Board of Meta
Speakers:
Suzanne Nossel, Chief Executive Officer, PEN America
Jacob Mchangama, Founder and Executive Director, Justitia and the Future of Free Speech Project
Martin Fertmann, Researcher, Leibniz-Institute for Media Research | Hans-Bredow-Institut
1:30-2:45pm Session V: International Human Rights Law as the Basic Framework of Meta’s Oversight Board Decisions
Co-hosted in partnership with the Oversight Board of Meta

Moderator: Joel Simon, Fellow, Tow Center for Digital Journalism at Columbia Journalism School
Speakers:
Joan Barata, Intermediary Liability Fellow, Stanford Cyber Policy Center
Daphne Keller, Director of Program on Platform Regulation, Stanford Cyber Policy Center
Susan Benesch, Founder and Director, Dangerous Speech Project
Kate Klonick, Associate Professor of Law, St John’s University
Monroe Price, Senior Research Fellow at the Centre for Socio-legal Studies, Oxford University
2:45-3:15pm Break
3:15-4:30pm (ONLINE ONLY, IN SPANISH) Session VI: International Human Rights Law as the Framework of Meta’s Oversight Board Decisions
Co-hosted in partnership with the Oversight Board of Meta

Moderator & Presentation: Joan Barata, Intermediary Liability Fellow at the Cyber Policy Center, Stanford University
Panelists:
Susan Benesch, Founder and Director, Dangerous Speech Project
Agustina Del Campo, Director, Centro de Estudios en Libertad de Expresión y Acceso a la Información (CELE)
Carlos Cortés Castillo, Co-Founder, Linterna Verde

*All times are Eastern Time (ET).

Day 1: The Future of Decentralized Networks and Free Speech

Decentralized network models, such as “middleware” and federated and distributed networks, have gained attention in recent years as a remedy for a variety of informational harms that also protects freedom of speech. These networks, built on open protocols that coordinate with the existing platforms, would enable diverse applications with unique interfaces in which users have greater control over their content curation, privacy and data. Blockchain technologies further offer possible responses to the viral spread of disinformation and new means of preserving content over time, among other uses. Such models would not only create competition in the marketplace but also restore control over communications to the individual, in keeping with the original vision of the internet before it was co-opted by big companies. They could significantly reduce levels of information abuse, since each user would be able to select transparent filters based on their own interests and privacy preferences, thereby also reducing the need for external regulation or censorship. Each of these alternatives offers a range of solutions; however, they also come with their own risks if they are not compliant with international human rights standards from the beginning.
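
For readers who want a concrete picture of the “middleware” idea described above, here is a minimal, purely illustrative sketch in Python. Every name in it (Post, civility_filter, provenance_labeler, UserFeed) is hypothetical; it does not implement any existing protocol such as Bluesky’s, but only illustrates the claim that users, rather than platforms, could compose their own transparent curation filters over an open feed.

```python
# Hypothetical sketch of "middleware" content curation over an open protocol.
# None of these names come from a real protocol; they only illustrate the idea
# that users, not platforms, choose transparent filters for their own feeds.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Post:
    """A record as it might travel over an open protocol: the host only relays it."""
    author: str
    text: str
    labels: List[str] = field(default_factory=list)

# A filter is a transparent, user-chosen function that labels or drops posts.
FilterFn = Callable[[Post], Optional[Post]]

def civility_filter(post: Post) -> Optional[Post]:
    """Drops posts containing terms the user has chosen to avoid."""
    blocked = {"slur1", "slur2"}  # the user's own list, not the platform's
    return None if any(term in post.text.lower() for term in blocked) else post

def provenance_labeler(post: Post) -> Optional[Post]:
    """Labels posts from accounts the user has not marked as trusted."""
    trusted = {"alice", "bob"}  # again, chosen and inspectable by the user
    if post.author not in trusted:
        post.labels.append("unverified-source")
    return post

@dataclass
class UserFeed:
    """Each user composes their own curation pipeline from filters they select."""
    filters: List[FilterFn]

    def curate(self, incoming: List[Post]) -> List[Post]:
        feed = []
        for post in incoming:
            for apply_filter in self.filters:
                post = apply_filter(post)
                if post is None:  # a filter the user picked dropped this post
                    break
            if post is not None:
                feed.append(post)
        return feed

if __name__ == "__main__":
    firehose = [
        Post("alice", "New paper on federated moderation"),
        Post("mallory", "this text contains slur1"),
        Post("carol", "Middleware marketplaces, explained"),
    ]
    my_feed = UserFeed(filters=[civility_filter, provenance_labeler])
    for post in my_feed.curate(firehose):
        print(post.author, post.labels, "-", post.text)
```

In this toy setup the curation logic lives entirely on the user’s side: the relaying infrastructure stays neutral while each reader decides what to screen out or label, which is the property the conference panels examine.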

What is clear at this moment is that “you can’t get there from here.” Hence, the first challenge is how to enable the evolution of these networks in light of today’s market concentration and regulatory environment. Examples of decentralized networks have been around for years, but they have not gained substantial market share, and one of the most promising versions, Bluesky, is still under development. Platforms will ultimately need either to be forced, through antitrust law, to open up to competition, or to willingly support these new protocol-based systems as an alternative to the intractable and costly problems of current content moderation. Other requirements for their viability include sufficient levels of interoperability across the ecosystem, as well as mandatory access, which will require regulatory intervention similar to what has taken place in the telecommunications sector. Such changes could create opportunities for new business models that are not driven by data monetization but are instead service-based, such as content curation or private datastores. For any of the above to come to fruition, and to avoid the mistakes of the past, now is the time to set rights-protective standards and policies for these new ecosystems and to proactively consider how to mitigate the associated risks.

To that end, The Future of Free Speech Project and Columbia Global Freedom of Expression seek to bring together a group of stakeholders from the technology industry, civil society and academia for a conference to foster a participatory dialogue from a global perspective. The purpose of the conference is to build a holistic understanding of the challenges at hand. Likewise, we aim to promote knowledge of digital innovation and good practices in Freedom of Expression and facilitate the exchange of experiences between key actors regarding emerging decentralized networks, existing norms and standards (hard and soft law) in the field, and the implications for Human Rights.

Session I

This panel will define the main decentralized network models, including federated and distributed/blockchain networks, and discuss the technical solutions they can, and cannot, offer for content moderation and curation. Panelists will further consider how each model addresses the problems of algorithmic transparency, disinformation and censorship, and then assess the risk factors, such as filter bubbles, extremist communities and data privacy concerns. The panel will also examine how to prioritize competing values and goals in order to identify what guarantees and safeguards are necessary to protect users’ rights in these models.

Session II

This session will explore the market and regulatory challenges to creating an enabling environment for these alternative networks. Speakers will consider necessary minimum standards and which policies could ensure that human rights protections are built into the different phases of their evolution. Specifically, they will address unbundling and mandatory access, interoperability, data portability, and privacy protection.

Session III

The moderator will speak with representatives of leading companies to learn about the market challenges they face, their strategies for navigating the current landscape, and what incentives they think would improve innovation and competition. Panelists will share their perspectives on why and how the market must change to support new players while avoiding a “race to the bottom.” They will explore what kinds of business models are emerging as alternatives to monetizing data, models that allow for profitability but also ensure transparency, data protection, and adequate, affordable curation.

Day 2: Regulating Online Speech

Alongside the emerging alternatives of decentralized moderation, such as those discussed on the first day of the conference, there are also various proposals for public regulation of freedom of expression in the digital sphere that are fundamental to discuss. The technological optimism that accompanied the emergence of social networks suggested that the absence of regulation was the best way for these networks to realize their democratizing potential. Over time, however, social networks have become not only a place through which people connect, access knowledge, discuss issues of public interest and exercise greater political control, but also a space where harms – such as hate speech, discrimination, harassment and bullying – proliferate, causing real damage to people’s lives offline and posing a challenge to democracy.

Faced with this new reality, platforms have expanded their powers by implementing new community rules online while also substantially increasing the amount of content they remove. In this context, legitimate claims have arisen that must be addressed regarding, on the one hand, the proliferation of speech that causes damage to people without sufficient remedies and, on the other hand, opaque over-moderation of content by the platforms, which can exclude from the digital conversation content that is protected by freedom of expression. It is therefore essential to discuss the existing regulatory options, including recent legislative proposals such as the Texas Social Media Law, the proposals promoted by presidents such as Jair Bolsonaro in Brazil, and the Digital Services package in Europe.

Consequently, the second day of the conference will analyze the distinct approaches to the regulation of online speech in the U.S., Europe and Latin America. The panels will focus on the advantages of the different regional regulatory approaches, on state and private speech regulation, and on regulation through procedural and transparency duties, and will situate online speech regulation within the challenges of ensuring human rights and social cohesion in digitalized communication spaces. The discussion will also cover existing proposals for regulated self-regulation and compare and contrast regional case law.

Session IV

These panels seek to analyze the distinct approaches to the regulation of online speech in the U.S., Europe and Latin America. They will focus on the advantages of the different regional regulatory approaches, on state and private speech regulation, and on regulation through procedural and transparency duties, and will situate online speech regulation within the challenges of ensuring human rights and social cohesion in digitalized communication spaces.

Session V & Session VI

The Meta Oversight Board was created to help “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up and why.” The Board can also make policy recommendations for changes in the way the company operates its community standards and practices. One of the key tasks of the Board is to assess the consistency of content decisions taken by Facebook and Instagram with their own internal (private) principles and rules. An overview of the decisions adopted so far shows that the Board has used a solid human rights-based approach, putting international legal standards at the center of its internal debates and determinations. In some cases, this scrutiny has even led the Board to criticize the Community Standards and other moderation documents underlying the final decision and to recommend their repeal or reform.

The panels will discuss how human rights have become the basic framework of the decisions of the Board, and how legal standards originally established to protect individuals vis-à-vis limitations imposed by State authorities have been adapted to the completely different reality of privately enforcing content policies at scale. Another important matter for debate is to what extent the Board is a solution that could be adopted by different types of platforms as a tool for better handling conflicts around content moderation and improving consistency and respect for human rights. Moreover, will this instrument reshape the legal interpretation of human rights in the digital realm? Can we expect a dialogue between the Board(s)’ doctrine and the standards set by international bodies and regional courts? Could the decisions of the (so far) only existing oversight board end up setting the basic constitutional standards of content moderation across platforms? Would this, in any case, be a desirable outcome?

Dr. Farzaneh Badiei is the head of outreach and engagement at Digital Trust and Safety Partnership. She is the founder of Digital Medusa, an initiative that focuses on protecting the core values of our global digital space with sound governance. For the past decade, Farzaneh has directed and led projects about Internet and social media governance. She has also been a leader in building and engaging with diverse communities and stakeholders. She has undertaken research at Yale Law School, Georgia Institute of Technology and the Humboldt Institute for Internet and Society in Berlin.

Joan Barata works on freedom of expression, media regulation and intermediary liability issues. He teaches at various universities in different parts of the world and has published a large number of articles and books on these subjects, in both the academic and popular press. His work has taken him to most regions of the world, and he is regularly involved in projects with international organizations such as UNESCO, the Council of Europe, the Organization of American States and the Organization for Security and Cooperation in Europe, where he was the principal advisor to the Representative on Freedom of the Media. Joan Barata also has experience as a regulator, having held the position of Secretary General of the Audiovisual Council of Catalonia in Spain and having been a member of the Permanent Secretariat of the Mediterranean Network of Regulatory Authorities.

Susan Benesch founded and directs the Dangerous Speech Project (dangerousspeech.org), to study speech that can inspire violence – and to find ways to prevent this, without infringing on freedom of expression. She conducts research on methods to diminish harmful speech online, or the harm itself. Trained as a human rights lawyer at Yale, Susan has worked for NGOs including Amnesty International and Human Rights First. She is also Faculty Associate of the Berkman Klein Center for Internet & Society at Harvard University.

Catalina Botero Marino is the consulting director of Global Freedom of Expression. She is a lawyer, director of the UNESCO Chair on Freedom of Expression at the Universidad de Los Andes, co-chair of the Oversight Board of Facebook and Instagram, member of the external transparency panel of the Inter-American Development Bank, commissioner of the International Commission of Jurists and member of the Advisory Board of the International Bar Association’s Human Rights Institute. She is an adjunct professor at American University’s Human Rights Academy. She was Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights of the OAS, Dean of the Faculty of Law of the Universidad de Los Andes, and an Associate Judge of the Constitutional Court and of the Council of State in Colombia.

Barbora Bukovská has been ARTICLE 19’s Senior Director for Law and Policy since 2009. She leads the development of ARTICLE 19 policies and provides legal oversight across the organization. Barbora has extensive experience working on a range of human rights issues, including protection from discrimination, access to justice, deprivation of liberty, reproductive rights, and community development. She also initiated about 50 cases at the European Court of Human Rights on these issues. From 2006 to 2008, she was the Legal Director at the Mental Disability Advocacy Centre, an international organization working on the rights of people with disabilities in Europe and Central Asia.

Guilherme Canela Godoi is chief of the section of Freedom of Expression and Safety of Journalists at UNESCO headquarters in Paris. For eight years, he held the position of Communication and Information Regional Adviser for Latin America and the Caribbean at the UNESCO Montevideo Office. During those years, he served as Regional Coordinator of the UNESCO Initiative for the Promotion of Democracy and Freedom of Expression in judicial systems in Latin America. He has a B.A. in International Relations from the University of Brasília (UNB) and a Master’s degree in Political Science from the University of São Paulo (USP).

Carlos Cortés Castillo is a lawyer and journalist working on social media governance, communications and journalism. He co-founded Linterna Verde, an internet and society non-profit center, and hosts an online talk show. He is part of TikTok’s safety advisory council in Latin America, and between 2015 and 2017 he was Twitter’s Public Policy Manager for Spanish-speaking Latin America. He has a law degree from Los Andes University (Colombia) and completed master’s studies in Communications and Media Governance at the London School of Economics.

Zoe Darmé leads public policy for Google Search, specifically on issues related to content, privacy, AI, and trust. In this role, she is responsible for managing advocacy, external engagement, data governance, and content governance — in support of Search’s mission to organize the world’s information and make it universally accessible and useful. Previously, Zoe worked at Microsoft and Facebook, where she focused on multistakeholder approaches to trust and safety, content governance and moderation, and the overall safety landscape online. Zoe also led global outreach for the establishment of Facebook’s Oversight Board. Prior to joining the tech industry, Zoe served in various policy and program capacities with the United Nations’ Department of Peacekeeping Operations. While Director of Strategic Initiatives at John Jay College of Criminal Justice, Zoe led teams conducting research and providing technical assistance on homicides and gang violence. Previously, she served with the U.S. Department of Justice, where she focused on police and justice system reform.

Agustina Del Campo directs the Center for Studies on Freedom of Expression (CELE) at Universidad de Palermo and is vice chair of the Board of the Global Network Initiative. She has a law degree and a master’s in international legal studies with a specialization in international human rights law. Agustina teaches graduate and postgraduate courses at Universidad de Palermo, Universidad de Buenos Aires and Universidad de San Andrés in Argentina. She is the compiler of CELE’s series “Internet and Human Rights” and “Towards an Internet Free of Censorship.” She also serves on the editorial board of the Revista Chilena de Derecho y Tecnología and is a member of the board of Access Now.

Cory Doctorow is a senior advisor for the Electronic Frontier Foundation as well as a science fiction author, activist, journalist and blogger — the editor of Pluralistic and the author of young adult novels like LITTLE BROTHER and HOMELAND and novels for adults like ATTACK SURFACE and WALKAWAY, in addition to nonfiction books like HOW TO DESTROY SURVEILLANCE CAPITALISM. He is the former European director of the Electronic Frontier Foundation and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles.

Alex Feerst leads Murmuration Labs, which works with leading tech companies and web3 projects to develop online trust and safety policies, software, and operations. He was previously Head of Legal and Head of Trust and Safety at Medium, and General Counsel at Neuralink. In 2021, he co-founded the Digital Trust and Safety Partnership, the first industry-led initiative to establish best practices for ensuring and assessing online safety. He serves as a board member of the MobileCoin Foundation, an advisor to the Filecoin Foundation, and an editorial board member of the Journal of Online Trust & Safety.

Martin Fertmann is a researcher at the Leibniz-Institute for Media Research | Hans-Bredow-Institute and a doctoral candidate at the University of Hamburg’s Center for Law in Digital Transformation. His research focuses on commercial content moderation, platform regulation and their respective compatibility with international human rights standards. He has published numerous contributions within his research focus and co-chaired academic conferences on human rights in internet governance and on transparency in tech regulation. He has participated in a research sprint at the Berkman Klein Center for Internet and Society (Harvard University) and undertaken research visits to LIP6 Lab in Paris (Sorbonne University and the French National Center for Scientific Research) and the Norwegian Centre for Human Rights (University of Oslo). Before pursuing his PhD, Martin studied law in Hamburg and Beijing, with a focus on media law, while volunteering with the University of Hamburg’s Cyber Law Clinic.

Jamal Greene is the Dwight Professor of Law at Columbia Law School, where he teaches constitutional law, comparative constitutional law, the law of the political process, First Amendment, and federal courts. His scholarship focuses on the structure of legal and constitutional argument. Professor Greene is the author of numerous articles and book chapters and is a frequent media commentator on constitutional law and the Supreme Court. Prior to joining Columbia’s faculty he was an Alexander Fellow at New York University Law School. Professor Greene served as a law clerk to the Hon. Guido Calabresi on the U.S. Court of Appeals for the Second Circuit and for the Hon. John Paul Stevens on the U.S. Supreme Court. He currently serves as co-chair of the Oversight Board, an independent body set up to review content moderation decisions on Facebook and Instagram.

David Kaye is a professor of law at the University of California, Irvine, and director of its International Justice Clinic. From 2014 to 2020, he served as the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. He is also the author of Speech Police: The Global Struggle to Govern the Internet (2019), Independent Chair of the Board of the Global Network Initiative, and a Trustee of ARTICLE 19. He writes regularly for international and American law journals and media outlets. David began his legal career with the U.S. State Department’s Office of the Legal Adviser, is a member of the Council on Foreign Relations, and is a former member of the Executive Council of the American Society of International Law.

Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, and was formerly the Director of Intermediary Liability at the Stanford Center for Internet and Society (CIS). Her work focuses on platform regulation and Internet users’ rights. She has published both academically and in the popular press; testified and participated in legislative processes; and taught and lectured extensively. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2015 Daphne was Associate General Counsel for Google, where she had primary responsibility for the company’s search products. She worked on groundbreaking intermediary liability litigation and legislation around the world and counseled both overall product development and individual content takedown decisions.

Prof. Dr. Matthias C. Kettemann, LL.M. (Harvard), is Professor of Innovation, Theory and Philosophy of Law at the Department for Theory and Future of Law at the University of Innsbruck and heads research programs and groups on digital law and platform governance at the Leibniz Institute for Media Research | Hans-Bredow-Institut (Hamburg) and the Humboldt Institute for Internet and Society (Berlin).  In addition to his work at the HBI, Matthias C. Kettemann is a research group leader for “Global Constitutionalism and the Internet” and head of the research project “The Public International Law of the Internet” at the Alexander von Humboldt Institute for Internet and Society, Berlin. He is also the head of section of “International Law and the Internet” at the Max Planck Institute for Comparative Public Law and International Law and a member of the board of directors and research group leader for “Platform and Content Governance” at the Sustainable Computing Lab, Vienna University of Economics and Business.

Kate Klonick is an Associate Professor at St. John’s University Law School, a fellow at the Brookings Institution and Yale Law School’s Information Society Project. Her writing on online speech, freedom of expression, and private internet platform governance has appeared in the Harvard Law Review, Yale Law Journal, The New Yorker, the New York Times, The Atlantic, the Washington Post and numerous other publications. For the 2022-2023 academic year, she is in residence in Cambridge as a Visiting Scholar at the Rebooting Social Media Institute at Harvard’s Berkman Klein Center.

Michael Lwin has run the Koe Koe Tech Foundation in Myanmar for nearly a decade. Pre-coup, Koe Koe Tech had over 150 employees, 95% Myanmar nationals, with 55% women. In happier days, Koe Koe Tech worked on building fundamental civilian e-government, including digitalization of Supreme Court cases, health statistics, tax, water, and electricity bill payments, electronic medical records, and maternal and child health messaging. Michael is a lawyer (JD NYU, Arnold & Porter LLP) and a technologist (UPenn CS masters candidate, Agile/Scrum, React.js, Solidity, Hardhat, NestJS, C#/.NET, ELK stack). His team is building a decentralized platform for civil society to independently review Facebook, Twitter, and TikTok’s content moderation pipelines, based on Michael’s Yale Journal on Regulation article. Disgruntled by the lack of state response to the Myanmar coup, Michael became interested in web3 as a possible alternative, and also works as a senior product manager for the 0VIX protocol.

Mike Masnick is the founder and editor of the popular Techdirt blog as well as the founder of the Silicon Valley think tank the Copia Institute. In both roles, he explores the intersection of technology, innovation, policy, law, civil liberties, and economics. His writings have been cited by Congress and the EU Parliament. His pivotal 2019 paper, “Protocols, Not Platforms,” was cited by Twitter founder Jack Dorsey as the inspiration for Twitter to explore a fundamental shift in strategy. Masnick and Techdirt have also been key players in the ongoing battles over net neutrality, encryption and anti-SLAPP laws. Via the Copia Institute, Masnick has pioneered new uses of games and simulations to help better explain complex present-day issues and explore future possibilities. Masnick is also known for coining the term “the Streisand Effect” to describe how attempting to stifle speech online can serve to draw even more attention.

Alison McCauley is the Chief Advocacy Officer at Unfinished, and the founder of Unblocked Future, a strategy firm that helps emerging tech pioneers with thought leadership and go-to-market. Alison holds degrees in psychology, sociology and organizational behavior and development from Stanford University. For over five years, Alison has worked to raise awareness and understanding of the “new possible” unlocked by blockchain technology, and how that translates to a next era of the Web: Web3. Her focus on Web3 and blockchain technology builds on two decades of helping founders who work at the edge of possible to translate their work to the world and take their products to market. In 2018, Alison authored the best-selling book Unblocked, which projected how blockchains could shape organizations and culture.

Dave McGibbon is the co-founder and Chief Executive Officer of Passbase, a privacy-focused identity verification tool. Launched in 2019, Passbase has since raised over $10 million in funding and now has offices in Berlin and New York. Dave has previously spoken at Web Summit and Consumer Identity World and, together with his co-founders, was featured on the Forbes 30 Under 30 list. Prior to Passbase, Dave was an investment associate at Google X, where he helped to commercialize Alphabet’s ambitious moonshot projects.

Jacob Mchangama is the founder and executive director of Justitia, a think tank focusing on human rights, and a former Visiting Scholar at Columbia Global Freedom of Expression. He is a Senior Fellow at the Foundation for Individual Rights and Expression, a member of the Forum on Information and Democracy’s Steering Committee for the working group on network accountability regimes, a member of the Danish government’s independent Free Speech Commission, and a member of the steering committee of the World Expression Forum. Jacob has commented extensively on free speech and human rights in the media and in academic journals, including the Washington Post, the Wall Street Journal, The Economist, Foreign Affairs, Foreign Policy, Human Rights Quarterly and The Journal of Free Speech Law. He is the author of several books, most recently the critically acclaimed “Free Speech: A History From Socrates to Social Media” (2022), and the writer and host of the podcast “Clear and Present Danger: A History of Free Speech.”

Andrew McLaughlin is a co-founder and partner at Higher Ground Labs, a venture fund that backs progressive social and political technology startups. He is also President and COO of Assembly OSM, a venture-backed startup building high-rise urban residential buildings in a greener, faster, higher-quality, high-tech way. In past lives, Andrew has been Deputy Chief Technology Officer of the United States under President Obama; Head of Global Public Policy at Google; Chief Policy Officer of ICANN; a partner at betaworks, a venture studio in NYC; a senior executive at Tumblr and Medium; and CEO of Digg and Instapaper. He has taught courses at Stanford Law School and Harvard Law School, been a senior academic fellow or affiliate at Princeton and Columbia, and served as the founding executive director of the Tsai Center for Innovative Thinking at Yale. He chairs the board of the global human rights organization Access Now, is on the boards of Swing Left and the Electric Coin Company, and chairs the Digital Strategy Committee of the Brooklyn Public Library.

Suzanne Nossel is Chief Executive Officer at PEN America and author of Dare to Speak: Defending Free Speech for All. Prior to joining PEN America, she served as the Chief Operating Officer of Human Rights Watch and as Executive Director of Amnesty International USA. She served in the Obama Administration as Deputy Assistant Secretary of State for International Organizations, leading US engagement in the UN and multilateral institutions on human rights issues, and in the Clinton Administration as Deputy to the US Ambassador for UN Management and Reform. Nossel coined the term “Smart Power,” which was the title of a 2004 article she published in Foreign Affairs Magazine and later became the theme of Secretary of State Hillary Clinton’s tenure in office. She is a featured columnist for Foreign Policy magazine and has published op-eds in The New York Times, Washington Post, and LA Times, as well as scholarly articles in Foreign Affairs, Dissent, and Democracy, among others. Nossel is a member of the Oversight Board, an international body that oversees content moderation on social media. She is a former senior fellow at the Century Foundation, the Center for American Progress, and the Council on Foreign Relations. Nossel is a magna cum laude graduate of both Harvard College and Harvard Law School.

Monroe Price retired from the faculty of the Annenberg School for Communication, University of Pennsylvania. He is a Senior Research Fellow at the Centre for Socio-legal Studies at Oxford University. He graduated magna cum laude from Yale, where he was executive editor of the Yale Law Journal. He clerked for Associate Justice Potter Stewart of the U.S. Supreme Court and was an assistant to Secretary of Labor W. Willard Wirtz. Professor Price was founding director of the Program in Comparative Media Law and Policy at Wolfson College, Oxford, and a Member of the School of Social Sciences at the Institute for Advanced Study in Princeton. He was deputy director of California Indian Legal Services, one of the founders of the Native American Rights Fund, and author of Law and the American Indian.

Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney’s Office for the District of Maryland.

Pamela San Martín is a lawyer from Mexico City who has dedicated her career to advancing human rights, freedom of expression and democratic institutions. Between 2014 and 2020, she served as one of 11 Electoral Councilors at Mexico’s National Electoral Institute — the highest position in the country’s electoral management body. There she worked on organizing peaceful elections, managing key issues of campaign regulation and freedom of speech and information, and developing policies to guarantee minority rights in democratic processes. She has also sat on the editorial board of one of Mexico’s most widely circulated newspapers. Prior to her time at the Electoral Institute, San Martín worked at Mexico City’s Human Rights Commission for almost a decade. She is currently a consultant on elections, democracy, and human rights, particularly in countries facing polarization and violence, and is a member of Meta’s Oversight Board.

Joel Simon is a consultant for Global Freedom of Expression. He is also a fellow at the Tow Center for Digital Journalism at Columbia Journalism School and a senior visiting fellow at Columbia’s Knight First Amendment Institute. For over 15 years, he served as the executive director of the Committee to Protect Journalists (CPJ), leading the organization through a period of expansion. Before becoming executive director, Simon served as CPJ’s Americas program coordinator and then deputy director. As a journalist in Latin America, Simon covered the Guatemalan civil war, the Zapatista uprising in southern Mexico, the debate over the North American Free Trade Agreement, and the economic turmoil in Cuba following the collapse of the Soviet Union. His latest book, The Infodemic: How Censorship and Lies Made the World Sicker and Less Free, co-authored with Robert Mahoney, was published in April 2022 by Columbia Global Reports.

Golda Velez has 30 years of software engineering experience, including recent work combatting organized fraud rings through AI models at Postmates. She is also deeply involved in human rights advocacy through direct relationships with affected individuals, as well as by creating opportunities for remote work for those impacted by world events. She has been involved with the community around the Bluesky project from Twitter and has untested proposals for how to address disinformation in a rich and heterogeneous ecosystem of permissionless trust claims (cooperation.org). She also works as a DevOps engineer for 3box.io and resides in Tucson, Arizona with her family.

This event is hosted in partnership with Justitia and the Future of Free Speech Project.

Founded in August 2014, Justitia is Denmark’s first judicial think tank. Justitia aims to promote the rule of law and fundamental human rights and freedom rights both within Denmark and abroad by educating and influencing policy experts, decision-makers, and the public. In so doing, Justitia offers legal insight and analysis on a range of contemporary issues.

 

The Future of Free Speech is a collaboration between its founding member, the Copenhagen-based judicial think tank Justitia, Columbia University’s Global Freedom of Expression and Aarhus University’s Department of Political Science. To better understand and counter the decline of free speech, “The Future of Free Speech” project seeks to answer three big questions: Why is freedom of speech in global decline? How can we better understand and conceptualize the benefits and harms of free speech? And how can we create a resilient global culture of free speech that benefits everyone?

 

On Day 2, Session V and Session VI are also organized in partnership with the Oversight Board.

The Oversight Board is a global body of experts that will review Facebook’s most difficult and significant decisions related to content on Facebook and Instagram. It will make binding decisions on that content, which means Facebook must implement them unless doing so could violate the law. The board will also be able to issue policy recommendations. The board’s independent judgement is critical to its function. Both the board and its administration are funded by an independent trust and supported by an independent company that is separate from Facebook. The board’s structure, responsibilities, purpose and relationship to Facebook are outlined in the Oversight Board Charter. The board acts as a service provider to Facebook.

 

Background Resources / Further Research

Decentralized Networks

An Introduction to the Federated Social Network, by Richard Esguerra, Electronic Frontier Foundation, March 21, 2011: the article looks at how a federated online social network works and why it is important.

Protocols, Not Platforms: A Technological Approach to Free Speech, by Mike Masnick, the Knight First Amendment Institute at Columbia University, August 21, 2019: advocating for a return to protocols and thus “the early promise of the web,” Masnick argues such changes will promote free speech and limit the impact of online abuse.

Twitter Makes A Bet On Protocols Over Platforms, by Mike Masnick, Techdirt, December 11, 2019: Masnick reflects on Twitter’s plans to experiment “with protocols as a different approach to how the company might architect its business,” emphasizing that although its impact might not be big, it has a good chance of becoming “one of the most significant directional shifts for the mainstream internet in decades.”

Social Graph as a Key to Change, by Frank H. McCourt, Jr. and Braxton Woodham: the authors introduce the “social graph” and Project Liberty initiative and explain why their approach to the concept of a decentralized social network is different.

The Decentralized Web Could Help Preserve The Internet’s Data For 1,000 Years. Here’s Why We Need IPFS To Build It, by Carson Farmer, Techdirt, May 5, 2020: Farmer argues we need the decentralized web so that we can shift away from the status of internet users having barely any control over their data in the highly centralized environment “under the control of a few dominant corporate entities on the web.”

Decentralized Social Networking Protocol (DSNP), by Project Liberty, October 2020: this white paper unpacks DSNP and addresses the issues of ownership, privacy, authenticity, portability, usability, and extensibility for the elements that create a decentralized social network.

DSNP, Delivering the Social Network as Core Internet Functionality, by Braxton Woodham, Co-Creator, DSNP: Woodham describes the Decentralized Social Networking Protocol, its key domains, the use of blockchain, and the protocol scope, as well as the work done and what lies ahead.

Ecosystem Review, by Jay Graber, January 2021: protocols, applications, and topics structure Graber’s 60-page overview of the decentralized social ecosystem.

Content Moderation Case Study: Decentralized Social Media Platform Mastodon Deals With An Influx Of Gab Users (2019), by Copia Institute, Techdirt, March 3, 2021: the case study discusses Mastodon, a more decentralized alternative to Twitter that allowed for more direct content moderation by users. The article argues Mastodon “has experienced slow, but steady, growth since its inception in 2016.”

Making the Internet Safe for Democracy, by Francis Fukuyama, Journal of Democracy, Johns Hopkins University Press, April 2021: the article surveys diverse approaches to reducing the power of large internet platforms and advocates for the technology and regulation approach through middleware companies.

Decentralized Social Media Platforms Aren’t New, They Just Aren’t Popular. But they might hold the key to address the problems of Big Tech, by Asmita Karanje, Medium, April 11, 2021: Karanje breaks down the premise of decentralized social networks and introduces existing alternatives to big tech, explaining what they bring to the table.

Middleware for Dominant Digital Platforms: a Technological Solution to a Threat to Democracy, by Francis Fukuyama, Barak Richman, Ashish Goel, Roberta R. Katz, A. Douglas Melamed, and Marietje Schaake, Cyber Policy Center, Stanford University: the authors argue middleware shifts editorial power from big technology platforms to diverse firms and their competition.

The Future of Platform Power: Making Middleware Work, by Daphne Keller, Journal of Democracy, Johns Hopkins University Press, July 2021: the article interrogates middleware services and arising problems, including technological feasibility, costs of content curation, and privacy.

Tech Monopolies and the Insufficient Necessity of Interoperability, by Cory Doctorow, Locus Magazine, July 5, 2021: in this commentary, Doctorow argues forcing interoperability into tech companies can serve as one step toward “our long overdue monopoly reckoning.” Doctorow unpacks, among other things, network effects, switching costs, and competitive compatibility.

Why Decentralization Can Help Solve the Content Moderation Problem, by Alex Feerst, The Future Rules, Forkast.News, September 22, 2021: Feerst unpacks what content moderation entails, explains “why it is a cosmically huge problem,” and argues there is a need for “nuanced tools […] to ensure the decentralized web is a place for everyone.”

A New Hope For Moderation And Its Discontents? by Alex Feerst, Greenhouse by Techdirt, October 5, 2021: through the lens of expression, Feerst describes how things were during Web 1.0, what got us to Web 2.0, and what Web 3.0 promises to bring – transparency, multiplicity of expression venues, and robust competition.

The Battle For Control Of The Metaverse: Can Open Innovation Outrun Corporate Domination? by Alison McCauley, Forbes, March 22, 2022: contrasting two versions of the metaverse – the corporate one led by Zuckerberg and an open and inclusive one that empowers its users – McCauley advocates for the latter.

A Useful, Critical Taxonomy of Decentralization, beyond Blockchains, by Cory Doctorow, Medium, May 12, 2022: Doctorow argues that “not all who decentralize are bros,” thinking of markets as “a tool, not an ethical imperative,” which does not align with the core of the web3 project. To Doctorow, “the web3 project not only values markets beyond their worth but also sees the problems of markets as the result of ‘distortion by regulators’ and wants to eliminate the publicly accountable governance” that is essential to the extraction of good results from markets.

Blockchain Technology

Blockchain and Freedom of Expression, by ARTICLE 19, 2019: the report examines the implications of blockchain technology for freedom of expression and observes four use cases – content dissemination, content authentication, data storage, and cryptocurrency transactions.

How Blockchain Can Help Combat Disinformation, by Kathryn Harrison and Amelia Leopold, Harvard Business Review, July 19, 2021: the authors unpack three key areas through which blockchain can address disinformation roots – verification, independence of content creators, and community-driven accuracy standards.

Privacy, Middleware, and Interoperability: Can Technical Solutions, Including Blockchain, Help Us Avoid Hard Tradeoffs? by Daphne Keller, The Center for Internet and Society at Stanford Law School, August 23, 2021: focusing on blockchain technologies, the article explores possible solutions that both protect privacy and enable interoperability.

Blockchain Can Help Combat the Threat of Deepfakes. Here’s How, by Evin Cheikosman, Karin Gabriel, and Nadia Hewett, World Economic Forum, October 21, 2021: starting with examples of deepfake targeting, the authors then explain hashing algorithms, cryptographic signatures, and blockchain timestamping.

Decentralizing Content Moderation

Why Decentralisation of Content Moderation Might Be the Best Way to Protect Freedom of Expression Online, by ARTICLE 19, March 30, 2020: the article reflects on the Bluesky initiative and forwards another approach – the “unbundling” form of decentralization.

Twitter’s Decentralized Future. The Platform’s Vision of a Sweeping Open Standard Could Also Be the Far-Right’s Internet Endgame, by Lucas Matney, TechCrunch, January 15, 2021: the article assesses Bluesky and its inherent risks, including the active use of decentralized platforms by the right-wing.

A Self-Authenticating Social Protocol, by The Bluesky Team, Bluesky Blog, April 6, 2022: explaining the self-authenticating protocols through three key components, the Bluesky Team argues they help achieve the objectives of portability, scale, and trust.

Moderating the Fediverse: Content Moderation on Distributed Social Media, by Alan Z. Rozenshtein, 2 Journal of Free Speech Law (forthcoming 2023), written on September 8, 2022: Rozenshtein gives an overview of the “Fediverse,” examines the matter of its content moderation through a case study, and considers how policymakers can influence it.

Regulating Online Speech

If Lawmakers Don’t Like Platforms’ Speech Rules, Here’s What They Can Do About It. Spoiler: The Options Aren’t Great, by Daphne Keller, Greenhouse by Techdirt, September 9, 2020: Keller divides the article into two parts – Changing the Rules and Changing the Rulemakers. The former examines five approaches to platforms’ legal speech rules. The latter discusses stripping platforms of decisions and grounding them in competition and user autonomy instead.

The Digital Berlin Wall – How Germany (Accidentally) Created a Prototype for Global Online Censorship – Act two, Jacob Mchangama and Natalie Alkiviadou, The Future of Free Speech, Justitia, October 1, 2020: the report, an update of the 2019 version, argues Germany’s Network Enforcement Act (NetzDG) has inspired legislation that aims to provide legitimacy for online censorship: within a year, the number of countries adopting laws similar to NetzDG almost doubled and reached 25.

Rushing to Judgment: Are Short Mandatory Takedown Limits for Online Hate Speech Compatible with The Freedom of Expression? by Jacob Mchangama, Natalie Alkiviadou, and Raghav Mendiratta, The Future of Free Speech, Justitia, January 15, 2021: the report surveys the timeframes of national legal proceedings in hate speech cases, comparing those with the time limits imposed on social media platforms by some governments. While domestic courts take 778.47 days on average, the platforms are forced to remove hate speech within hours or a week.

Who Cares about Free Speech? – Findings From a Global Survey of Free Speech, by The Future of Free Speech, June 7, 2021: the survey conducted by YouGov in February 2021 for Justitia shows that 90% of citizens in 33 countries support free speech, yet the score drops when free speech is put against controversial statements (regarding religion or minority groups) and potential trade-offs (regarding national security or economic stability).

Splintered Speech. Digital Sovereignty and the Future of the Internet, by PEN America, June 2021: the report interrogates digital sovereignty and authoritarian states’ leverage over corporations and calls for a global approach to digital regulation that prioritizes human rights and global connectivity.

A Framework of First Reference – Decoding a Human Rights Approach to Content Moderation on Social Media, by Jacob Mchangama, Natalie Alkiviadou, and Raghav Mendiratta, The Future of Free Speech, Justitia, November 12, 2021: the report unpacks relevant provisions of international human rights law in relation to hate speech and disinformation and develops “a rights-protective and transparent moderation” approach for social media platforms.

Report: The Wild West? Illegal comments on Facebook, by Jacob Mchangama and Lukas Callesen, The Future of Free Speech, Justitia, January 26, 2022: against the backdrop of legislative proposals on social media regulation, the report, written in Danish, illustrates the scope of illegal content on Danish Facebook pages. It estimates that only 0.0066% of the 63 million comments investigated constitute a crime.

Applying the First Amendment

Search Engines, Social Media, and the Editorial Analogy, by Heather Whitney, the Knight First Amendment Institute at Columbia University, February 27, 2018: the article sets out to deconstruct the “editorial analogy” and other analogical methods in First Amendment litigation involving big tech. Whitney suggests turning to normative theory instead.

The Paradox of Free Speech in the Digital World: First Amendment Friendly Proposals for Promoting User Agency, by Nadine Strossen, Washburn Law Journal, Vol. 61, 2021: Strossen explores a range of measures that could limit the “censorial power” of dominant social media platforms and promote user agency.

Censorship by Any Other Name: A Response to Nadine Strossen on Private Censorship Online, by Michael Conklin, Washburn Law Journal, Vol. 61, 2021: Conklin expands on Strossen’s article and discusses, among other things, what social media can learn from government censorship and how important it is to clarify public/private distinctions when applying the First Amendment.

Friction-in-Design Regulation as 21st Century Time, Place and Manner Restriction, by Brett M. Frischmann and Susan Benesch, August 1, 2022: the article argues the First Amendment is not “an insurmountable barrier” to speech regulation and proposes to apply “friction-in-design regulation” to the digital networked society.

Digital Services Act

The American Approach to Free Speech Is Flawed—but It’s the Best Option We Have, by Suzanne Nossel, Slate, July 28, 2020: Nossel notes that “those advocating more aggressive government policing of online hate put enormous trust in officials to draw boundaries around permissible speech.” Nossel argues empowering governments with the capacity to regulate speech leads only to the preservation of their authority, not the protection of the vulnerable.

The Right Way to Regulate Digital Harms, by David Kaye and Jason Pielemeier, Project Syndicate, December 21, 2020: Kaye and Pielemeier tackle the European approach toward content moderation and discuss the importance of “clear and comprehensive regulatory approaches, based on human-rights principles,” when addressing toxic content online.

The EU Digital Services Act: Towards a More Responsible Internet, by Andrej Savin, Copenhagen Business School, CBS LAW Research Paper No. 21-04, Journal of Internet Law, February 16, 2021: Savin reviews the EU Digital Services Act (DSA) and suggests “that the proposed act does not alter the fundamental premises upon which EU digital regulation lies but that it significantly increases procedural tools available for national and EU enforcement.”

A New Order: The Digital Services Act and Consumer Protection, by Caroline Cauffman and Catalina Goanta, European Journal of Risk Regulation, Vol. 12, Issue 4, April 15, 2021: Cauffman and Goanta examine the reform proposed “by means of the DSA” through four themes: “(1) the internal coherence of European Union law; (2) intermediary liability; (3) the outsourcing of solutions to private parties; and (4) digital enforcement.”

Europe’s Digital Services Act: On a Collision Course with Human Rights, by Cory Doctorow, Electronic Frontier Foundation, October 27, 2021: Doctorow comments on the EU Digital Services Act, which he saw as an ambitious project “to rein in the power of Big Tech.” The author, however, expressed his worry due to the “over-blocking, under-performing, monopoly-preserving copyright filters.”

The Digital Services Act: An Analysis of Its Ethical, Legal, and Social Implications, by Aina Turillazzi, Mariarosaria Taddeo, Luciano Floridi, and Federico Casolari, January 12, 2022: the scholars review the DSA “to map and evaluate its ethical, legal, and social implications.” They argue the DSA prompted varying interpretations and advocate for “a more robust framework for the benefit of all stakeholders.”

Will Banning Hate Speech Make Europe Safer? by Jacob Mchangama, The Wall Street Journal, February 4, 2022: Mchangama argues that by targeting extremists, the newly announced EU plan “ignores history’s lessons about the danger of restricting unpopular views.”

The Real Threat to Social Media Is Europe. The EU is passing legislation that will weaken free speech laws beyond the breaking point, by Jacob Mchangama, Foreign Policy Magazine, April 25, 2022: Mchangama gives an overview of the EU’s Digital Services Act (DSA), highlighting a few positive elements yet arguing the DSA fails at striking a balance between combating harm and protecting free speech.

Thoughts on the DSA: Challenges, Ideas and the Way Forward through International Human Rights Law, by Jacob Mchangama, Natalie Alkiviadou, and Raghav Mendiratta, The Future of Free Speech, Justitia, May 5, 2022: the paper evaluates the European approach to content moderation through the proposed DSA and the challenges it brings, emphasizing hate speech and disinformation. The authors make recommendations and propose a rights-based approach.

Banning Hate Speech Won’t End Extremist Violence, by Jacob Mchangama, Persuasion, June 6, 2022: Mchangama foresees the problem getting worse and argues that the best strategy in fighting extremism is “to develop trustworthy public institutions […], create a digital sphere that encourages trust and cooperation rather than outrage and polarization, and strengthen engagement in controversial and difficult conversations.”

Digital Services Act: A Short Primer, by Martin Husovec and Irene Roche Laguna, Oxford University Press (Forthcoming 2023), July 5, 2022: the scholars give an introduction to the EU’s DSA and explain the core principles on which it rests.

EU: Will the Digital Services Act Hold Big Tech to Account? by ARTICLE 19, July 5, 2022: the article argues that for the DSA to be successful, human rights protection – safeguarding free speech online in particular – must stand at its core. It also outlines the DSA’s positive aspects and shortcomings, concluding that “it may well set the global standard for regulating online content.”

Meta’s Oversight Board

How to Make Facebook’s ‘Supreme Court’ Work, by Kate Klonick and Thomas Kadri, The New York Times, Opinion, November 17, 2018: Klonick and Kadri argue that the Oversight Board is a promising idea “but only if it’s done right,” a task that falls to Mark Zuckerberg and “ultimately rests on choices that [he] has yet to make.”

The Platform Governance Triangle: Conceptualising the Informal Regulation of Online Content, by Robert Gorwa, Internet Policy Review, Vol. 8, Issue 2, June 30, 2019: Gorwa examines several informal arrangements that govern online content in Europe, mapping them onto a “governance triangle” model, and discusses “three key dynamics shaping the success of informal governance arrangements.”

The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression, by Kate Klonick, Yale Law Journal, Vol. 129, p. 2418, June 30, 2020: Klonick analyzes the creation of the Oversight Board and its implications for freedom of expression through the lens of adjudication. Klonick concludes the Facebook Oversight Board “has great potential to set a new precedent for user participation in private platforms’ governance and a user right to procedure in content moderation.”

But Facebook’s Not a Country: How to Interpret Human Rights Law for Social Media Companies, by Susan Benesch, Yale Journal on Regulation Bulletin, Vol. 38, pp. 86-111, September 14, 2020: Benesch argues that each provision of international human rights law must first be interpreted to determine whether and how it applies to social media companies, focusing on Articles 19 and 20 of the International Covenant on Civil and Political Rights.

How to Judge Facebook’s New Judges. The social media company’s search for consistent rules has been long, winding, and entirely self-defeating, by Jacob Mchangama, Foreign Policy Magazine, December 4, 2020: Mchangama puts forward principles that could effectively guide the decision-making of Facebook’s Oversight Board and strengthen free speech.

Inside the Making of Facebook’s Supreme Court, by Kate Klonick, The New Yorker, February 12, 2021: Klonick reports on the development of Facebook’s Oversight Board – from Noah Feldman’s memo to Zuckerberg and the initial workshops with experts to the board’s first rulings – along with the controversies that accompany it.

Facebook’s Oversight Board Was Supposed to Let Facebook off the Hook. It Didn’t, by Jack M. Balkin and Kate Klonick, The Washington Post, May 6, 2021: Balkin and Klonick explain how the social network may have hoped to outsource responsibility, buy legitimacy, and “offload difficult decisions about how to adjudicate speech claims to someone else.” Yet, the authors argue, the Board “won’t always be willing to play along.”

Is the Facebook Oversight Board an International Human Rights Tribunal? by Laurence Helfer and Molly K. Land, Lawfare, May 13, 2021: Helfer and Land identify the “key similarities between the Oversight Board and international human rights courts and quasi-judicial monitoring bodies,” considering how the Board can learn from them and their struggles “to build their authority and legitimacy over time.”

Facebook’s Oversight Board & the Rule of Law: The Importance of Being Earnest, by Lakshmi Gopal, American Bar Association, Business Law Today, October 12, 2021: Gopal stresses that the Oversight Board cannot be described as a court or tribunal merely because it issues “ordered opinions that reference the principles of rule of law and international human rights.” Gopal argues that, at present, the Board is “an experiment with various kinds of potential, potential that is itself yet unknown.”

Meta’s Oversight Board and Transnational Hybrid Adjudication – What Consequences for International Law, by Rishi Gulati, KFG Working Paper Series, No. 53 (2022), March 2022: Gulati considers whether the world is witnessing the birth of a special type of “transnational hybrid adjudication,” analyzing if the Oversight Board can be perceived as “a transnational adjudicative body that joins the myriad of other international dispute resolution mechanisms that exist today.”

Facebook’s Faces, by Chinmayi Arun, Harvard Law Review, March 20, 2022: Arun examines the place of the Oversight Board in Facebook’s multifaceted ecosystem and discusses “how far [the Board] can oversee Facebook.”

The Meta Oversight Board’s Human Rights Future, by Laurence R. Helfer and Molly K. Land, Duke Law School Public Law & Legal Theory Series No. 2022-47, August 22, 2022: the authors argue that comparing Meta’s Oversight Board to domestic courts leads to incorrect assessments of it. They suggest we view the Board as “a de facto human rights tribunal” in order to recognize its potential to develop human rights norms over time. 

Applying International Human Rights Law for Use by Facebook, by Michael Lwin, Yale Journal on Regulation, September 14, 2020: the article argues that international human rights law must be reinterpreted and adapted in order to apply to social media companies, and Lwin proposes a framework for doing so.

Past events

Reimagine the Internet, 5/10/2021 – 5/14/2021

Co-hosted by the Knight First Amendment Institute at Columbia University and the Initiative on Digital Public Infrastructure at the University of Massachusetts, Amherst, the virtual conference explored the future of the internet and efforts to design new online spaces that could lead to healthier outcomes.

Imagining a Better Online World: Exploring the Decentralized Web, 1/27/21 – 6/30/21

The series of workshops organized by METRO, Internet Archive, DWeb, and Library Futures examined decentralized technologies through questions of privacy, data control, and censorship resistance.

Moderating the Fediverse: Content Moderation on Distributed Social Media, 5/12/22

Alan Rozenshtein’s presentation took place at the DIMACS Workshop on Computer Science and Law: Content Moderation. Rozenshtein described the Fediverse model for social media and analyzed its strengths and weaknesses.