Published in Yale Journal on Regulation, September 14, 2020
In recent years, social media platforms have been beset with hate speech, misinformation, disinformation, incitement to violence, and other content that causes real-world harm. Social media companies, focused on profit maximization and user engagement, were largely asleep at the wheel during outbreaks of violence in countries such as Myanmar, Sri Lanka, New Zealand, and India, events all linked in some way to online content. When these companies began trying to reduce harmful content, they made tweaks: incremental, non-transparent, and often inconsistent changes to their moderation rules. To build a more effective and consistent system, some international lawyers have suggested that social media companies adopt international human rights law (IHRL), especially the International Covenant on Civil and Political Rights (ICCPR), as a unified source of content moderation rules. However, IHRL was written and ratified for use by states, not private companies. Moreover, IHRL emerged long before the Internet and social media were widespread. IHRL must therefore be interpreted and adapted for this new purpose. As a first step toward refining its application, this article proposes a framework for the use of IHRL by social media companies.