ChatGPT, can you solve the content moderation dilemma?

Key Details

  • Region
    International
  • Themes
    Content Moderation

Can ChatGPT Solve the Content Moderation Dilemma?

By Emmanuel Vargas Penagos

Published in the International Journal of Law and Information Technology, Volume 32, Issue 1, 2024

Abstract

This article conducts a qualitative test of the potential use of large language models (LLMs) for online content moderation and identifies the human rights challenges that arise from using LLMs for that purpose. Several companies, as well as members of the technical community, have tested LLMs in this context, but such examinations have not yet been centred on human rights. Framed within EU law, particularly the EU Digital Services Act, and the European human rights framework, the article delimits the potential challenges and benefits of LLMs in content moderation. It begins by explaining the rationale for content moderation at the policy and practical levels, as well as how LLMs work. It then summarises previous technical tests of LLMs for content moderation, outlines the results of a test conducted on ChatGPT and OpenAI’s ‘GPTs’ service, and concludes with the main human rights implications identified in using LLMs for content moderation.
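For readers unfamiliar with what prompting a chat model to act as a moderator looks like in practice, the minimal sketch below shows one possible setup using the OpenAI Python client. The model name, policy wording, and output format are illustrative assumptions and do not reflect the article's actual methodology or prompts.

```python
# Minimal sketch (illustrative only, not the article's methodology):
# asking an OpenAI chat model to classify a post against a hypothetical policy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical, simplified platform policy used only for this example.
POLICY = (
    "Remove content that contains direct threats of violence "
    "against a person or group."
)

def moderate(post: str) -> str:
    """Ask the model whether a post violates the hypothetical policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for the sketch
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a content moderator. Policy: {POLICY} "
                    "Answer with 'REMOVE' or 'KEEP' followed by a one-sentence reason."
                ),
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(moderate("Example post text goes here."))
```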

Access the article here

Authors

Emmanuel Vargas Penagos

LLB (Uniandes), LLM (Amsterdam), PhD student, Örebro University
Co-founder, El Veinte