That Violates My Policies: AI Laws, Chatbots, and The Future of Expression

Key Details

  • Region
    International
  • Themes
    Digital Rights

The Future of Free Speech released a study on generative AI and its global impact on free expression and access to information. The report reviews laws and policies across six jurisdictions, as well as the corporate practices of eight leading AI providers.

The Future of Free Speech is an independent, nonpartisan think tank based at Vanderbilt University. This study is the result of a year-long effort led by Jordi Calvet-Bademunt (Senior Research Fellow), Jacob Mchangama (Executive Director), and Isabelle Anzabi (Research Associate), all of The Future of Free Speech, in collaboration with local experts who contributed chapters on several of the jurisdictions.

Executive Summary

Generative artificial intelligence (AI) has transformed the way people access information and create content, pushing us to consider whether existing legal and policy frameworks remain fit for purpose. Less than three years after ChatGPT’s launch, hundreds of millions of users now rely on models from OpenAI and other companies for learning, entertainment, and work. Against a backdrop of political tension and public backlash, heated debates have emerged over what kinds of AI-generated content should be considered acceptable. Generative AI’s capacity both to expand and to restrict expression makes it central to the future of democratic societies.

This raises urgent questions: Do national laws and corporate practices governing AI safeguard freedom of expression, or do they constrain it? Our report — “That Violates My Policies”: AI Laws, Chatbots, and the Future of Expression — addresses this by assessing legislation and public policies in six jurisdictions (the United States, the European Union, China, India, Brazil, and the Republic of Korea) and the corporate practices of eight leading AI providers (Alibaba, Anthropic, DeepSeek, Google, Meta, Mistral AI, OpenAI, and xAI). Taken together, these public and private systems of governance define the conditions under which generative AI shapes free expression and access to information worldwide.

This report marks a step toward rethinking how AI governance shapes free expression, using international human rights law as its benchmark. Rather than accepting vague rules or opaque systems as inevitable, policymakers and developers can embrace clear standards of necessity, proportionality, and transparency. In doing so, both legislation and corporate practice can help ensure that generative AI protects pluralism and user autonomy while reinforcing the democratic foundations of free expression and access to information.

Authors

Jordi Calvet-Bademunt

Senior Research Fellow at The Future of Free Speech

Jacob Mchangama

Executive Director at The Future of Free Speech

Isabelle Anzabi

Research Associate at The Future of Free Speech