CASCA Report: User Awareness and Experience of Content Moderation on Meta Platforms in India

Key Details

  • Region
    Asia and Asia Pacific
  • Themes
    Content Moderation, Digital Rights

The Centre for Advanced Studies in Cyber Law and Artificial Intelligence (CASCA) published the results of a recently conducted study on user awareness of and interaction with Meta’s content moderation in India.

CASCA, based at the Rajiv Gandhi National University of Law, Punjab, India, compiled the report from almost 300 responses to the survey, “Understanding User Awareness and Experience of Content Moderation on Meta Platforms in India.” Its findings point to structural concerns and problems with trust, transparency, and accessibility in Meta’s content moderation framework:

“While 93% of respondents knew about Meta’s reporting function, only 73.6% had ever used it. Even more striking, over 91% said they regularly encountered harmful content, yet just 34.8% felt their reports ever led to meaningful action. Trust gaps run deep: 64.2% doubted Meta would act on their reports, and more than half said they never even noticed updates on the status of their complaints. When it came to the Meta Oversight Board, the ‘Supreme Court’ of Meta’s content moderation, the disconnect was sharper. Over 54% of respondents had never even heard of it, and only 15% knew they could actually appeal Meta’s decisions all the way to the Board.”

Learn more here.

Authors

Tanmay Durani

Sanskriti Koirala

Amishi Jain

Uday Gupta

R. Dayasakthi