Policing the conversation: evaluating different approaches to content moderation

Content meant to manipulate, deceive, or radicalize users is increasingly commonplace on major online platforms. Many see this as destabilizing to democracy and public discourse: misinformation erodes trust in the media and democratic institutions, while extremist content promotes hatred and violence. Governments and platforms are developing new technologies and policies to help address these issues, but their benefits must be weighed against their costs and potential for abuse. This session assesses the compatibility of public- and private-led responses to misinformation and extremist content with a free and open internet, public and democratic accountability, and core principles of human rights.
