Human Rights and AI-Powered Content Moderation
31 Mar 2025 to 31 Oct 2025

Call for Papers 

Human Rights and AI-Powered Content Moderation

Cambridge Forum on AI: Law and Governance publishes content focused on the governance of artificial intelligence (AI) from law, rules, and regulation through to ethical behaviour, accountability and responsible practice. It also looks at the impact on society of such governance along with how AI can be used responsibly to benefit the legal, corporate and other sectors.

Following the emergence of generative AI and broader general purpose AI models, there is a pressing need to clarify the role of governance, to consider the mechanisms for oversight and regulation of AI, and to discuss the interrelationships and shifting tensions between the legal and regulatory landscape, ethical implications and evolving technologies. Cambridge Forum on AI: Law and Governance uses themed issues to bring together voices from law, business, applied ethics, computer science and many other disciplines to explore the social, ethical and legal impact of AI, data science, and robotics and the governance frameworks they require.

Cambridge Forum on AI: Law and Governance is part of the Cambridge Forum journal series, which progresses cross-disciplinary conversations on issues of global importance.

The journal invites submissions for the upcoming Themed Issue: Human Rights and AI-Powered Content Moderation, guest edited by Joan Barata and Natalie Alkiviadou.

The deadline for submissions is 31 October 2025. The deadline for final versions of accepted articles, following the review process, is 31 December 2025.

Outline: Artificial Intelligence (AI) equips social media platforms with tools to manage the vast and constantly growing flow of online content, playing a crucial role in enforcing content policies. Increasing regulatory pressure through laws such as the Digital Services Act has further entrenched platforms' reliance on automated moderation. While automated content moderation is essential for addressing crimes such as child exploitation and non-consensual explicit material involving adults, applying AI to more complex and context-dependent areas like hate speech presents significant challenges. Although these systems aim to enhance online safety, they introduce legal, ethical and technical problems that require urgent attention. In this context, the growing reliance on AI in content moderation raises critical questions about its impact on human rights and freedoms, particularly the right to freedom of expression, privacy and the prohibition of discrimination.

A key concern is freedom of expression and non-discrimination: AI-driven moderation, especially for hate speech, often lacks the ability to grasp linguistic nuance, cultural context and intent. This can lead to over-removal, censorship and the disproportionate silencing of marginalised voices. Moreover, biased datasets and insufficient training data contribute to discriminatory enforcement, disproportionately affecting minority communities and raising serious concerns about violations of free speech. AI-driven content moderation also presents privacy risks, as it requires large-scale data collection that may infringe on users' rights under laws like the General Data Protection Regulation. Additionally, as private corporations play an increasing role in governing public discourse, accountability gaps emerge in terms of transparency, safeguards, due process and redress for wrongful content removals.

In addition, social media platforms and other intermediaries increasingly offer AI systems to users to facilitate and enhance content creation and improve impact and reach. These new features may trigger new legal debates and complexities around content moderation, responsibilities and liability, as well as around how the right to freedom of expression is exercised and which subjects and entities it protects.

In light of the above, this themed issue seeks to explore the intricate relationship between AI-driven content moderation and human rights, with a particular emphasis on freedom of expression, privacy and non-discrimination. By bringing together experts in fields such as law, philosophy, technology, policy, linguistics and ethics, this themed issue will foster an informed debate on responsible AI governance in content moderation. Contributors will be expected to present original and relevant research that critically analyses existing legal frameworks and proposes solutions to mitigate AI bias, improve algorithmic accountability, and strike a balance between combating harmful content and upholding fundamental rights. Submissions will be assessed on their originality, interdisciplinary approach, analytical depth and clarity, ensuring that this issue serves as a valuable resource for policymakers, academics and practitioners alike.

Submission guidelines

Cambridge Forum on AI: Law and Governance seeks to engage multiple subject disciplines and promote dialogue between policymakers and practitioners as well as academics. The journal therefore encourages authors to use an accessible writing style.

Authors have the option to submit a range of article types to the journal. Please see the journal’s author instructions for more information.

Articles will be peer reviewed for both content and style, and will be published digitally and open access in the journal.

All submissions should be made through the journal's online peer review system. Authors should consult the journal's author instructions prior to submission.

All authors will be required to declare any funding and/or competing interests upon submission. See the journal’s Publishing Ethics guidelines for more information. 

Contacts 

Questions regarding submission and peer review can be sent to the journal’s inbox at [email protected].