The European Union’s Digital Services Act (DSA) introduces a new regulatory approach to address the societal harms of online platforms: systemic risk assessments. Although systemic risk assessments are a core component of the DSA, the regulation outlines the standards and processes governing them only in broad strokes. It remains unclear what these systemic risk assessments will entail in practice. This Article develops a proposal for how systemic risk assessments should be implemented. It situates systemic risk assessments as a critical step toward platform accountability: they address societal harms, whereas existing approaches, such as remedy mechanisms, only protect individual user rights. Because they engage with intangible harms and regulate speech and public discourse, however, risk assessments also entail significant challenges. Conventional reference points for content moderation regulation, such as terms and conditions, contractual freedom, fundamental rights, and expertise, do not provide practical and legitimate bases for concretizing risk assessment obligations. Public actors, such as the European Commission, should likewise refrain from defining substantive standards, as they are directly bound by freedom of expression guarantees. Instead, the Article argues, the Commission should foster a procedural framework, a “virtuous loop,” that empowers civil society and allows it to specify and refine the standards governing systemic risks over time. Developing this framework, the Article explains how systemic risk assessments can fix “multistakeholderism,” and how “multistakeholderism,” in turn, can help make systemic risk assessments work.