Generative artificial intelligence (AI) systems, such as large language models, image synthesis tools, and audio generation engines, present remarkable possibilities for creative expression and scientific discovery but also pose pressing challenges for privacy governance. By identifying patterns in vast troves of digital data, these systems can generate hyper-realistic yet fabricated content, surface sensitive inferences about individuals and groups, and shape public discourse at an unprecedented scale. These innovations amplify privacy concerns about nonconsensual data extraction, re-identification, inferential profiling, synthetic media manipulation, algorithmic bias, and quantification. This article argues that the current U.S. legal framework, rooted in a narrowly targeted sectoral approach and an overreliance on individual notice and consent, is fundamentally mismatched to the emergent and systemic privacy harms of generative AI. It examines how the unprecedented scale, speed, and sophistication of these systems strain core assumptions of data protection law, highlighting the misalignment between AI's societal impacts and individualistic, reactive approaches to privacy governance. The article explores the distinctive privacy challenges posed by generative AI, surveys gaps in existing U.S. regulations, and outlines key elements of a new paradigm for protecting individual and collective privacy rights, one that (1) shifts from individual to collective conceptions of privacy; (2) moves from reactive to proactive governance; and (3) reorients the goals and values of AI governance. Despite significant obstacles, the article identifies potential policy levers, technical safeguards, and conceptual tools to inform a more proactive and equitable approach to governing generative AI.