Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people’s privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to tackling digital harms because they work with tools unfit for the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can’t anticipate, interfaces that manipulate them, and digital harms they can’t escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society’s most pressing challenges is addressing the power discrepancies between the companies that profit from personal data and the people whose data produces that profit. Doing so requires creating accountability for the consequences of corporate data practices, not the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering the dynamics between individual and collective digital harms.
Tech companies besiege our privacy. They can do so because our laws are built on outdated ideas that trap lawmakers, regulators, and courts in mistaken assumptions about privacy, resulting in ineffective legal remedies to one of the most pressing concerns of our generation. Drawing on behavioral science, sociology, and economics, Ignacio Cofone challenges existing laws and reform proposals and dispels enduring misconceptions about data-driven interactions. This exploration offers readers a holistic view of why current laws and regulations fail to protect us against corporate digital harms, particularly those created by AI. Cofone then proposes a better response: meaningful accountability for the consequences of corporate data practices, which ultimately entails creating a new type of liability that recognizes the value of privacy.
Chapter 2 introduces an “information economy” framework for approaching the epistemology of testimony. It is argued that, in a well-designed epistemic community, the norms governing information acquisition and information distribution will be different. This is because the dominant concern of information acquisition is quality control, whereas the dominant concern of information distribution is to provide access. The central idea, then, is to understand knowledge generation in terms of the norms governing information acquisition and to understand knowledge transmission in terms of the norms governing information distribution. The reason for adopting this approach is its explanatory power. In particular, the framework (a) explains a range of cases in the testimony literature; (b) provides a principled understanding of the transmission–generation distinction; and (c) explains the truth behind various and conflicting positions in the epistemology of testimony. Moreover, the framework nicely integrates with other plausible positions in epistemology, the philosophy of language, action theory, social science, and cognitive science.