Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people’s privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to tackling digital harms because they work with tools unfit for the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can’t anticipate, interfaces that manipulate them, and digital harms they can’t escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society’s most pressing challenges is addressing power discrepancies between the companies that profit from personal data and the people whose data produces profit. Doing so requires creating accountability for the consequences of corporate data practices—not the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering dynamics between individual and collective digital harms.
Chapter 2 looks at transparency and fintech tools. The premise behind many so-called fintech innovations in consumer markets is to make more personalised financial products available to an often underserved and largely inexperienced cohort. Many consumers are not good at managing their day-to-day finances, selecting optimal credit products, or investing for the future. Fintech products, and the applications associated with them, are commonly promoted on the basis that they will use consumer data, AI capacities, and a lower cost base to promote competition and better serve consumers, including financially excluded or vulnerable consumers. Paterson, Miller, and Lyons challenge these premises by demystifying the kinds of capacities that are possible through the fintech technologies being offered to consumers. The most common forms of fintech solutions offered to consumers are credit, budgeting, and investment tools. These typically do not disrupt existing service models through the use of deep learning AI. Rather, they are commonly enabled by encoding the rules of thumb used by mortgage brokers and financial advisers. They make a return through methods criticised when deployed by social media platforms, namely on-selling data, targeted advertising, and commission-based sales. There is, moreover, little incentive for fintech providers to make products that benefit marginalised cohorts for whom there is minimal relevant data and little likelihood of lucrative return. The authors argue that greater transparency is required about what is being offered to consumers through fintech tools and who benefits from them, along with greater accountability for ill-founded and even sensationalised claims.
Chapter 6 explores a different path: building privacy law on liability. Liability for material and immaterial privacy harms would improve the protection system. To achieve meaningful liability, though, laws must compensate privacy harm itself, not just the material consequences that stem from it. Compensation for financial and physical harms produced by the collection, processing, or sharing of data is important but insufficient. The proposed liability framework would address informational exploitation by making companies internalize risk. It would deter and remedy socially detrimental data practices, rather than chasing elusive individual control aims. Courts can distinguish harmful losses from benign ones by examining them on the basis of contextual and normative social values. By focusing on harm, privacy liability would overcome its current problems of causation quagmires and frivolous lawsuits.
Governments are increasingly adopting artificial intelligence (AI) tools to assist, augment, and even replace human administrators. In this chapter, Paul Miller, the NSW Ombudsman, discusses how the well-established principles of administrative law and good decision-making apply, or may be extended, to control the use of AI and other automated decision-making (ADM) tools in administrative decision-making. The chapter highlights the importance of careful design, implementation and ongoing monitoring to mitigate the risk that ADM in the public sector could be unlawful or otherwise contravene principles of good decision-making – including consideration of whether express legislative authorisation for the use of ADM technologies may be necessary or desirable.
Our privacy is besieged by tech companies. Companies can do this because our laws are built on outdated ideas that trap lawmakers, regulators, and courts into wrong assumptions about privacy, resulting in ineffective legal remedies to one of the most pressing concerns of our generation. Drawing on behavioral science, sociology, and economics, Ignacio Cofone challenges existing laws and reform proposals and dispels enduring misconceptions about data-driven interactions. This exploration offers readers a holistic view of why current laws and regulations fail to protect us against corporate digital harms, particularly those created by AI. Cofone then proposes a better response: meaningful accountability for the consequences of corporate data practices, which ultimately entails creating a new type of liability that recognizes the value of privacy.
In this ambitious collection, Zofia Bednarz and Monika Zalnieriute bring together leading experts to shed light on how artificial intelligence (AI) and automated decision-making (ADM) create new sources of profits and power for financial firms and governments. Chapter authors—who include public and private lawyers, social scientists, and public officials working on various aspects of AI and automation across jurisdictions—identify mechanisms, motivations, and actors behind technology used by Automated Banks and Automated States, and argue for new rules, frameworks, and approaches to prevent harms that result from the increasingly common deployment of AI and ADM tools. Responding to the opacity of financial firms and governments enabled by AI, Money, Power and AI advances the debate on scrutiny of power and accountability of actors who use this technology. This title is available as Open Access on Cambridge Core.
Chapter 1 summarizes the dramatic but unexpected societal and international security changes that have accompanied the introduction of the Internet. It also provides a quick introduction to the packet-based switching that underpins the Internet, as well as the World Wide Web, which transformed the Internet from a technical wonder into a very useful societal tool. It lays out the principal challenges of cybersecurity, considers malicious actors and motivations, and begins to consider the roles governments play in making cyberspace safer.
Chapter 9 takes up artificial intelligence (AI) and ethics. Beginning in Ancient Greece with the first autonomous machines, this chapter presents a brief history of AI. It then examines the excessively ambitious twentieth-century expectations for the potential of AI and the adverse consequences for research funding that resulted, now dubbed the “AI Winter.” New technologies, especially those with the elevated expectations of AI, often draw a lot of positive speculation, some of it misplaced. The chapter also reviews technologies that were explored in developing AI, such as logic, symbol manipulation, problem solving, expert systems, and machine learning based largely on artificial neural networks. It also examines “adversarial attacks,” in which very slight changes in an input can change the classification of an image. The applications of AI technologies to robots are discussed and caveats issued for their use. These include ethical issues that arise with the use of lethal autonomous weapons systems. The chapter closes with a discussion of the application of AI technologies to cybersecurity.
Chapter 3 briefly contrasts classic telephone circuit-switched communication with the more flexible packet-switched Internet. It introduces the domain name system (DNS), which is in effect the telephone directory for the Internet, and describes how domain names are translated into binary addresses. The chapter explains basic Internet communication protocols—that is, how computers “talk” to one another by sending packets of bits over the Internet—and describes the algorithms that route packets along different paths between sources and destinations. Packet routing helps to make packet communication robust in the face of network disruptions, such as might occur during a military conflict. The chapter also details a variety of encryption methods, focusing on public-key cryptography, which is widely used for secure communications that enable online shopping, banking transactions, and privacy. It also introduces digital signatures, the equivalent of a human signature, for authentication of the sender of messages. Finally, it examines threats to secure public-key cryptography.
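The public-key cryptography and digital signatures described in this abstract can be sketched with a deliberately tiny toy RSA example. The primes, exponents, and message below are illustrative textbook values, not from the chapter, and real RSA uses primes hundreds of digits long plus padding schemes; the point is only to show that encryption uses the public key while a signature is produced with the private key and checked with the public one.

```python
# Toy RSA sketch of public-key encryption and digital signatures.
# WARNING: insecure textbook-sized numbers, for illustration only.

p, q = 61, 53                # two small primes (kept secret)
n = p * q                    # modulus, part of the public key
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

def encrypt(m):              # anyone can encrypt with the public key (e, n)
    return pow(m, e, n)

def decrypt(c):              # only the private-key holder can decrypt
    return pow(c, d, n)

def sign(m):                 # signing uses the private exponent
    return pow(m, d, n)

def verify(m, s):            # anyone can verify with the public key
    return pow(s, e, n) == m

message = 65
assert decrypt(encrypt(message)) == message    # confidentiality
assert verify(message, sign(message))          # sender authentication
assert not verify(message + 1, sign(message))  # tampering is detected
```

The three-argument `pow(e, -1, phi)` form for modular inverses requires Python 3.8 or later; production systems should rely on a vetted cryptographic library rather than anything like this sketch.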
Chapter 10 draws conclusions and proposes ways to improve security in cyberspace. As the preceding chapters make clear, information and communications technology is the latest tool humans have developed that has widespread impact on economic and social development on Earth, and it will be critical as humans start to set up colonies in the solar system. Like other tools, information technology holds both the promise of a better future and the prospect of increasing misery unless digital divides narrow and people adapt to the new economic realities brought about by technological change. Engineering and scientific principles underlie the Internet, and we cannot overlook that humans write the code, develop the algorithms, and manufacture the hardware that create cyberspace with all its benefits and risks. To structure thinking about the future directions of cybersecurity, we propose three questions. First, whose Internet is it? Next, how should we think about cybersecurity? The final question is, what role should governments play in responding to significant cyber events?