Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting with an exploration of Rawls’ conception of justice as fairness, we position distributive approaches toward issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must be paid not only to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.
The rules of war, formally known as international humanitarian law, have been developing for centuries, reflecting society’s moral compass, the evolution of its values, and technological progress. While humanitarian law has been successful in prohibiting the use of certain methods and means of warfare, it is nevertheless destined to remain in a constant catch-up cycle with the atrocities of war. Nowadays, the widespread development and adoption of digital technologies in warfare, including AI, are leading to some of the biggest changes in human history. Is international humanitarian law up to the task of addressing the threats those technologies can present in the context of armed conflicts? This chapter provides a basic understanding of the system, principles, and internal logic of this legal domain, which is necessary to evaluate the actual or potential role of AI systems in (non-)international armed conflicts. The chapter aims to contribute to the discussion of the ex-ante regulation of AI systems used for military purposes beyond the scope of lethal autonomous weapons, as well as to recognize the potential that AI carries for improving the applicability of the basic principles of international humanitarian law, if used in an accountable and responsible way.
Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools are on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some take-aways for public administrations to consider when seeking to deploy algorithmic regulation.
Firms use algorithms for important decisions in areas from pricing strategy to product design. Increased price transparency and availability of personal data, combined with ever more sophisticated machine learning algorithms, have turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – when algorithms collude autonomously, for example – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).
The actors that are active in the financial world process vast amounts of information, ranging from customer data and account movements, through market trading data, to credit underwriting and money-laundering checks. It is one thing to collect and store these data, and quite another to interpret and make sense of them. AI helps with both, for example by checking databases or crawling the Internet in search of relevant information, by sorting it according to predefined categories, or by finding its own sorting parameters. It is hence unsurprising that AI has started to fundamentally change many aspects of finance. This chapter takes AI scoring and creditworthiness assessments as an example of how AI is employed in financial services (Section 16.2), of the ethical challenges this raises (Section 16.3), and of the legal tools that attempt to adequately balance the advantages and challenges of this technique (Section 16.4). It closes with a look at scoring beyond the credit situation (Section 16.5).
The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns this raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After describing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society.
The Cambridge Handbook of Emerging Issues at the Intersection of Commercial Law and Technology is a timely and interdisciplinary examination of the legal and societal implications of nascent technologies in the global commercial marketplace. Featuring contributions from leading international experts in the field, this volume offers fresh and diverse perspectives on a range of topics, including non-fungible tokens, blockchain technology, the Internet of Things, product liability for defective goods, smart readers, liability for artificial intelligence products and services, and privacy in the era of quantum computing. This work is an invaluable resource for academics, policymakers, and anyone seeking a deeper understanding of the social and legal challenges posed by technological innovation, as well as the role of commercial law in facilitating and regulating emerging technologies.
This informative Handbook provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI's impact on society and how it should be regulated. This title is also available as Open Access on Cambridge Core.
One of the key challenges of regulating internet platforms is international cooperation. This chapter offers some insights into platform responsibility reforms by relying on forty years of experience in regulating cross-border financial institutions. Internet platforms and cross-border banks have much in common from a regulatory perspective. They both operate in an interconnected global market that lacks a supranational regulatory framework. And they also tend to generate cross-border spillovers that are difficult to control. Harmful content and systemic risks – the two key regulatory challenges for platforms and banks, respectively – can be conceptualized as negative externalities.
One of the main lessons learned in regulating cross-border banks is that, under certain conditions, international regulatory cooperation is possible. We have witnessed that in the successful design and implementation of the Basel Accord – the global banking standard that regulates banks’ solvency and liquidity risks. In this chapter, I will analyze the conditions under which cooperation can ensue and what the history of the Basel Accord can teach platform responsibility reforms. In the last part, I will discuss what can be done when cooperation is more challenging.
The conditional legal immunity for hosting unlawful content (safe harbour) provided by Section 79 of the Information Technology Act, 2000 (IT Act) is central to the regulation of online platforms in India for two reasons. First, absent this immunity, platforms in India risk being secondarily liable for a wide range of civil and criminal offences. Second, the Indian Government has recognised that legal immunity for user-generated content is key to platform operations and has sought to regulate platform behaviour by prescribing several pre-conditions to safe harbour. This chapter examines the different obligations set out in the Intermediary Guidelines and evaluates the efforts of the Indian government to regulate platform behaviour in India through the pre-conditions for safe harbour. This chapter finds that the obligations set out in the Intermediary Guidelines are enforced in a patchwork and inconsistent manner through courts. However, the Indian Government retains powerful controls over content and platform behaviour by virtue of its power to block content under Section 69A of the IT Act and the ability to impose personal liability on platform employees within India.
This paper considers the goals of regulators in different countries working on regulating online platforms and how those varied motivations influence the potential for international coordination and cooperation on platform governance, situating them within the broader policy debates surrounding online platform responsibility. The analysis identifies policy goals related to three types of obligations that regulators may impose on online platforms: responsibilities to target particular categories of unwanted content, responsibilities for platforms that wield particularly significant influence, and responsibilities to be transparent about platform decision-making. Reviewing the proposals that have emerged in each of these categories across different countries, the paper examines which of these three policy goals present the greatest opportunities for international coordination and agreement, and which of them actually require such coordination in order to be effectively implemented. Finally, it considers what lessons can be drawn from existing policy efforts for fostering greater coordination around areas of common interest related to online platforms.
This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses U.S. efforts to impose mandatory transparency obligations on Internet “platforms.”
Like information disseminated through online platforms, infectious diseases can cross international borders as they track the movement of people (and sometimes animals and goods) and spread globally. Hence, their control and management have major implications for international relations and international law. Drawing on this analogy, this chapter looks to global health governance to formulate suggestions for the governance of online platforms. Successes in global health governance suggest that the principle of tackling low-hanging fruit first, to build trust and momentum towards more challenging goals, may extend to online platform governance. Progress beyond the low-hanging fruit appears more challenging. For one, disagreement on the issue of resource allocation in the online platform setting may lead to “outbreaks” of disinformation being relegated to regions of the world that are not at the top of online platforms’ market priority lists. For another, while there may be wide consensus on the harms of infectious disease outbreaks, the harms from the spread of disinformation are more contested, and relying on national definitions of disinformation would hardly yield coherent international cooperation. Global health governance thus suggests that an internationally negotiated agreement on standards relating to disinformation may be necessary.
To manage the diversity of regulatory visions, States may, to some extent, harmonize substantive regulation, thereby eliminating diversity. This is less likely than States determining, unilaterally or multilaterally, to develop manageable rules of jurisdiction, so that their regulation applies only in limited circumstances. The fullest realization of this “choice of law” solution would involve geoblocking or other technology that divides up regulatory authority according to a specified, and perhaps agreed, principle. Geoblocking may be costly and ultimately porous, but it would allow different communities to effectuate their different visions of the good in the platform context. To the extent that the principles of jurisdiction are agreed, and are structured to be exclusive, platforms would have the certainty of knowing the requirements under which they must operate in each market. Of course, the relevant communities may remain territorial states, but given the a-territorial nature of the internet, other divisions of authority and responsibility may develop. Cultural affinity or political perspective may prove more compelling as an organizational principle to some communities than territorial co-location.