This chapter contains the first part of the book’s study of cloud computing contracts, evaluating the organization and structure of cloud computing contracts as well as their content. This includes an evaluation of Service Level Agreements (SLAs), the use of master service and framework agreements, issues related to subcontractors and subcontracting, third-party rights, and liability considerations.
The study applies a qualitative analysis based on both secondary and original data. Secondary data are derived from various research projects in the EU and elsewhere. Original study data are derived from contracts obtained by the author through Freedom of Information (FOI) requests. This study is original in its method and scope in the governmental context. Additionally, the chapter draws on government cloud audits and other guidance from the UK G-Cloud and US FedRAMP programs.
In Government Cloud Procurement, Kevin McGillivray explores the question of whether governments can adopt cloud computing services and still meet their legal requirements and other obligations to citizens. The book focuses on the interplay between the technical properties of cloud computing services and the complex legal requirements applicable to cloud adoption and use. The legal issues evaluated include data privacy law (GDPR and the US regime), jurisdictional issues, contracts, and transnational private law approaches to addressing legal requirements. McGillivray also addresses the unique position of governments when they outsource core aspects of their information and communications technology to cloud service providers. His analysis is supported by extensive research examining actual cloud contracts obtained through Freedom of Information Act requests. With the demand for cloud computing on the rise, this study fills a gap in legal literature and offers guidance to organizations considering cloud computing.
Technological progress could constitute a huge benefit for law enforcement: greater efficiency, effectiveness and speed of operations, as well as more precise risk analyses, including the discovery of unexpected correlations that could feed profiles. A number of new tools entail new scenarios for information gathering, as well as the monitoring, profiling and prediction of individual behaviours, thus allegedly facilitating crime prevention: algorithms, artificial intelligence, machine learning and data mining. Law enforcement authorities have already embraced the assumed benefits of big data. However, there is a great need for an in-depth debate about the appropriateness of using algorithms and machine-learning techniques in criminal justice, assessing how the substance of legal protection may be weakened. Given that big data, automation and artificial intelligence remain largely under-regulated, the extent to which data-driven surveillance societies could erode core criminal law principles such as reasonable suspicion and the presumption of innocence ultimately depends on the design of the surveillance infrastructures. This contribution first addresses the so-called rise of the algorithmic society and the use of automated technologies in criminal justice to assess whether and how the gathering, analysis and deployment of big data are changing law enforcement activities. It then examines the actual or potential transformation of core principles of criminal law and whether the substance of legal protection may be weakened in a ‘data-driven society’.
Algorithmic decision-making fundamentally challenges legislators and regulators to find new ways to ensure algorithmic operators and controllers comply with the law. The European Union (EU) legal order is no stranger to those challenges. One way to deal with the rise of automated and algorithmic decision-making could be the introduction of by-design obligations. This chapter analyses to what extent EU law tolerates, enables or limits the introduction of such obligations. Conceptualising the notion of by-design regulation as a specific form of EU co-regulation, the chapter subsequently identifies the challenges EU constitutional law could present for the further development of those obligations. In doing so, it hopes to frame and further structure debates on this type of regulation for the algorithmic society.
This chapter presents an overview of how governments, corporations and other actors are approaching the topic of Artificial Intelligence (AI) governance and ethics across China, Europe, India and the United States of America. Recent policy documents and other initiatives from these regions, both from public sector agencies and from private companies such as Microsoft, are documented, and a brief analysis is offered.
With their ability to select available content, algorithms are used to automatically identify or flag potentially illegal content, in particular hate speech. After the adoption of the Code of Conduct on Countering Illegal Hate Speech Online by the European Commission on 31 May 2016, the IT companies have relied heavily on algorithms that can skim the hosted content. However, such intervention cannot be completed without the collaboration of moderators in charge of verifying the doubtful content. The interplay between technological and human control, however, raises several questions. On the technological dimension, the most important issues concern the discretion of private companies in defining illegal content; the level of transparency in translating legal concepts into code; and the existence of procedural guarantees for challenging automatic decisions. On the human dimension, the most important issues concern the selection procedure for identifying the so-called ‘trusted flaggers’ able to provide the final decision on the illegal nature of the online content; the existence of an accreditation or verification process to evaluate the quality of the notices provided by such trusted flaggers; and the allocation of liability, in case of mistake, between the online intermediary and the trusted flagger.
If states begin to impose such contractual bargains for automated administrative determinations, the ‘immoveable object’ of inalienable due process rights will clash with the ‘irresistible force’ of legal automation and libertarian conceptions of contractual ‘freedom’. This chapter explains why legal values must cabin (and often trump) efforts to ‘fast track’ cases via statistical methods, machine learning (ML), or artificial intelligence. Part I explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision-maker, an explanation for that judgment, and an ability to appeal. Part II demonstrates why legal automation threatens those rights. Part III critiques potential bargains for legal automation and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be ‘better off’ by doing so.
This paper explores how algorithmic rationality may be considered a new bureaucracy according to Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation may achieve the goal of algorithmic neutrality and objective decision-making. It argues that such rationality is represented by surveillance purposes in the broadest meaning. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions. The persuasive power of algorithms aims at predicting social behaviours that are expected to repeat over time. Against this static model, the role of law and legal culture is relevant for individual emancipation and social change. The paper is divided into three parts: the first describes commonalities and differences between legal bureaucracy and algorithms; the second examines the linkage between a data-driven model of law production and algorithmic rationality; the third questions the idea of law production by data as a product of legal culture.
Every day, millions of administrative decisions take place in the public sector: building permits, land use, tax deductions, social welfare support, access to healthcare, etc. When such decisions affect the rights and duties of individual citizens and/or businesses, they must meet the requirements set out in administrative law. Among those is the requirement that the body responsible for the decision must provide an explanation of the decision to the recipient. As many administrative decisions are being considered for automation through algorithmic decision-making (ADM) systems, this raises questions about what kind of explanations they need to provide. Fearing the opaqueness of the dreaded black box of these ADM systems, countless ethical guidelines have been produced, often of a very general character. Rather than adding yet another ethical consideration to what in our view is an already overcrowded ethics-based literature, we focus on a concrete legal approach and ask: what does the legal requirement to explain a decision in public administration actually entail with regard to both human and computer-aided decision-making? We argue that, instead of pursuing a new approach to explanation, retaining the existing standard for explanation (the human standard) already enshrined in administrative law will be more meaningful and safer. In addition, we introduce what we call an ‘administrative Turing test’, which could be used to continually validate and strengthen computationally assisted decision-making, providing a benchmark against which future applications of ADM can be measured.
Technologies have always led to turning points for social development. In the past, different technologies have opened the doors towards a new phase of growth and change while influencing social values and principles. Algorithmic technologies fit within this framework. Although these technologies have positive effects on the entire society by increasing the capacity of individuals to exercise rights and freedoms, they have also led to new constitutional challenges. The opportunities of new algorithmic technologies clash with their troubling opacity and lack of accountability. We believe that constitutional law plays a critical role in addressing the challenges of the algorithmic society. New technologies have always challenged, if not disrupted, the social, economic, legal and, to an extent, the ideological status quo. Such transformations impact constitutional values, as the state formulates its legal response to the new technologies based on constitutional principles which meet market dynamics, and as it considers its own use of technologies in light of the limitations imposed by constitutional safeguards. The primary goal of this chapter is to introduce the constitutional challenges arising from the rise of the algorithmic society. The first part of this work examines the challenges for fundamental rights and democratic values with a specific focus on the right to freedom of expression, privacy and data protection. The second part looks at the role of constitutional law in relation to the regulation and policy of the algorithmic society. The third part examines the role and responsibilities of private actors, underlining the role of constitutional law in this field. The fourth part deals with the potential remedies which constitutional law can provide to face the challenges of the information society.
This chapter discusses legal-ethical challenges posed by the emergence of emotional artificial intelligence (AI) and its manipulative capabilities. The focus lies on the European legal framework and on the use of emotional AI for commercial business-to-consumer purposes, although some observations are also valid for the public sector, or in the context of political micro-targeting or fake news. On the basis of a literature review, the chapter addresses privacy and data protection concerns, as well as challenges to individual autonomy and human dignity as overarching values. It also presents a number of responses, specifically those suggesting the introduction of new (constitutional) rights to mitigate the potential negative effects of such developments, and it provides the foundation for a future research agenda in that direction.
The main quality of a smart contract lies in the automation of contractual relationships, as performance is triggered by an algorithm that is in turn triggered by the fulfilment of certain events. Most of the benefits arising from smart contracts stem from their ‘self-executing’ and ‘self-enforcing’ character, which represents a source of innovation for general contract law. Smart contracts use blockchain to ensure the transparency of the contractual relationship and to create trust in the capacity to execute the contract, which depends on the technology used. The aim of the present essay is to investigate whether and how blockchain technology platforms and smart contracts could be considered a modern form of private authority, one which at least partially escapes the application of mandatory rules and traditional enforcement mechanisms. In particular, the authors devote attention to innovative self-help mechanisms and dispute resolution systems, which can be depicted as ‘alternative’ insofar as they present themselves as independent from courts and other national state authorities.
Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. For instance, New York, Chicago, and Los Angeles use predictive policing systems built by private actors, such as PredPol, Palantir and Hunchlab. However, predictive policing is not a panacea for eradicating crime, and many concerns have been raised about its inefficiency, risk of discrimination, and lack of transparency. The necessity of protecting fundamental rights must be reiterated in the algorithmic society. To do so, adapted tools must be deployed to ensure the proper enforcement of fundamental rights. Some ethical principles need to be put in place in order to effectively protect fundamental rights and to reinforce them. I argue that while the European constitutional and ethical framework is theoretically sufficient, other tools must be adopted to guarantee the enforcement of fundamental rights and ethical principles in practice, providing a robust framework that keeps human rights in a central place. Algorithmic Impact Assessment (AIA) constitutes an interesting way to provide concrete governance of automated decision-making.
With new forms of private power in the Algorithmic Society, a bottom-up approach could complement the top-down approach taken by constitutional states. This chapter investigates to what extent principles of consumer law can serve as leverage when artificial intelligence (AI) used for mutual transactions on digital platforms leads to adverse consequences for consumers. Constitutional states are keen to advance the digital transformation. They seek a balance between promoting innovation and freedom of contract on the one hand and robust consumer protection on the other. Following an introduction of technology notions, this chapter explores whether the use of machine learning, AI and automated decision-making (ADM) in private law transactions creates a dichotomy between digital platforms and consumer protection regulation. What is left of the principles of contract law if contracts are almost completely automated and the negotiation process leaves no room for divergence? Which legal principles could serve to provide trust to private individuals in the pre-contractual and contractual phases of transactions on AI-driven digital platforms? The author discusses a toolkit for regulators to empower the weaker parties on algorithm-driven digital platforms, in the contract negotiation and governance phases. Given the extensive body of European consumer regulation, this chapter applies EU regulation.