Are tech giants responsible for their actions along Global Value Chains (GVCs) down to the ‘Global South’? The responsibilities of private actors, that is, of companies operating in the algorithmic society, do not relieve nation states of their own responsibility. Stressing private responsibilities below the surface of the constitution directs attention to the body of national, European, and international rules that have been developed over recent decades and that deal, in one way or another, with the responsibility, or better, the responsibilities, of private and public actors. From the discussions on ‘double standards’ to the recent Recommendation of the European Parliament, it is not only the behavior of companies but also that of their commercial partners along the GVC that is at the heart of the debate. From the 1970s onwards, sector-specific or product-specific rules were adopted. Today, a holistic approach reflecting the globalization of supply chains is needed. Recent initiatives on the due diligence of large companies seem to go in this direction. This chapter discusses many of these puzzling aspects.
This chapter focuses on the initiatives that have shaped the international regulatory framework for financial markets and financial innovation, and on how independent agencies have contributed to establishing checks and balances (to ensure an adequate level of disclosure, accountability, and efficiency, as well as investor and consumer protection), from the perspective of the continuous regulatory and administrative competition between states to attract business and achieve economic development. Notably, we focus on the most innovative financial regulatory events of the last century, i.e. the reforms of securities markets after the 1929 crash and of the over-the-counter (OTC) markets and Too Big To Fail (TBTF) banks after 2008, to draw conclusions that can apply to crypto-finance (understood as finance based on Distributed Ledger Technology (DLT) systems such as blockchain). This chapter proposes a comprehensive regulatory framework for crypto-finance, building on the tradition of US and EU administrative law. We propose a framework based on a specialised disclosure regulation that considers the particularities of crypto-finance and the underlying technology, and on the creation of a specialised agency with expert officers and administrative judges. Such an agency should be dynamic, protecting investors from wrongdoers while interacting constantly with service providers to facilitate innovation within controlled environments and to anticipate emerging risks. This regulatory system should be decentralised, as a guarantee of independence and to promote experimentation; i.e. supra-national organisations should identify the key regulatory goals and leave room for each competent authority to innovate in how to achieve them. As there is no absolute truth in regulatory matters, countries need to regulate in ‘co-opetition’, cooperating while competing.
This introductory chapter opens the book’s part on ‘Fundamental rights and rule of law in the Algorithmic society.’ Recalling the old prophecy of Herbert Marcuse, the first and second sections outline the pitfalls that the prevailing dominance of algorithmic decision-making poses for the pillars of constitutional law. In the third section, we analyse how the fast-growing use of algorithms in the fields of justice, policing, public welfare, and beyond can result in biased and erroneous decisions, amplifying inequality, discrimination, and unfair outcomes, and undermining constitutional rights such as privacy, freedom of expression, and equality. The final section is devoted to drawing the roadmap of this part of the book, which covers chapters on ‘due process,’ ‘emotional Artificial Intelligence,’ ‘algorithmic administration,’ and ‘predictive policing.’
Technologies have always challenged, if not disrupted, the social, economic, legal, and, to an extent, ideological status quo. Such transformations impact constitutional law, as the State formulates its legal response to the new technologies being developed and applied by the market, and as it considers its own use of those technologies. The development of data collection, mining, and algorithmic analysis, resulting in predictive profiling, with or without the subsequent potential manipulation of the attitudes and behaviors of users, presents unique challenges to constitutional law at the doctrinal as well as the theoretical level.
Online human interactions are a continuous matching of data that affects both our physical and our virtual lives. How data are coupled and aggregated is the result of what algorithms constantly do through a sequence of computational steps that transform input into output. In particular, machine learning techniques are based on algorithms that identify patterns in datasets. The chapter explores how algorithmic rationality may be considered a new bureaucracy according to Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation can achieve the goal of algorithmic neutrality and objective decision-making. It argues that such rationality is represented by surveillance purposes in the broadest sense. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions.
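To make the kind of computation invoked here concrete, the following is a minimal sketch in Python (with invented event names, purely illustrative and not drawn from the chapter) of frequency-based probability estimation, i.e. the reduction of repeated actions to predicted likelihoods:

```python
from collections import Counter

def estimate_probabilities(observed_events):
    """Estimate each event's probability as its relative frequency
    in a log of repeated, observed actions."""
    counts = Counter(observed_events)
    total = sum(counts.values())
    return {event: count / total for event, count in counts.items()}

# Hypothetical log of repeated actions (illustrative data only).
log = ["visits_page_a", "visits_page_b", "visits_page_a", "visits_page_a"]
print(estimate_probabilities(log))
# {'visits_page_a': 0.75, 'visits_page_b': 0.25}
```

In this simplified view, the "surveillance" step is nothing more than counting repeated behaviour and projecting it forward, which is precisely the reduction of complexity the chapter describes.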
Humanity always moves forward: from the agricultural revolution, which substantially increased productivity with new tools and methods, to the industrial revolution, which brought an unprecedented improvement in manufacturing processes. Another step forward is the recent transition from the industrial revolution to the information revolution. The information revolution has accelerated due to growing computational power in combination with network connectivity, which allows every type of device to be connected to the Internet while collecting and processing masses of data. Interestingly, big data and the Internet of Things have provided a bridge between the newer information economy and more traditional industries.1
The justice system is infamously slow in adopting technology.1 Although recent years have seen an exponential increase in the role played by technology within the justice system,2 the legal industry has not kept pace with technical advancements to the same extent as other sectors. As put by former Australian High Court Justice Michael Kirby, a Dickensian lawyer would still feel at home in the courtrooms of the 1990s, while a Dickensian doctor would not comprehend a contemporary hospital, owing to the immense modernisation that had taken place in the meantime.3 In the COVID-19 era, however, courts and tribunals have been forced to conduct remote hearings, which imposes a degree of technological awareness and proficiency on the justice system.
Legal tech (LT) products and services automate certain tasks that lawyers usually perform. The use of these tools in business-to-consumer (B2C) markets creates many opportunities for consumers and the justice system in general, but also raises concerns in terms of access to justice, choice and information, quality, fairness, redress, and representation (Sections 11.1.1–11.1.4). This chapter deals with the question of whether the current legal framework in the EU (Section 11.2) is fit to meet the challenges LT poses in consumer markets, focusing especially on (national) legal services regulation (Section 11.3), EU consumer law (Section 11.4), and EU data protection law (Section 11.5). It concludes that applying the current legal norms to LT creates a risk of both under-regulation and over-regulation, and discusses possible regulatory options that should be considered at the national and EU levels to strike the right balance between innovation and protection (Section 11.6).
Artificial intelligence (AI) is one of many digital technologies currently under development.1 In recent years, it has had increasing repercussions in the field of law. These repercussions go beyond the traditional effects of economic and industrial evolution. Indeed, the epochal industrial transformations and paradigmatic shifts AI generates in many sectors have, from a legal perspective, a structural impact on legal rules and on legal practice. Moreover, the speed of these transformations also affects the regulatory response that a legislator is able to provide. In point of fact, rather than running the risk of new legislation rapidly becoming obsolete, regulators around the world have so far preferred to take their time to observe the changes unfolding in current technologies, and to assess their impacts from the legal point of view, before proposing any specific course of action. Although legal experts, contrary to ethicists, have traditionally shown little interest in AI, algorithms, machine learning, and so forth, it is now virtually impossible for them to ignore the impact of AI on the law, and more specifically the questions of whether existing legal rules and regulations can cope with the changes taking place in the economy and in society, on the one hand, and whether the use of AI tools in legal practice is compatible with the founding principles of our legal orders, on the other. If new rules are needed, lawyers will have to define their content and ensure that they are suitable for the long term, in a context of rapidly changing technologies.
It is difficult to think of any event since World War II that has led our entire global society to recognise as clearly as today’s global crisis that The Times They Are A‐Changin’.1 The COVID-19 pandemic and its aftermath have revealed that almost everything we once considered stable and sustainable is actually built on quite shaky ground. But the crisis has also brought out the best in our coexistence, as societies in many countries have shown that they are capable of finding creative solutions to overcome the current challenges. Digital technologies have played a crucial role in the world’s response to the COVID-19 crisis. Just think of modern methods of telecommunication such as video conferencing, which have made an immense contribution to maintaining the economy and work processes, or the various corona tracking apps, which try to help stop the spread of the virus. It can be assumed that the harmful consequences of the pandemic would have been even greater if those digital solutions had not been available. Just as almost every area of life is affected by the pandemic, so are the law itself and legal practice.
Digitalization in the legal domain is an amazing example of the way information technology (IT) can displace or enrich typically human tasks. Fueled by recent progress in artificial intelligence (AI) (big data, machine learning, natural language processing, etc.), this phenomenon of digitalization affects more and more legal tasks and functions. Effective examples of digitalization in the legal domain are very diverse, ranging from the exploration of patent classifications1 to the prediction of the outcomes of legal cases (e.g., the anticipation of foreseeable damages from an action).2 One can also mention e-discovery,3 as well as the digitalization of the organization and review of legal documents.4
Today, during the fourth industrial revolution, law firms are navigating a changing landscape. Traditional legal practice and the ways of doing business in providing legal services are under pressure to change. The pressure comes from other law firms and from increasingly self-sufficient in-house counsel, who are gradually handling more and more legal matters internally. Companies and their in-house counsel are demanding more specialized services and alternative pricing structures beyond the traditional practice of billing by the hour. This demand is one consequence of the fourth industrial revolution, which encompasses the evolution of various forms of digitization, automation, machine learning, AI, and other technologies. Like any other business, law firms are not exempt from the effects of such technologies. If anything, law firms will experience significant disruptive effects on how they do business and on the types of services they provide. This is mostly due to the impact of emerging technologies on business models, as well as on the content of clients’ demands and needs, in ways that were unthinkable twenty years ago.
By 2020 law firms will be faced with a “tipping point” for a new talent strategy. Now is the time for all law firms to commit to becoming AI-ready by embracing a growth mindset, set aside the fear of failure and begin to develop internal AI practices.1
Given that artificial intelligence (AI) and machine learning (ML) count among the key technologies of the digital age, the debate on whether and how to regulate this technology raises some of the most fundamental current questions of lawyering in the digital age.1 In fact, these issues are intensively debated and particularly controversial. In Germany, for instance, two key institutional players have taken fundamentally different views. On the one hand, the influential ‘Initiative D21’, Germany’s largest non-profit network dedicated to a digital society and comprising key actors in business, politics, civil society, science and academia, prominently rejects the introduction of any new regulations for algorithms.2 On the other hand, the Data Ethics Commission, a group of sixteen independent experts created by the Federal Government, ‘holds the view that regulation is necessary, and cannot be replaced by ethical principles’.3 These positions seem to imply that an either-or decision needs to be taken with respect to AI: either ethical principles or legal regulation. At the least, both the Initiative D21 and the report of the Data Ethics Commission are based on an understanding of ethical and legal rules as two entirely different categories, categories that neither overlap nor interfere with one another. This chapter will query that understanding and argue that ethical guidelines and principles may in fact bring about significant legal implications, despite their ethical branding. If this is true, it seems misleading to disguise rules as purely ethical principles, thereby hiding their effective relevance and impact. The relevance of such a potential hardening of soft ethical principles cannot be overstated, given the current emergence of a multitude of such guidelines on AI, at various levels and by different players.