This chapter will argue that the federal court data paywall—PACER fees—unduly hinders the production of research on America’s federal courts. This effective limitation on public access to data leaves us with less access to better justice. Worse, there is little to no offsetting benefit. Although PACER fee revenue is often described as high, it is actually tiny as an economic matter. Congress can and should enact legislation that both mandates free public access to PACER’s vast array of information and replaces the associated fee revenue. That would allow the judiciary to continue its current operations and also allow appropriate research on federal courts.
Shakespeare education is being reimagined around the world. This book delves into the important role of collaborative projects in this extraordinary transformation. Over twenty innovative Shakespeare partnerships from the UK, US, Australia, New Zealand, the Middle East, Europe and South America are critically explored by their leaders and participants. Structured into thematic sections covering engagement with schools, universities, the public, the digital and performance, the chapters offer vivid insights into what it means to teach, learn and experience Shakespeare in collaboration with others. Diversity, equality, identity, incarceration, disability, community and culture are key factors in these initiatives, which together reveal how complex and humane Shakespeare education can be. Whether you are interested in practice or theory, this collection showcases an abundance of rich, inspiring and informative perspectives on Shakespeare education in our contemporary world.
New digital technologies, from AI-fired 'legal tech' tools to virtual proceedings, are transforming the legal system. But much of the debate surrounding legal tech has zoomed out to a nebulous future of 'robo-judges' and 'robo-lawyers.' This volume is an antidote. Zeroing in on the near- to medium-term, it provides a concrete, empirically minded synthesis of the impact of new digital technologies on litigation and access to justice. How far and fast can legal tech advance given regulatory, organizational, and technological constraints? How will new technologies affect lawyers and litigants, and how should procedural rules adapt? How can technology expand – or curtail – access to justice? And how must judicial administration change to promote healthy technological development and open courthouse doors for all? By engaging these essential questions, this volume helps to map the opportunities and the perils of a rapidly digitizing legal system – and provides grounded advice for a sensible path forward. This book is available as Open Access on Cambridge Core.
E-Prime is the leading software suite by Psychology Software Tools for designing and running psychology lab experiments. The E-Primer is the perfect accompanying guide. It provides all the necessary knowledge to make E-Prime accessible to everyone. You can learn the tools of psychological science by following the E-Primer through a series of entertaining, step-by-step recipes that recreate classic experiments. The updated E-Primer expands its proven combination of simple explanations, interesting tutorials and fun exercises, and makes even the novice student quickly confident to create their dream experiment. Featuring:
- Learn the basic and advanced features of E-Studio's flexible user interface.
- 15 step-by-step tutorials let you replicate classic experiments from all fields of psychology.
- Learn to write custom code in E-Basic without any previous programming experience.
- Second edition completely revised for E-Prime 3.
- Based on 10+ years of teaching E-Prime to undergraduates, postgraduates, and colleagues.
- Used by Psychology Software Tools to train their own staff.
This chapter explores the changes that AI brings about in corporate law and corporate governance, especially in terms of the challenges it poses for corporations. The law scholar Jan Lieder argues that whilst there is potential to enhance the current system, there are also risks of destabilisation. Although algorithms are already being used in the boardroom, lawmakers should not consider legally recognising e-persons as directors and managers. Rather, academia should evaluate the effects of AI on the corporate duties of boards and their liabilities. Critically examining three main topics, algorithms as directors, AI in the management board, and AI in the supervisory board, the author argues for transparency in a company's practices regarding AI, both to raise awareness and to enhance overall algorithm governance, and for an obligation on boards to report on their overall AI strategy and the ethical guidelines covering the responsibilities, competencies, and protective measures they have established. Additionally, the author argues that a reporting obligation should require boards to address questions of individual rights and to explain how they relate to them.
This chapter by the law scholar Antje von Ungern-Sternberg focuses on the legality of discriminatory AI which is increasingly used to assess people (profiling). Intelligent algorithms – which are free of human prejudices and stereotypes – would prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature dealing with automated profiling, some authors have suggested that we need a ‘right to reasonable inferences’, i.e. a certain methodology for AI algorithms affecting humans. von Ungern-Sternberg takes up this idea with respect to discriminatory AI and claims that such a right already exists in antidiscrimination law. She argues that the need to justify differential treatment and detrimental impact implies that profiling methods correspond to certain standards. It is now a major challenge for lawyers and data and computer scientists to develop and establish those methodological standards.
In this chapter, law and technology scholar Jonathan Zittrain warns of the danger of relying on answers for which we have no explanations. There are benefits to utilising solutions discovered through trial and error rather than rigorous proof: though aspirin was discovered in the late nineteenth century, it was not until the late twentieth century that scientists could explain how it worked. But doing so accrues 'intellectual debt'. This debt is compounding quickly in the realm of AI, especially in the subfield of machine learning. While we know that machine learning models can produce efficient, effective answers, we do not always know why the models reach the conclusions they do. This makes it difficult to detect when they are malfunctioning, being manipulated, or producing unreliable results. When several such systems interact, the ledger moves further into the red. Society's movement from basic science towards applied technology that bypasses rigorous investigative research inches us closer to a world reliant on an oracle AI, one we trust regardless of our ability to audit its trustworthiness. Zittrain concludes that we must create an intellectual debt 'balance sheet' by allowing academics to scrutinise these systems.
In this chapter, Fruzsina Molnár-Gábor and Johanne Giesecke consider specific aspects of how the application of AI-based systems in medical contexts may be guided under international standards. They sketch the relevant international frameworks for the governance of medical AI. Among the frameworks that exist, the World Medical Association’s activity appears particularly promising as a guide for standardisation processes. The organisation has already unified the application of medical expertise to a certain extent worldwide, and its guidance is anchored in the rules of various legal systems. It might provide the basis for a certain level of conformity of acceptance and implementation of new guidelines within national rules and regulations, such as those on new technology applications within the AI field. In order to develop a draft declaration, the authors then sketch out the potential applications of AI and its effects on the doctor–patient relationship in terms of information, consent, diagnosis, treatment, aftercare, and education. Finally, they spell out an assessment of how further activities of the WMA in this field might affect national rules, using the example of Germany.
In this chapter, the law scholar Christoph Krönke focuses on the legal challenges faced by healthcare AI Alter Egos, especially in the European Union. Firstly, the author outlines the functionalities of AI Alter Egos in the healthcare sector. Based on this, he explores the applicable legal framework as AI Alter Egos have two main functions: collecting a substantive database and proposing diagnoses. The author spells out that concerning the database, European data protection laws, especially the GDPR, are applicable. For healthcare AI in general, the author analyses the European Medical Devices Regulation (MDR). He argues that MDR regulates the market and ensures high standards with regard to the quality of medical devices. Altogether, the author concludes that AI Alter Egos are regulated by an appropriate legal framework in the EU, but it has to be open for developments in order to remain appropriate.
In this chapter, Mathias Paul explores the topic of AI systems in the financial industry. After outlining different areas of application of AI in the financial sector and the different regulatory regimes relevant to robo-finance, the author analyses the risks emerging from AI applications in the financial industry. He argues that AI systems applied in this sector usually do not create new risks; instead, existing risks can actually be mitigated through AI applications. The author then analyses the personal responsibility frameworks that scholars in the field of robo-finance have suggested, and shows why they are not a sufficient approach to regulation. He concludes by discussing the Draft AI Act proposed by the European Commission as a suitable regulatory approach based on the risks linked to specific AI systems and AI-based practices.
In this chapter, the ethics and international law scholar Silja Voeneky and the mathematician Thorsten Schmidt propose a new adaptive regulation scheme for AI-driven products and services. To this end, the authors examine different regulatory regimes, including the European Medical Devices Regulation (MDR) and the AI Act proposed by the European Commission, and analyse their advantages and drawbacks. They conclude that current regulatory approaches, both in general and with regard to high-risk AI-driven products and services, have structural and specific deficits. Hence, the authors suggest a new regulatory approach that avoids these shortcomings. At its core, the proposed adaptive regulation requires that private actors, such as companies developing and selling high-risk AI-driven products and services, pay a proportionate amount of money as a financial guarantee into a fund before the product or service enters the market. The authors set out what amount of regulatory capital can be considered proportionate, as well as the accompanying rules and norms needed to implement adaptive regulation.
This chapter by the philosopher Johanna Thoma focuses on the 'moral proxy problem', which arises when an autonomous artificial agent makes a decision as a proxy for a human agent, without it being clear for whom specifically it does so. Thoma recognises that, in general, there are broadly two categories of agents an artificial agent can be a proxy for: low-level agents (individual users, or the kinds of human agents artificial agents are usually replacing) and high-level agents (designers, distributors, or regulators). She argues that we do not get the same recommendations under these different agential frames: the former suggests the agents be programmed without risk neutrality, mirroring the non-neutral attitudes to risk common in human choices, whilst the latter suggests the contrary, since each choice is then considered part of an aggregate of many similar choices. The author argues that the largely unquestioned implementation of risk neutrality in the design of artificial agents deserves critical scrutiny. Such scrutiny should reveal that the treatment of risk is intimately connected with our answers to questions about agential perspective and responsibility.
In this chapter, Thomas Burri, an international lawyer, examines how general ethical norms on AI diffuse into domestic law directly, without engaging international law. The chapter discusses various ethical AI frameworks and shows how they influenced the European Union Commission’s proposal for an AI Act. It reveals the origins of the EU proposal and explains the substance of the future EU AI regulation. The chapter concludes that, overall, international law has played a marginal role in this process; it was largely sidelined.
In this chapter, the philosopher Mathias Risse reflects on the medium- and long-term prospects and challenges that AI poses for democracy. Comparing the political nature of AI systems with traffic infrastructure, the author points out AI's potential to greatly strengthen democracy, but only with the right efforts. The chapter starts with a critical, historically informed examination of the relation between democracy and technology, before outlining the techno-skepticism prevalent in several grand narratives of AI. Finally, the author explores the possibilities and challenges that AI may bring in the present digital age. He argues that technology critically bears on what forms of human life get realised or imagined, as it changes both the materiality of democracy (by altering how collective decision-making unfolds) and what its human participants are like. In conclusion, Mathias Risse argues that technologists and citizens alike need to engage with ethics and political thought generally, so that they have the spirit and dedication to build and maintain a democracy-enhancing AI infrastructure.