Being Human in the Digital World is a collection of essays by prominent scholars from various disciplines exploring the impact of digitization on culture, politics, health, work, and relationships. The volume raises important questions about the future of human existence in a world where machine readability and algorithmic prediction are increasingly prevalent and offers new conceptual frameworks and vocabularies to help readers understand and challenge emerging paradigms of what it means to be human. Being Human in the Digital World is an invaluable resource for readers interested in the cultural, economic, political, philosophical, and social conditions that are necessary for a good digital life. This title is also available as Open Access on Cambridge Core.
To make sense of data and use it effectively, it is essential to know where it comes from and how it has been processed and used. This is the domain of paradata, an emerging interdisciplinary field with wide applications. As digital data rapidly accumulates in repositories worldwide, this comprehensive introductory book, the first of its kind, shows how to make that data accessible and reusable. In addition to covering basic concepts of paradata, the book supports practice with coverage of methods for generating, documenting, identifying and managing paradata, including formal metadata, narrative descriptions and qualitative and quantitative backtracking. The book also develops a unifying reference model to help readers contextualise the role of paradata within a wider system of knowledge, practices and processes, and provides a vision for the future of the field. This guide to general principles and practice is ideal for researchers, students and data managers.
This handbook offers an important exploration of generative AI and its legal and regulatory implications from interdisciplinary perspectives. The volume is divided into four parts. Part I provides the necessary context and background to understand the topic, including its technical underpinnings and societal impacts. Part II probes the emerging regulatory and policy frameworks related to generative AI and AI more broadly across different jurisdictions. Part III analyses generative AI's impact on specific areas of law, from non-discrimination and data protection to intellectual property, corporate governance, criminal law and more. Part IV examines the various practical applications of generative AI in the legal sector and public administration. Overall, this volume provides a comprehensive resource for those seeking to understand and navigate the substantial and growing implications of generative AI for the law.
For decades, American lawyers have enjoyed a monopoly over legal services, built upon strict unauthorized practice of law rules and prohibitions on nonlawyer ownership of law firms. Now, though, this monopoly is under threat, challenged by the one-two punch of new AI-driven technologies and a staggering access-to-justice crisis, which sees most Americans priced out of the market for legal services. At this pivotal moment, this volume brings together leading legal scholars and practitioners to propose new conceptual frameworks for reform, drawing lessons from other professions, industries, and places, both within the United States and across the world. With critical insights and thoughtful assessments, Rethinking the Lawyers' Monopoly seeks to help shape and steer the coming revolution in the legal services marketplace. This title is also available as open access on Cambridge Core.
As artificial intelligence continues to advance, it poses a threat to the very foundations of intellectual property. In AI versus IP, Robin Feldman offers a balanced perspective on the challenges we face at the intersections of AI and IP. The book examines how the advancement of AI threatens to undermine what we choose to protect with intellectual property, such as patents, trademarks, copyrights, and trade secrets, and how it derives its value. Using analogies such as the value of diamonds and the myths that support intangible rights, the book proposes potential solutions to ensure a peaceful co-existence between AI and IP. AI and IP can co-exist, Feldman argues, but only with effort and forethought.
Automated Agencies is the definitive account of how automation is transforming government explanations of the law to the public. Joshua D. Blank and Leigh Osofsky draw on extensive research regarding the federal government's turn to automated legal guidance through chatbots, virtual assistants, and other online tools. Blank and Osofsky argue that automated tools offer administrative benefits for both the government and the public in terms of efficiency and ease of use, yet these automated tools may also mislead members of the public. Government agencies often exacerbate this problem by making guidance seem more personalized than it is, not recognizing how users may rely on the guidance, and not disclosing that the guidance cannot be relied upon as a legal matter. After analyzing the potential costs and benefits of the use of automated legal guidance by government agencies, Automated Agencies charts a path forward for policymakers by offering detailed policy recommendations.
Technologists frequently promote self-tracking devices as objective tools. This book argues that such glib and often worrying assertions must be placed in the context of precarious industry dynamics. The author draws on several years of ethnographic fieldwork with developers of self-tracking applications and wearable devices in New York City's Silicon Alley and with technologists who participate in the international forum called the Quantified Self to illuminate the professional compromises that shape digital technology and the gap between the tech sector's public claims and its interior processes. By reconciling the business conventions, compromises, shifting labor practices, and growing employment insecurity that power the self-tracking market with device makers' often simplistic promotional claims, the book offers an understanding of the impact that technologists exert on digital discourse, on the tools they make, and on the data that these gadgets put out into the world.
The Cambridge Handbook of Emerging Issues at the Intersection of Commercial Law and Technology is a timely and interdisciplinary examination of the legal and societal implications of nascent technologies in the global commercial marketplace. Featuring contributions from leading international experts in the field, this volume offers fresh and diverse perspectives on a range of topics, including non-fungible tokens, blockchain technology, the Internet of Things, product liability for defective goods, smart readers, liability for artificial intelligence products and services, and privacy in the era of quantum computing. This work is an invaluable resource for academics, policymakers, and anyone seeking a deeper understanding of the social and legal challenges posed by technological innovation, as well as the role of commercial law in facilitating and regulating emerging technologies.
One of the key challenges of regulating internet platforms is international cooperation. This chapter offers some insights into platform responsibility reforms by relying on forty years of experience in regulating cross-border financial institutions. Internet platforms and cross-border banks have much in common from a regulatory perspective. They both operate in an interconnected global market that lacks a supranational regulatory framework. And they also tend to generate cross-border spillovers that are difficult to control. Harmful content and systemic risks – the two key regulatory challenges for platforms and banks, respectively – can be conceptualized as negative externalities.
One of the main lessons learned in regulating cross-border banks is that, under certain conditions, international regulatory cooperation is possible. We have witnessed this in the successful design and implementation of the Basel Accord – the global banking standard that regulates banks’ solvency and liquidity risks. In this chapter, I analyze the conditions under which cooperation can ensue and what the history of the Basel Accord can teach platform responsibility reforms. In the last part, I discuss what can be done when cooperation is more challenging.
The conditional legal immunity for hosting unlawful content (safe harbour) provided by Section 79 of the Information Technology Act, 2000 (IT Act) is central to the regulation of online platforms in India for two reasons. First, absent this immunity, platforms in India risk being secondarily liable for a wide range of civil and criminal offences. Second, the Indian Government has recognised that legal immunity for user-generated content is key to platform operations and has sought to regulate platform behaviour by prescribing several pre-conditions to safe harbour. This chapter examines the different obligations set out in the Intermediary Guidelines and evaluates the efforts of the Indian government to regulate platform behaviour in India through the pre-conditions for safe harbour. This chapter finds that the obligations set out in the Intermediary Guidelines are enforced in a patchwork and inconsistent manner through courts. However, the Indian Government retains powerful controls over content and platform behaviour by virtue of its power to block content under Section 69A of the IT Act and the ability to impose personal liability on platform employees within India.
This paper considers the goals of regulators in different countries working on regulating online platforms and how those varied motivations influence the potential for international coordination and cooperation on platform governance. The analysis identifies policy goals related to three different types of obligations that regulators may impose on online platforms: responsibilities to target particular categories of unwanted content, responsibilities for platforms that wield particularly significant influence, and responsibilities to be transparent about platform decision-making. Reviewing the proposals that have emerged in each of these categories across different countries, the paper examines which of these three policy goals present the greatest opportunities for international coordination and agreement, and which of them actually require such coordination in order to be effectively implemented. Finally, it considers what lessons can be drawn from existing policy efforts for how to foster greater coordination around areas of common interest related to online platforms.
This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses US efforts to impose mandatory transparency obligations on Internet “platforms.”
Like information disseminated through online platforms, infectious diseases can cross international borders as they track the movement of people (and sometimes animals and goods) and spread globally. Hence, their control and management have major implications for international relations, and international law. Drawing on this analogy, this chapter looks to global health governance to formulate suggestions for the governance of online platforms. Successes in global health governance suggest that the principle of tackling low-hanging fruit first to build trust and momentum towards more challenging goals may extend to online platform governance. Progress beyond the low-hanging fruit appears more challenging: For one, disagreement on the issue of resource allocation in the online platform setting may lead to “outbreaks” of disinformation being relegated to regions of the world that may not be at the top of online platforms’ market priorities lists. Secondly, while there may be wide consensus on the harms of infectious disease outbreaks, the harms from the spread of disinformation are more contested. Relying on national definitions of disinformation would hardly yield coherent international cooperation. Global health governance would thus suggest that an internationally negotiated agreement on standards as it relates to disinformation may be necessary.
In order to manage the issue of diversity of regulatory vision, States may, to some extent, harmonize substantive regulation, thereby eliminating diversity. This is less likely than States determining, unilaterally or multilaterally, to develop manageable rules of jurisdiction, so that their regulation applies only in limited circumstances. The fullest realization of this “choice of law” solution would involve geoblocking or other technology that divides up regulatory authority according to a specified, and perhaps agreed, principle. Geoblocking may be costly and ultimately porous, but it would allow different communities to effectuate their different visions of the good in the platform context. To the extent that the principles of jurisdiction are agreed, and are structured to be exclusive, platforms would have the certainty of knowing the requirements under which they must operate in each market. Of course, different communities may remain territorial states, but given the a-territorial nature of the internet, it may be possible for other divisions of authority and responsibility to develop. Cultural affinity, or political perspective, may be more compelling as an organizational principle to some than territorial co-location.
On October 27, 2022, the Digital Services Act (DSA) was published in the Official Journal of the European Union (EU). The DSA, which has been portrayed as Europe’s new “Digital Constitution”, sets out a cross-sector regulatory framework for online services and regulates the responsibility of online intermediaries for illegal content. Against this background, this chapter provides a brief overview of recent regulatory developments regarding platform responsibility in the EU. The chapter seeks to add a European perspective to the global debate about platform regulation. Section 3.1 provides an overview of the regulatory framework in the EU and recent legislative developments. Section 3.2 analyses different approaches regarding the enforcement of rules on platform responsibility. Section 3.3 takes a closer look at the regulation of content moderation by digital platforms in the EU. Finally, Section 3.4 adds some observations on the international effects of EU rules on platform responsibility.
While social media and digital platforms started with the objective of enhancing social connectivity and information sharing, they also present a significant content moderation challenge that results in the spread of disinformation. The disinformation paradox is a phenomenon in which attempts to regulate harmful content online can inadvertently amplify it. Social media platforms often serve as breeding grounds for disinformation. This chapter discusses the inherent difficulties of moderating content at large scale, the different responses of these platforms, and potential solutions.
This chapter examines China’s approach to platform responsibility for content moderation. It notes that China’s approach is rooted in its overarching goal of public opinion management, which requires platforms to proactively monitor, moderate, and sometimes censor content, especially politically sensitive content. Despite its patchy and iterative approach, China’s platform regulation is consistent and marked by its distinct characteristics, embodied in its defining of illegal and harmful content, its heavy platform obligations, and its strong reliance on administrative enforcement measures. China’s approach reflects its authoritarian nature and the asymmetrical power relations between the government and private platforms. This chapter also provides a nuanced understanding of China’s approach to platform responsibility, including Chinese platforms’ "conditional liability" for tort damages and the regulators’ growing emphasis on user protection and personal information privacy. This chapter includes a case study on TikTok that shows the interplay between the Chinese approach, overseas laws and regulations, and the Chinese online platform’s content moderation practices.