In this chapter, I discuss the role of personalisation in a wider narrative of the development of democratic societies, framed in terms of digital modernity: a vision of data-driven innovation over networked structures that facilitates socio-environmental control. The chapter deals with narratives of how modernity plays out and is implemented by institutions and technologies, narratives which are inevitably partial and selective in what they foreground and ignore. It begins with a discussion of digital modernity, showing how data-driven personalisation is central to it, and how privacy not only loses its traditional role as a support for individuality, but becomes an obstacle to the technologies that will realise the digitally modern vision. The chapter then develops the concept of the subjunctive world, in which individuals’ choices are replaced by what they would have chosen if only they had sufficient data and insight. Furthermore, the notions of what is harmful to the individual, and the remedies that can be applied, become detached from the individual’s lived experience and reconnected, in the policy space, to the behaviour and evolution of models of the individual and his or her environment.
A core claim of big-data-algorithm enthusiasts – producers, champions, consumers – is that big-data algorithms are able to deliver insightful and accurate predictions about human behaviour. This chapter challenges this claim. I make three contributions. First, I perform a conceptual analysis and argue that big-data analytics is by design a-theoretical and does not provide process-based explanations of human behaviour, making it unfit to support insight and deliberation that are transparent to both legal experts and non-experts. Second, I review empirical evidence from dozens of data sets suggesting that the predictive accuracy of mathematically sophisticated algorithms is not consistently higher than that of simple rules (rules that draw on available domain knowledge or observed human decision-making); rather, big-data algorithms are less accurate across a range of problems, including predicting election results and criminal profiling (the work presented here concerns understanding and predicting human behaviour in legal and regulatory contexts). Third, I synthesize these points to conclude that simple, process-based, domain-grounded theories of human behaviour should be put forward as benchmarks which big-data algorithms, if they are to be considered as tools for personalization, should match in terms of transparency and accuracy.
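To make the benchmarking proposal concrete, the sketch below compares a transparent single-cue rule against a more complex classifier. It is a minimal illustration on synthetic data; the features, models and numbers are assumptions for exposition and do not reproduce the chapter’s empirical review.

```python
# A minimal sketch, assuming synthetic data: a simple, domain-style rule used
# as an accuracy benchmark against a mathematically sophisticated model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# One informative cue plus nine noise features, mimicking a setting where a
# single piece of domain knowledge carries most of the predictive signal.
cue = rng.normal(size=n)
X = np.column_stack([cue, rng.normal(size=(n, 9))])
y = (cue + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simple, process-based rule: predict the outcome whenever the cue is positive.
rule_pred = (X_test[:, 0] > 0).astype(int)

# More complex alternative trained on all features.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
model_pred = model.predict(X_test)

print("simple rule accuracy:  ", accuracy_score(y_test, rule_pred))
print("boosted trees accuracy:", accuracy_score(y_test, model_pred))
```

On data of this kind the transparent rule performs on a par with the boosted-tree model, which is the sense in which simple rules can serve as benchmarks for transparency and accuracy.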
Predictive technologies are now used across the criminal justice system to inform risk-based decisions regarding bail, sentencing and parole, as well as offender management in prisons and in the community. However, public protection and risk considerations also provoke enduring concerns about ensuring proportionality in sentencing and about preventing unduly draconian, stigmatising and marginalising impacts on particular individuals and communities. If we take seriously the principle of individualised justice as desert in the liberal retributive sense, then we face serious (potentially intractable) difficulties in justifying any role for predictive risk profiling and assessment, let alone sentencing based on automated algorithms drawing on big data analytics. In this respect, predictive technologies present us not with genuinely new problems, but merely with a more sophisticated iteration of established actuarial risk assessment (ARA) techniques. This chapter describes some of the reasons why principled and social-justice objections to predictive, risk-based sentencing make any genuinely synthetic resolution or compromise so elusive. The fundamental question regarding predictive technologies is therefore how such a resolution might even be possible without seriously undermining fundamental principles of justice and fairness.
There are various definitions of privacy, and for some time now, privacy harms have been characterized as intractable and ambiguous. In this chapter, I argue that regardless of how one conceptualizes privacy, the ubiquitous nature of IoT devices and the data they generate, together with corporate data business models and programs, create significant privacy concerns for all of us. The brisk expansion of the IoT has increased “the volume, velocity, variety and value of data.”1 The IoT has made readily accessible new types of data that were never before widely available to organizations. IoT devices and connected mobile apps and services observe and collect many types of data about us, including health-related and biometric data.
The IoT allows corporate entities to colonize and obtain access to traditionally private areas and activities while simultaneously reducing our public and private anonymity.
The IoT raises several questions germane to traditional products liability law and the UCC’s warranty provisions. These include how best to evaluate and remedy consumer harms related to insecure devices, malfunctioning devices, and the termination of services and software integral to a device’s operations. Consider that the modern IoT vehicle with an infotainment system generates massive quantities of data about drivers, and that mobile applications can be used to impact the operations of these vehicles.
Over recent years, economists, lawyers and regulators have become increasingly interested in the role played by ‘network effects’ in the digital economy: namely, the phenomenon whereby a platform becomes increasingly valuable to its users the more users it succeeds in recruiting. Whether through user-generated content on YouTube and Facebook, proprietary messaging services such as WhatsApp, or two-sided markets such as Uber and Airbnb, it is now widely recognised that many of today’s most successful technology businesses enjoy a dominance based upon achieving a critical mass of users, which makes it near-impossible for less well-used platforms to compete. What is less widely recognised is that data-driven personalisation operates in a comparable (albeit not identical) manner: as the volume of users increases, personalisation becomes ever more sophisticated, generating a ‘second-order’ network effect that can also have significant implications for the viability of competition. This chapter unpacks the distinction between first-order and second-order network effects, showing how both can create significant barriers to competition. It analyses what second-order network effects imply for how governments can and should regulate data-driven personalisation, and how states might help their citizens to regain control over the value that they create.
In 2015, the US Senate passed a resolution recommending the adoption of a national strategy for IoT development (IoT Resolution).1 Currently, the proposed Developing Innovation and Growing the Internet of Things Act (DIGIT) would establish a federal working group and a steering committee within the Department of Commerce.2 If the act is adopted, the working group, under the guidance of the steering committee, would be charged with evaluating and providing a report containing recommendations to Congress on multiple IoT aspects.3 These areas include identifying federal statutes and regulations that could inhibit IoT growth and impact consumer privacy and security.4
The argument set out in this chapter is that personalisation technologies are fundamentally inimical to the way we have built our legal and political traditions: the building blocks, or the raw materials if you will, that make up the sources of the ‘self’. The advances in the use of personalisation technologies, and their implications for how we understand our political and social lives through law (constitutionalism), hinge on the importance of language and on the risks posed by personalisation technologies to the building of personality and of forms of social solidarity. This chapter explores the centrality of language to agency – how this relationship builds our legal and political traditions – and the risks posed to it by personalisation technologies.
Privacy and information security are distinct but related fields.1 Security focuses on questions surrounding the extent to which related products, systems, and processes can effectively defend against “attacks on confidentiality, integrity and availability of code and information.”2 The field of information security often involves inquiries about the legal consequences of security failures.3 In 2018, The Economist reported that “more than ninety percent of the world’s data appeared in just the past two years.”4 In the last decade there have been multiple large-scale data breaches and inadvertent data exposures that have resulted in the disclosure of millions of our data records.
We now live in a world where we can obtain current information about a global pandemic from our smartphones and Internet of Things (IoT) devices.1 The recent novel coronavirus (COVID-19) outbreak is not just a public health emergency. The pandemic has forced us to further evaluate the extent to which privacy should give way to public health threats and resulting technological innovations.2 It directly raises questions about whether legal frameworks governing our privacy should be relaxed to address public health concerns, and whether any such relaxation will continue post-pandemic and permanently undermine our privacy.3
As we have seen, the law wields considerable influence over the rights and remedies available to us as consumers. Several areas of commercial law are ill-equipped to sufficiently protect our consumer interests in the IoT age. This is because various legal frameworks governing commercial practices have not been sufficiently reformulated to account for the growing connections between the world of privacy and the world of commercial law. As earlier sections of this book have demonstrated, there are multiple legal frameworks impacting commercial practices at the federal and state level that are ripe for significant legal reform. These sources of law include contract law, the FAA, products liability law, the CDA, debt collection law, the Bankruptcy Code, and secured financing laws.