Intended for researchers and practitioners in interaction design, this book shows how Bayesian models can be brought to bear on problems of interface design and user modelling. It introduces and motivates Bayesian modelling and illustrates how powerful these ideas can be in thinking about human-computer interaction, especially in representing and manipulating uncertainty. Bayesian methods are increasingly practical as computational tools to implement them become more widely available, and offer a principled foundation to reason about interaction design. The book opens with a self-contained tutorial on Bayesian concepts and their practical implementation, tailored for the background and needs of interaction designers. The contributed chapters cover the use of Bayesian probabilistic modelling in a diverse set of applications, including improving pointing-based interfaces; efficient text entry using modern language models; advanced interface design using cutting-edge techniques in Bayesian optimisation; and Bayesian approaches to modelling the cognitive processes of users.
The chapter focuses first on the origins of the right to die and its intersections with the development of life-sustaining medical technologies. The analysis then distinguishes between a right to refuse medical support (either currently or through an advance directive) and the recognition of some form of active assistance in dying, taking into account the principal elements of the American, Canadian, European and Chinese legal frameworks.
In dealing with the issue at the heart of this paper, a fundamental question has to be tackled in greater depth: is the right of access to the Internet a human right (or a fundamental right: below I attempt a terminological clarification in this regard) that enjoys semantic, conceptual and constitutional autonomy? In other words, is access to the Internet an autonomous right, or only a precondition for enjoying, among others, freedom of expression? Why does the classification as a free-standing or derived right matter? Does it carry normative implications, or is it primarily a rhetorical tool? In trying to answer these questions, it may be beneficial to resist the temptation to rely on a “rhetoric” of fundamental rights and human rights, which is widespread throughout the various debates concerning the relationship between law and technology since the rise of the Internet. The language of rights (especially new rights) in Internet law is more than (rhetorically) appealing.
Neurorights are novel human rights that specify areas of protection from potential abuses of neurotechnologies. They protect mental privacy, mental freedom and fair access to neuroenhancement. We discuss neurorights research and advocacy, including the Chilean constitutional amendment and neuroprotection bill of law, which explicitly protect neurorights and adopt a medical model for the regulation of all neurotechnologies, defining them as medical devices. These Chilean bills could serve as a model for legislation elsewhere.
This chapter focuses on m-Health, i.e. technologies offered through mobile devices, with particular regard to those having a specific health purpose. The contribution highlights that the mass use of these technologies raises many challenges for national and European legislators, who now face a twofold task: ensuring the safety and reliability of the data generated by these products, and protecting patients’/consumers’ privacy and confidentiality. From the first perspective, such software may sometimes be classified as a medical device, although this classification is not always straightforward since there may be “borderline products”. If software is classified as a medical device, its safety and efficacy are guaranteed by the applicability of the relevant regulations, which dictate specific prerequisites, obligations and responsibilities for manufacturers as well as distributors. From a data protection perspective, the mass use of these technologies allows the collection of huge amounts of personal data, both sensitive data (relating to health conditions) and data that can nonetheless contribute to the creation of detailed user profiles.
Telemedicine is the delivery of healthcare services by means of information and communication technologies. Although it was initially conceived as a means of overcoming geographical barriers and dealing with emergency situations, the spread of telemedicine in daily practice is reshaping the innermost features of medical practice and shifting organisational patterns in healthcare. Advocates of telemedicine argue that it will redesign healthcare accessibility, improving service quality and optimising costs. However, the use of telemedicine raises a number of ethical, legal and social issues, an overview of which is given in this chapter. The second section deals with the EU policy for the promotion of telemedicine, and reference is made to the provisions offered by the European Telehealth Code. In the third section, some of the major ethical concerns raised by telemedicine are discussed. The fourth section addresses the role of telemedicine in the management of the COVID-19 health emergency. In the conclusions, it is argued that adequate policies and rules are required to ensure the consistent spread and safe use of telemedicine as an alternative to in-person healthcare.
Algorithmic transparency is the basis of machine accountability and the cornerstone of policy frameworks that regulate the use of artificial intelligence techniques. The goal of algorithmic transparency is to ensure accuracy and fairness in decisions concerning individuals. AI techniques replicate bias, and as these techniques become more complex, bias becomes more difficult to detect. But the principle of algorithmic transparency remains important across a wide range of sectors. Credit determinations, employment assessments, educational tracking, as well as decisions about government benefits, border crossings, communications surveillance and even inspections in sports stadiums increasingly rely on black box techniques that produce results that are unaccountable, opaque, and often unfair. Even the organizations that rely on these methods often do not fully understand their impact or their weaknesses.
Although lay participation has long been a feature of scientific research, the past decades have seen an explosion in the number of citizen science projects. Simultaneously, the number of low-cost network-connected devices collectively known as Internet of Things devices has proliferated. The increased use of Internet of Things devices in citizen science has coincided with a reconsideration of the right to science under international law. Specifically, the Universal Declaration of Human Rights and the International Covenant on Economic, Social and Cultural Rights both recognise a right to benefit from and participate in the scientific process. Whilst it is unclear whether this right protects participation by citizen scientists, it provides a useful framework to help chart the ethical issues raised by citizen science. In this chapter, we first describe the origins and boundaries of the right to science, as well as its relevance to citizen science. We then use the findings of a scoping review to examine three main ethical and legal issues for using Internet of Things devices in citizen science.
Human behaviour is increasingly governed by automated decisional systems based on machine learning (ML) and ‘Big Data’. While these systems promise a range of benefits, they also throw up a congeries of challenges, not least for our ability as humans to understand their logic and ramifications. This chapter maps the basic mechanics of such systems, the concerns they raise, and the degree to which these concerns may be remedied by data protection law, particularly those provisions of the EU General Data Protection Regulation that specifically target automated decision-making. Drawing upon the work of Ulrich Beck, the chapter employs the notion of ‘cognitive sovereignty’ to provide an overarching conceptual framing of the subject matter. Cognitive sovereignty essentially denotes our moral and legal interest in being able to comprehend our environs and ourselves. Focus on this interest, the chapter argues, fills a blind spot in scholarship and policy discourse on ML-enhanced decisional systems, and is vital for grounding claims for greater explicability of machine processes.