Much of this book has explored the timing and effects of various processes which are initiated as a result of reading particular sentences or types of sentence. To understand the meaning of a sentence, we make use of information from sentence structure and content, but we also make use of information which the reader already has about the situation described. However, the role of such background knowledge in theories of natural language semantics varies between approaches to the problem. In this chapter, we will argue that semantic processing is at least partially driven by the inferences which the interpreter makes, and that the knowledge state of language users is therefore of paramount importance in the process of understanding discourse.
One possible approach to modelling the semantic processes involved in sentence comprehension is exemplified by the Discourse Representation Theory (DRT) account, which is based on a set-theoretic approach to reference (Kamp and Reyle, 1993). Solving reference and scope problems is central to DRT, because inadequate reference resolution leads to incoherence in the representation of a discourse. Thus, in DRT, reference resolution is the process which occurs earliest. There is little if any emphasis on inference and the utilisation of world knowledge in comprehension within such a framework, because the spirit of the approach is to try to capture the facts of language, independent of world knowledge.
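As a concrete illustration (a standard textbook example, not one drawn from this chapter), the discourse 'A farmer owns a donkey' is represented in DRT by a discourse representation structure (DRS) whose universe introduces two discourse referents and whose conditions constrain them:

```latex
% Schematic DRS for "A farmer owns a donkey" (illustrative only).
% Top compartment: discourse referents; bottom compartment: conditions.
\[
\begin{array}{|l|}
\hline
x \quad y \\
\hline
\mathit{farmer}(x) \\
\mathit{donkey}(y) \\
\mathit{owns}(x, y) \\
\hline
\end{array}
\]
```

A subsequent pronoun, as in 'He feeds it', is resolved by equating its referent with an accessible referent already in the structure, which is why inadequate reference resolution leaves the representation of the discourse incoherent.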
From the outset, the idea was to try to draw together parts of three vast areas of inquiry – ethics, computing, and medicine – and produce a document that would be accessible to, and useful for, scholars and practitioners in each domain. Daunting projects are made possible by supportive institutions and colleagues. Fortunately in this case, the support exceeded the daunt.
The idea for the book came while I was at Carnegie Mellon University's Center for Machine Translation and Center for the Advancement of Applied Ethics, and the University of Pittsburgh's Center for Medical Ethics. These three centers of excellence fostered a congenial and supportive environment in which to launch a novel project. Deep thanks are due Jaime Carbonell, Preston Covey, Peter Madsen, Alan Meisel, and Sergei Nirenburg.
In 1992 I organized a session on “Computers and Ethics in Medicine” at the annual meeting of the American Association for the Advancement of Science (AAAS) in Chicago. Four of the contributors to this volume – Terry Bynum, Randy Miller, John Snapper, and I – made presentations. A special acknowledgment is owed to Elliot R. Siegel of the National Library of Medicine and the AAAS Section on Information, Computing, and Communication for encouraging this effort, and for his wise counsel.
The original idea for a book blossomed as it became increasingly clear that there was a major gap in the burgeoning literatures in bioethics and medical informatics.
The birth and evolution of a scientific method is an exciting development. Meta-analysis, described in this chapter as “one of the most important and controversial methodological developments in the history of science,” has changed aspects of scientific inquiry in ways that have not been fully calculated. The technique is nearly as old as this century but as fresh, immediate, and important as this week's journal articles and subsequent lay accounts. In a meta-analysis, the results of previous studies are pooled and then analyzed by any of a number of statistical tools. Meta-analyses are performed on data stored in computers and subjected to computational statistics. The technique grew rapidly in psychology beginning two decades ago and since has become a fixture in the observational investigations of epidemiologists, in reviews of clinical trials in medicine, and in the other health sciences. It has engendered extraordinarily heated debate about its quality, accuracy, and appropriate applications. Meta-analysis raises ethical issues because doubts about its accuracy raise doubts about (1) the proper protections of human subjects in clinical trials; (2) the proper treatment of individual patients by their physicians, nurses, and psychologists; and (3) the correct influence on public policy debates. This chapter lays out ethical and policy issues, and argues for high educational standards for practitioners in each domain.
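To make the pooling step concrete (a textbook fixed-effect formulation, offered for orientation rather than as a method discussed in the chapter itself): if each study i reports an effect estimate with a known variance, the estimates are combined by inverse-variance weighting.

```latex
% Fixed-effect (inverse-variance) pooling of k study-level estimates.
\[
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\sigma_i^2},
\qquad
\operatorname{Var}(\hat{\theta}) = \frac{1}{\sum_{i=1}^{k} w_i}.
\]
```

The arithmetic itself is rarely what is in dispute; the doubts about accuracy concern what feeds it, such as which studies are included, how heterogeneous they are, and whether unpublished results are missing.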
Introduction
The growth of knowledge presents some of the most interesting and demanding problems in all human inquiry.
Despite more than 30 years of development of computer-based medical information systems, the medical record remains largely paper-based. A major impediment to the implementation and use of these systems continues to be the lack of evaluation criteria and evaluation efforts. It is becoming apparent that the successful implementation and use of computer-based medical information systems depends on more than the transmission of technical details and the availability of systems. In fact, these systems have been characterized as radical innovations that challenge the internal stability of health care organizations. They have consequences that raise important social and ethical issues. This chapter provides a thorough historical and sociological context and analyzes how computer-based medical information systems affect (1) professional roles and practice patterns, (2) professional relations between individuals and groups, and (3) patients and patient care. In a point that is crucial for the development of health information systems, the authors argue that, aside from quality control, risk management, or fiscal efficiency, there is an ethical imperative for conducting system evaluations. This means that no commitment to computational advancement or sophistication is sufficient unless it includes a well-wrought mechanism for evaluating health computing systems in the contexts of their use. Failure to perform such evaluations becomes a shortcoming that is itself ethically blameworthy.
Introduction and history
Medical computing is not merely about medicine or computing. It is about the introduction of new tools into environments with established social norms and practices.
Privacy and confidentiality rights to nonintrusion and to enjoyment of control over personal information are so well known as to be regarded by some as obvious and beyond dispute. In unhappy fact, though, confidentiality protections are often meager and feckless because of the ease with which information is shared and the increasing number of people and institutions demanding some measure of access to that information. Health data are increasingly easy to share because of improvements in electronic storage and retrieval tools. These tools generally serve valid and valuable roles. But increased computing and networking power are changing the very idea of what constitutes a patient record, and this increases the “access dilemma” that was already a great challenge. The challenge may be put as follows: How can we maximize appropriate access to personal information (to improve patient care and public health) and minimize inappropriate or questionable access? Note that “personal information” includes not only electronic patient records, but also data about providers – physicians, nurses, and others – and their institutions. This chapter reviews the foundations of a right to privacy and seeks out an ethical framework for viewing privacy and confidentiality claims; identifies special issues and problems in the context of health computing and networks; considers the sometimes conflicting interests of patients, providers, and third parties; and sketches solutions to some of the computer-mediated problems of patient and provider confidentiality.
Telling right from wrong often requires appeal to a set of values. Some values are general or global, and they range across the spectrum of human endeavor. Identifying and ranking such values, and being clear about their conflicts and exceptions, is an important philosophical undertaking. Other values are particular or local. They may be special cases of the general values. So when “freedom from pain” is offered in the essay by Professors Bynum and Fodor as a medical value, it is conceptually linked to freedom, a general value. Local values apply within and often among different human actions: law, medicine, engineering, journalism, computing, education, business, and so forth. To be consistent, a commitment to a value in any of these domains should not contradict global values. To be sure, tension between and among local and global values is the stuff of exciting debate in applied ethics. And sometimes a particular local value will point to consequences that are at odds with a general value. Debates over these tensions likewise inform the burgeoning literature in applied or professional ethics. In this chapter, Bynum and Fodor apply the seminal work of James H. Moor in an analysis of the values that apply in the health professions. What emerges is a straightforward perspective on the way to think about advancing health computing while paying homage to those values.
The intersection of bioethics and health informatics offers a rich array of issues and challenges for philosophers, physicians, nurses, and computer scientists. One of the first challenges is, indeed, to identify where the interesting and important issues lie, and how best, at least initially, we ought to address them. This introductory chapter surveys the current ferment in bioethics; identifies a set of areas of ethical importance in health informatics (ranging from standards, networks, and bioinformatics to telemedicine, epidemiology, and behavioral informatics); argues for increased attention to curricular development in ethics and informatics; and provides a guide to the rest of the book. Perhaps most importantly, this chapter sets the following tone: that in the face of extraordinary technological changes in health care, it is essential to maintain a balance between “slavish boosterism and hyperbolic skepticism.” This means, in part, that at the seam of three professions we may find virtue both by staying up-to-date and by not overstepping our bounds. This stance is called “progressive caution.” The air of oxymoron is, as ever in the sciences, best dispelled by more science.
A conceptual intersection
The future of the health professions is computational.
This suggests nothing quite so ominous as artificial doctors and robonurses playing out “what have we wrought?” scenarios in future cyberhospitals. It does suggest that the standard of care for information acquisition, storage, processing, and retrieval is changing rapidly, and health professionals need to move swiftly or be left behind.
Sophisticated machines to assist human cognition, including decision making, are among the most interesting, important, and controversial machines in the history of civilization. Debates over the foundations, limits, and significance of artificial intelligence, for instance, are exciting because of what we learn about being human, and about what being human is good for. Decision-support systems in the health professions pose similarly exciting challenges for clinicians, patients, and society. If humans have had to accept the fact that machines drill better holes, paint straighter lines, have better memory … well, that is just the way the world is. But to suggest that machines can think better or more efficiently or to greater effect is to issue an extraordinary challenge. If it were clear that this were the case – that computers could replicate or improve the finest or most excellent human decisions – then claims to functional uniqueness would need to be revised or abandoned. Clinical decision making enjoys or tries to enjoy status at the apex of rational human cognition, in part because of the richness of human biology and its enemies, and in part because of the stakes involved: An error at chess or chessboard manufacture is disappointing or costly or vexing, but generally not painful, disabling, or fatal. This chapter explores the loci of key ethical issues that arise when decision-support systems are used, or their use is contemplated, in health care.
If only they could predict the future, health professionals would know in advance who will live, who will die, and who will benefit from this or that treatment, drug, or procedure. Foreknowledge would come in very handy indeed in hospitals and, in fact, has been a goal of medicine since antiquity. Computers dramatically improve our ability to calculate how things will turn out. This means we can use them in clinical decision making and, at the other end of the health care spectrum, in deciding which policy, method, or budget will produce the desired results. This chapter takes as its starting point the use of prognostic scoring systems in critical care and reviews their applications and limitations. It concludes that such systems are inadequate in themselves for identifying instances of clinical futility, in part because it is not logically appropriate to apply outcome scores to individual patients; such scores should be regarded as a point in an evidentiary constellation, and should not alone be allowed to defeat other considerations in the care of critically ill patients. Similarly, the rapid increase in the use of computers to derive practice guidelines across the health care spectrum represents an important extension of requirements that decisions be informed by the best available evidence. But computers cannot determine whether guidelines are applicable in individual cases, or develop guidelines that are. These, like other tasks in health care, belong to humans, especially when resource allocation is at stake.
When errors are made or things go wrong or decisions are beguiled, there is a very powerful and common human inclination to assess and apportion responsibility: Who's to blame? Whether any particular act of commission or omission is morally blameworthy is determined against a broad background of shared values and in the context of more or less deep understandings of causation and responsibility. In this chapter, Professor John W. Snapper develops his distinctive model for evaluating responsibility for computer-based medical decisions. In this model, the computer emerges as an agent that can be deemed responsible for certain kinds of errors. With the goal of promoting appropriate computer use by clinicians, Professor Snapper's distinctive analysis avoids conundrums about causes of harms, and eschews puzzles that often arise in attempting to identify foreseeable risks. Rather, it presents an argument for spreading responsibility and diversifying duty. Professor Snapper argues that appropriately attributing responsibility and duty to computers for certain actions will maximize the social goods that would follow from broader use of machines for decision support. What is increasingly clear is that legal approaches to error and liability in medical computing are inadequate in the absence of thoroughgoing conceptual and ethical analyses of responsibility, accountability, and blame.
Mechanical judgments and intelligent machines
Modern medical practice makes use of a variety of “intelligent machines” that evaluate patient conditions, help in literature searches, interpret laboratory data, monitor patients, and much more.
Categorical structures suitable for describing partial maps, viz. domain structures, are introduced and their induced categories of partial maps are defined.
The representation of partial maps as total ones is addressed. In particular, the representability (in the categorical sense) and the classifiability (in the sense of topos theory) of partial maps are shown to be equivalent (Theorem 3.2.6).
Finally, two notions of approximation, contextual approximation and specialisation, based on testing and observing partial maps, are considered and shown to coincide. It is observed that the approximation of partial maps is definable from testing for totality and the approximation of total maps, providing evidence for taking the approximation of total maps as primitive.
Categories of Partial Maps
To motivate the definition of a partial map, observe that a partial function u : A ⇀ B is determined by its domain of definition dom(u) ⊆ A and the total function dom(u) → B induced by the mapping a ↦ u(a). Thus, every partial function A ⇀ B can be described by a pair consisting of an injection D ↣ A and a total function D → B with the same source.
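A programming-flavoured sketch may help (illustrative names, not notation from the text): the two descriptions of a partial map, as a span and as a total map into a 'lifted' codomain, can be written side by side, with the Maybe type playing the role of the classifier anticipated by Theorem 3.2.6.

```haskell
-- A partial map A ⇀ B in two equivalent guises (illustrative sketch).

-- (1) Span form: the domain of definition together with the induced total
--     map, carried here as a finite graph with distinct first components,
--     so the injection dom(u) ↣ A is left implicit.
newtype Span a b = Span [(a, b)]

-- (2) Classified form: a total map into the lifted codomain, A -> Maybe B.
toClassified :: Eq a => Span a b -> (a -> Maybe b)
toClassified (Span graph) = \a -> lookup a graph

-- Example: a partial reciprocal, defined only on a few integers.
recipSpan :: Span Int Double
recipSpan = Span [(1, 1.0), (2, 0.5), (4, 0.25)]

-- toClassified recipSpan 2 == Just 0.5
-- toClassified recipSpan 3 == Nothing
```

The passage between the two guises is the informal content of the equivalence of representability and classifiability noted above.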
In this chapter we study the categorical constructions for interpreting data types. We start by observing that the notion of pairing in a category of partial maps (with a minimum of structure) cannot be the categorical product. The appropriate interpretation for product types (partial products) is the categorical product in the category of total maps endowed with a pairing operation on partial maps extending the pairing of total maps. Once the notion of product is established, partial exponentials are defined as usual, and some properties of Poset-partial-exponentials are presented. Next colimits are studied. The situation is completely different from that of limits. For example, an object is initial in the category of total maps if and only if it is so in the category of partial maps. A characterisation of certain colimits (including coproducts) in a category of partial maps, due to Gordon Plotkin, is given. We further relate colimits in the category of total maps and colimits in the category of partial maps by means of the lifting functor. Finally, we provide conditions on a Cpo-category of partial maps under which ω-chains of embeddings have colimits. This is done in the presence of the lifting functor, and for arbitrary categories of partial maps.
Partial Binary Products
The data type for pairing in the category pK of partial maps cannot be the categorical product because, under reasonable assumptions, this would lead to inconsistency.
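One standard way to see the difficulty (sketched here for intuition, not as the chapter's argument): for partial maps f : C ⇀ A and g : C ⇀ B, the evident pairing is defined only where both components are, so composing with a projection does not recover the original map.

```latex
% The evident pairing restricts each component to the other's domain of
% definition, so the product equations fail for genuinely partial maps.
\[
\mathrm{dom}\,\langle f, g\rangle = \mathrm{dom}(f) \cap \mathrm{dom}(g),
\qquad
\pi_1 \circ \langle f, g\rangle = f|_{\mathrm{dom}(g)} \neq f
\quad \text{in general.}
\]
```

Insisting on the universal property of the product for all such pairs, in the presence of genuinely partial maps, is what yields the inconsistency; this is why the appropriate notion is instead a pairing operation on partial maps extending the categorical product of total maps.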
We have initiated an abstract approach to domain theory as needed for the denotational semantics of deterministic programming languages. To provide an explicit semantic treatment of non-termination, we decided to make partiality the core of our theory. Thus, we focussed on categories of partial maps. We have studied the representability of partial maps and shown its equivalence with classifiability. We have observed that, once partiality is taken as primitive, a notion of approximation may be derived. In fact, two notions of approximation based on testing and observing partial maps have been considered and shown to coincide. Further, we have characterised when the approximation relation between partial maps is domain-theoretic in the (technical) sense that the category of partial maps Cpo-enriches with respect to it.
Concerning the semantics of type constructors in categories of partial maps we have: presented a characterisation of colimits of diagrams of total maps due to Gordon Plotkin; studied order-enriched partial cartesian closure; and provided conditions to guarantee the existence of the limits needed to solve recursive type equations. Concerning the semantics of recursive types we have: made Peter Freyd's notion of algebraic compactness the central concept; motivated the compactness axiom; established the fundamental property of parameterised algebraically compact categories (slightly extending a previous result of Peter Freyd); and shown that in algebraically compact categories recursive types reduce to inductive types. Special attention has been paid to Cpo-algebraic compactness, leading to the identification of a 2-category of kinds with very strong closure properties.
We thoroughly study the semantics of inductive and recursive types. Our point of view is that types constitute the objects of a category and that type constructors are bifunctors on the category of types. By a bifunctor on a category we mean a functor on two variables from the category to itself, contravariant in the first, covariant in the second.
First, following Peter Freyd, the stress is on the study of algebraically complete categories, i.e. those categories admitting all inductive types (in the sense that every endofunctor on them has an initial algebra—this is understood in a setting in which the phrase “every endofunctor” refers to a class of enriched endofunctors—see Definition 6.1.4). After observing that algebraic completeness guarantees the existence of parameterised initial algebras, we identify, under the name of parameterised algebraically complete categories, all those categories which are algebraically complete and such that every parameterised inductive type constructor gives rise to a parameterised inductive type (see Definition 6.1.7). Type constructors on several variables are dealt with by Bekič's Lemma, from which follow both the Product Theorem for Parameterised Algebraically Complete Categories (Theorem 6.1.14) and the dinaturality of Fix (the functor delivering initial algebras).
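A small functional-programming analogue may help fix intuitions (a sketch in Haskell; it ignores the enrichment and size issues the text is careful about): an inductive type is the carrier of an initial algebra for its signature functor, and the catamorphism is the unique algebra homomorphism out of it.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Least fixed point of a signature functor: carrier of an initial F-algebra.
newtype Fix f = In { out :: f (Fix f) }

-- The unique algebra homomorphism from the initial algebra (catamorphism).
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Signature functor for the natural numbers: N(X) = 1 + X.
data NatF x = Zero | Succ x deriving Functor

type Nat = Fix NatF

-- Interpreting Nat into Int via an algebra NatF Int -> Int.
toInt :: Nat -> Int
toInt = cata alg
  where
    alg Zero     = 0
    alg (Succ n) = n + 1

-- Example: toInt (In (Succ (In (Succ (In Zero))))) evaluates to 2.
```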
Second, again following Peter Freyd, algebraic completeness is refined to algebraic compactness by imposing the axiom that, for every endofunctor, the inverse of an initial algebra is a final coalgebra. The compactness axiom is motivated with a simple argument showing that every bifunctor on an algebraically compact category admits a fixed-point.
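Continuing the sketch above (reusing Fix, out, and cata, with the same caveats): the compactness axiom asks that the inverse of the initial algebra's structure map be a final coalgebra; in the lazy setting of the sketch, out plays that role and the anamorphism is the unique coalgebra homomorphism into it.

```haskell
-- Reusing Fix, out, and cata from the previous sketch.

-- Anamorphism: the unique coalgebra homomorphism into the final coalgebra,
-- whose structure map (out) is the inverse of the initial algebra (In).
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = In . fmap (ana coalg) . coalg

-- Folding after unfolding gives the familiar hylomorphism; the coincidence
-- of initial algebra and final coalgebra on one carrier is what makes such
-- definitions canonical.
hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = cata alg . ana coalg
```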