
Assessing health technologies in a changing world

Published online by Cambridge University Press:  01 July 2009

Stuart S. Blume*
Affiliation:
University of Amsterdam

Abstract

The author argues that a “narrowing down” in the scope of HTA that occurred at the end of the 1970s was paralleled by developments in bioethics at the same time. Both disciplines responded to changes in the institutional and political field in which they operated. Over the past 20 years, decision making in the health field has changed again. To remain relevant, HTA must evolve further. Building in mechanisms for consultation with stakeholders will be an important element in this adjustment.

Type: Commentaries, Views, and Developments in HTA

Copyright © Cambridge University Press 2009

THE “NARROWING DOWN” OF HTA

In February 1975, the U.S. Congress invited its recently established Office of Technology Assessment (OTA) to conduct a study of the kinds of justifications that should be required before costly new medical technologies are introduced into practice. An important stimulus to this request was Archie Cochrane's influential book Effectiveness and Efficiency (5). The reports that the OTA produced, starting in the mid-1970s, began to define the objectives, scope, and methods of what was to become an important tool of health policy analysis in the years to come.

OTA staff, led by David Banta, interpreted their mandate broadly. Assessing health technologies, as they initially conceived the objective of the new field, should entail consideration not only of the clinical consequences of the diffusion and use of new procedures or techniques, but also the economic, ethical, and social consequences. It would "provide facts as a basis not only for clinical decision making, but also for policy making in health care as a societal endeavour" (2, p 431). This was the approach mapped out in the OTA's first report to Congress on health technology (15). It argued that the likely implications of a new medical technology should be assessed during the research and development phase, and that these implications should be broadly conceived, including any questions of justice, fairness, and access that might arise.

It was agreed that the first technology to be assessed would be the CT scanner: an expensive device then being widely adopted, but for whose utility in patient care little evidence was available. As Banta and Perry (2) put it, "the prototype of a high technology device," the CT scanner, "was visible, exciting, and expensive. . . . It was a public policy issue during the mid-1970s [in various countries]. It surely stimulated the beginnings of interest in health care TA in many countries" (p 433).

Banta and his staff appreciated that assessing the different consequences of health technologies required distinctive methodological approaches. Safety and efficacy, which could be established using epidemiological data and the results of controlled trials, would be the most straightforward. Moreover, these, together with costs, were the aspects that principally concerned Congress. OTA was discouraged by the staff of its Congressional Board from pursuing the wider social implications, which were viewed as a matter of politics (Banta, personal communication, January 13, 2009). Reflecting Congressional concerns, the second report, dealing with the CT scanner, focused on efficacy, safety, and financial costs alone (16). Efficacy, safety, and cost-effectiveness were the characteristics of medical technologies on which HTA gradually came to focus, as the earlier and broader agenda was abandoned.

Several authors have pointed to and regretted this narrowing down of the HTA agenda. There has been some discussion of why it occurred. The availability of methods and data played a part: "the methods for assessing social implications are relatively undeveloped" (1). Political priorities and pressures also played a part, certainly as far as the early work of OTA was concerned. Rather differently, reflecting on the early years of HTA in the United Kingdom, Faulkner has argued that it was the concern of HTA practitioners to produce generalizable and value-free conclusions that precluded consideration of social and ethical issues (9).

A few years ago, Pascale Lehoux and I developed an argument related to that of Faulkner. We started from the claim that HTA had neglected all that social scientists had established regarding the nature of technology and of technological change (13). We, too, tried to explain why the field had evolved as it had, despite the concerns of leading practitioners. Was it, we asked, because practitioners lacked the skills that such ambitious, broad assessments would have required? To some degree, this was so. More important, we believed, was an early commitment to establishing the scientific legitimacy of the new field, bearing in mind the methodological standards of the medical profession. It was because the randomized controlled trial (RCT) was taken to be the gold standard in medicine's "hierarchy of evidence" that "meta-analysis of published results of RCTs became the most popular method for HTA agencies to draw recommendations for policy makers." The consequence, we argued, was that "[t]he scope of assessments, and their subsequent interpretation, is typically limited to the available clinical or epidemiological evidence and to the costs and benefits associated with a particular health technology once it has reached the stage of clinical application" (13, p 1099).

The principal claim of this study was that the institutional context within which HTA functioned enabled us to understand why its focus had become so narrow. First, the data on which assessments are based incorporate and generally reflect the perspectives of powerful actors in the healthcare system: medical professionals, healthcare managers, and third-party payers. Second, the status and legitimacy of HTA professionals as expert advisors in the policy process depend on their expertise being accepted as objective and uncompromised by value judgments or political commitments. "To the extent that they recognize this, and to the extent that their ultimate influence on the political process is regarded as a mark of professional achievement, we can expect experts to stress the apolitical or value-free nature of their work and to seek refuge behind 'hard' evidence" (13, p 1101). The typical form of assessments, we suggested, and the field's unwillingness to address "potentially controversial nonquantifiable issues," reflected practitioners' sense of the configuration of power and influence within which they operated.

Interventions such as prenatal genetic diagnosis (PGD), organ and xenotransplantation, and stem cell therapy are controversial. Different sets of values, religious beliefs, and interests lead to conflicting views regarding their acceptability and the conditions under which they should be made available. Evidence-gathering and synthesizing approaches such as HTA have avoided dealing with interventions like these, which raise questions that cannot be resolved on the basis of empirical evidence. Policy makers, in need of guidance, recognized that determining what should and should not be allowed in cases like these demanded a broad consideration of the public good. Bioethics was to help.

LESSONS FROM BIOETHICS

Like HTA, bioethics emerged in the mid- to late 1970s. Although there is a good deal of disagreement about its origins, it was initially conceived as having the task of determining, in so far as possible, the rights and wrongs of new advances in biomedicine and their technological deployment (10). Rights and wrongs would be debated in terms of the fundamental principles of "respect for persons," "beneficence," and "justice." Over the course of time, the philosophical complexities of these concepts were set aside. Bioethics was reduced to a set of simple principles that demanded no skills in philosophical reasoning and could be applied in practice. Thus, "respect for persons" became "autonomy," and "autonomy" became "informed consent" (10, pp 135-140). The "stripped-down, monolithic version of principlism," which by the 1980s dominated bioethics in the United States and in much of the world, can be seen as a response of the emerging field to the cultural, legal, and institutional environment in which it was practiced.

Although what mattered here were issues of legal and political regulation, and the protection of individual rights, rather than healthcare planning and expenditures, the evolution of bioethics parallels that of HTA in the same historical period. Here too, as in the development of HTA, there was a retreat from the field's initial aspirations. The broader concerns, with the likely effects of biomedical knowledge and its application on the human condition, and with how biomedicine could best be deployed in the interests of human welfare, were lost. As with HTA, there are bioethicists, too, who regret what has happened to their discipline: its loss of concern with questions of distributive justice (7), with the broader impacts of biomedical advance on human welfare (4) and on communities (17). They regret the emphasis that, as in medicine itself, has come to be placed on the autonomy of the individual patient and on the responsibility of the clinician to the single patient.

Some social science critics have gone further, arguing that bioethics has been intended to deflect attention and criticism from the more fundamental problem of medical practice, namely, the inequalities of power, privilege, and authority within which medical encounters take place. In the past few years, positions have softened, and some on both sides would now subscribe to the view that ethical principles have to be interpreted in a socially and culturally sensitive (and thus contingent) manner.

The parallels are striking. They help us better understand what has happened. Both HTA and bioethics, in their different ways, were intended as tools for the political regulation of biomedical advance. Both emerged at a time when technology was subject to widespread social and political critique, and both sought, initially, to provide answers to the critical questions being posed. In the course of time, as criticism of technology waned and technological innovation was increasingly embraced, each discipline was narrowed down and stripped of its critical function. In other words, each field adjusted to the changing political and institutional climate in order to remain credible and relevant.

The comparison with bioethics is also instructive. In 1995, the psychologist and bioethicist William Gardner posed the question of whether it would be possible to prohibit parents from producing genetically enhanced children with desired characteristics, even if ethicists agreed that such a prohibition was warranted (11). He was doubtful. "Prohibition of genetic enhancement is likely to fail because it will be undermined by the dynamics of competition among parents and among nations." The more other parents do it, the more attractive it becomes to any single new parent, both because they will wish to give their child this advantage and because they will by then be reassured regarding the risks. "Both nations and parents have strong incentives to defect from a ban on human genetic enhancement, because enhancements would help them in competition with other parents and nations." Gardner's argument, in other words, was that the interests involved were so powerful that ethicists (and, one might add, other experts in analysis, such as statisticians and epidemiologists) would be unable to "hold the line."

LEGITIMACY AND CONSULTATION

Where decisions are made within a medical arena and (ostensibly) on medical grounds, the quality of the evidence becomes crucial. Rigorous and robust trial data, taking costs and quality-adjusted life-years (QALYs) or disability-adjusted life-years into account, should provide adequate guidance and justification. However, this depends on decision making being "contained" in an expert, medical forum, and not "leaking out" into politics or the courts. When such "leakage" occurs, and decision making becomes a matter of political or juridical deliberation, the status of evidence changes. The following example shows this clearly.
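Before turning to that example, the kind of summary measure at stake can be made concrete. In an expert forum of this sort, cost-effectiveness is conventionally expressed as an incremental cost per QALY gained; the formulation below is a minimal, generic sketch (the notation is illustrative only and is not drawn from any particular agency's methods guidance):

\[
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E} \;=\; \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}},
\qquad
E \;=\; \sum_{t} u_{t}\,\Delta t_{t},
\]

where the incremental cost of the new technology is divided by its incremental effect, and effects $E$ are measured in QALYs: the time spent in each health state, weighted by a utility $u_t$ between 0 (death) and 1 (full health). It is precisely the authority of such apparently value-free summaries that is in question once decision making "leaks out" of the expert forum.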

In the early 1990s, the Canadian National Breast Screening Study had shown that annual mammograms were of value to women in their fifties, but not to women in their forties. The conclusion, that younger women should therefore not be offered annual mammograms, infuriated radiologists. In the medical and popular press, on television, and at medical conferences, radiologists attacked the quality of the study. There were even hints of fraud. They, in turn, were accused of trying to defend a lucrative practice. However, more was at stake than status and income alone. The sociologist Patricia Kaufert interviewed some of the radiologists (12). She found that, for them, "being able to see and show where the tumor lies has an intense reality, besides which the epidemiologists' statistics on changing mortality rates become mere manipulations of a series of numbers." In the end, after a long and bitter struggle, and against the advice of the National Institutes of Health, screening for younger women was restored by a vote in the U.S. Senate. In a purely medical arena, epidemiological evidence might have had the greater authority. However, in a political arena, influenced no doubt by the demands of women arguing for broader coverage, clinical experience proved the more powerful.

The politics of health care are changing, and situations like this are becoming more common: a result of growing demands for wider participation and transparency, declining trust in experts, and the growing economic interests involved in medical goods and services. What happens when an expert forum loses its legitimacy: when its approval or guarantee can no longer count on widespread trust? For many regulatory bodies in the medical and health fields, gaining the trust of the public is crucial to their legitimacy. Regulatory bodies concerned with controversial fields such as genetics or stem cell research have been forced to accommodate the wide range of conflicting perspectives set out by patient groups, religious groups, industrial associations, and scientists. To do so, and seeking to establish the legitimacy of their conclusions, they have broadened their memberships to include members of the public, and they have developed consensus conferences and other consultative mechanisms. It appears that legitimacy is to be derived from a process of democratic deliberation. Because I believe this change in the politics, and specifically the regulation, of health care has major implications for HTA, I would like to elaborate on it.

I can illustrate my point by reference to my own long-term study of cochlear implantation in children: an intervention that has been surrounded by controversy since its origins in the 1970s (3). As is well known, signing deaf people do not regard deafness as a defect to be treated but as the basis of a distinctive culture and way of life. From the 1970s onward, deaf communities in various countries have protested at a procedure that, in their view, threatens their community and violates their rights as a linguistic minority. The arguments on both sides are complex and not relevant here. Relevant to the purposes of this essay is the following: over the past decade or so, the subject of pediatric implantation has been taken up by forums in several countries that appeared, on the face of it, to be concerned with producing consensus out of conflicting views.

NIH Consensus Development Conference 1995

NIH Consensus Development Conferences, a standard element of NIH practice, are organized according to an established procedure. An independent panel of experts is appointed and given the job of preparing a consensus statement. It meets for a day or two in public, hears presentations by investigators in the field, and discusses questions raised by the audience. In May 1995, NIH convened a second Consensus Development Conference on the subject of cochlear implants. In this case, ten of the fourteen-member expert panel were specialists in otolaryngology, hearing science/audiology, or biomedical engineering. The consensus document subsequently produced notes that "The conference was convened to summarize current knowledge about the range of benefits and limitations of cochlear implantation that have accrued to date. Such knowledge is an important basis for informed choices for individuals and their families whose philosophy of communication is dedicated to spoken discourse" (14). The introduction goes on to explain that "Issues relating to the acquisition of sign language were not directly addressed by the panel, because the focus of the conference was on new information on cochlear implantation technology and its use." Having limited itself in this way, the conference avoided the controversy entirely.

French National Consultative Ethics Committee on Health and Life Sciences “Opinion” 1994

Established in 1983 by the President of the French Republic, the National Consultative Ethics Committee on Health and Life Sciences (CCNE) is a consultative body with a broad mandate. Its mission, as redefined in a law of 2004, is "to give opinions on ethical problems and societal issues raised by progress in the fields of biology, medicine, and health." The Committee takes up issues referred to it by the country's President, by a member of the government, or by a public institution working in the research or health field. In addition, if it so decides, the Committee may take up an issue referred to it by a private individual or by one of its own members. The subject of pediatric implantation was referred to the CCNE by a radical movement (Sourds en Colère) together with a group of psychologists, sociologists, linguists, and educators. This group invited the Committee to consider whether, given the uncertainties surrounding the social, psychological, and linguistic implications of implanting children, the practice should be ruled experimental under French law. The Committee agreed to take the issue up, and issued its Opinion at the end of 1994 (6). Although the CCNE rejected the claim that the procedure should be defined as experimental (on the ground that the practice was already too institutionalized), it did conclude that, to avoid the risk of compromising children's social and psychological development, all deaf children should be offered sign language from an early age, whether or not they would later be candidates for implantation.

Dutch Platform 1995–99

The Dutch Platform, an ad hoc consultative forum, emerged from a 2-day meeting at which representatives of the implant teams, parents of deaf children, and members of the Deaf community had debated the issues raised. The Platform that was then established, under an independent chair, contained representatives of each group: a form of deliberation that is well established in the Netherlands. The Platform initially functioned well as a place at which differences of opinion could be debated. The need for long-term research on the effects of implantation, for example, was an issue on which agreement could be found. However, it turned out that consensus was possible only as long as nothing was at stake. When (in 1997) the search for consensus appeared to be holding up Ministerial approval of reimbursement, unbridgeable tensions emerged, and when (in 1999) reimbursement was approved, the implant teams were no longer interested in participating in the Platform and it collapsed.

The NIH Consensus Development Conference defined "consensus" so narrowly as to exclude all matters of controversy from its remit. The French National Ethics Committee came to a conclusion that pleased the French Deaf community but not the medical profession. In the event, its report had little or no influence on the course of events: a clear example of ethicists being unable to "hold the line." Although lacking any official standing, the Dutch Platform did bring representatives of all stakeholders together over a considerable period of time. Although it approached most closely the deliberative democratic ideal, a precondition for legitimacy in the regulatory field, it too failed. In none of these cases was sufficient attention paid to the critically important nature of the consultation process.

The National Institute for Clinical Excellence (NICE) in the United Kingdom shows something very different. NICE was established in 1999 with the task of synthesizing and reviewing evidence regarding clinical practice and making recommendations regarding the effectiveness and cost-effectiveness of specific interventions. Quite rapidly, NICE concluded that its ways of working, its processes, were of critical importance if the guidelines it formulated were to command a sufficient degree of support. The result has been a complex and continuously evolving process involving multiple interactions with stakeholder groups (8).

HTA IN AN ERA OF DELIBERATIVE DEMOCRACY

The task of HTA remains, as it was from the start, that of providing the evidence on which decisions can rationally and legitimately be based. What needs to be acknowledged is that the nature of decision making in the health field has changed substantially in the past two or three decades. The range of involved and informed stakeholders has grown, and with it the possibility of controversy. This means that, for some technologies at least, a wider range of aspects of a technology needs to be considered: an initial "scan" should show to what extent this applies in the case of any specific technology to be assessed (Banta, personal communication, January 13, 2009). In other words, so far as some health technologies are concerned, the initial agenda has to be revisited (as is happening in bioethics too). In these cases at least, the claim of HTA to provide universally valid conclusions regarding healthcare technologies must be abandoned. After all, to continue with my example of the cochlear implant, if the experience of deafness differs so greatly from one country to another (as it does), then the significance and the value of any kind of aural prosthesis must also differ correspondingly. The contingent nature of technologies has to be acknowledged in the assessment process. Finally, drawing on the experience of NICE, it will be essential for HTA to build mechanisms of consultation with stakeholders into its practice at each step: from the identification of technologies to be assessed, to the determination of the dimensions to be considered, to the interpretation of results. In many countries now establishing HTA in its existing form, but with no tradition of public consultation, this will be a particularly difficult step to take.

CONTACT INFORMATION

Stuart S. Blume, MA, D.Phil, Emeritus Professor, Department of Sociology & Anthropology, University of Amsterdam, O.Z. Achterburgwal 185, 1012 DK Amsterdam, The Netherlands


REFERENCES

1. Banta HD, Luce BR. Health technology and its assessment: An international perspective. New York: Oxford University Press; 1993.
2. Banta HD, Perry S. A history of ISTAHC. A personal perspective on its first ten years. Int J Technol Assess Health Care. 1997;13:430-453.
3. Blume SS. The rhetoric and counter rhetoric of a 'bionic' technology. Sci Technol Hum Values. 1997;22:31-56.
4. Callahan D. Individual good and common good: A communitarian approach to bioethics. Perspect Biol Med. 2003;46:496-507.
5. Cochrane AL. Effectiveness and efficiency. London: Nuffield Provincial Hospitals Trust; 1971.
6. Comité Consultatif National d'Ethique pour les Sciences de la Vie et de la Santé (CCNE). Avis sur l'implant cochléaire chez l'enfant sourd pré-lingual. Paris; 1994.
7. Daniels N. Equity and population health: Toward a broader bioethics agenda. Hastings Cent Rep. 2006;36:22-35.
8. Davies C. Grounding governance in dialogue? Discourse, practice and the potential for a new public sector organizational form in Britain. Public Adm. 2007;85:47-66.
9. Faulkner A. 'Strange bedfellows' in the laboratory of the NHS? An analysis of the new science of health technology assessment in the United Kingdom. In: Elston ME, ed. The sociology of medical science and technology. Oxford: Blackwell; 1997:183-207.
10. Fox RC, Swazey JP. Observing bioethics. New York and Oxford: Oxford University Press; 2008.
11. Gardner W. Can human genetic enhancement be prohibited? J Med Philos. 1995;20:65-84.
12. Kaufert PA. Screening the body: The pap smear and the mammogram. In: Lock M, Young A, Cambrosio A, eds. Living and working with the new medical technologies. Cambridge: Cambridge University Press; 2000:165-183.
13. Lehoux P, Blume SS. Technology assessment and the sociopolitics of health technologies. J Health Polit Policy Law. 2000;25:1083-1120.
14. National Institutes of Health. NIH Consensus Statement: Cochlear implants in adults and children. Bethesda, MD: NIH; 1995:13.
15. US Congress, Office of Technology Assessment (OTA). Development of medical technology: Opportunities for assessment. Washington, DC: US Government Printing Office; 1976.
16. US Congress, Office of Technology Assessment (OTA). Assessing the efficacy and safety of medical technologies. Washington, DC: US Government Printing Office; 1978.
17. Weijer C. Protecting communities in research: Philosophical and pragmatic challenges. Camb Q Healthc Ethics. 1999;8:501-513.