16.1 Introduction
Failure in health research regulation is nothing new. Indeed, the regulation of clinical trials was developed in response to the Thalidomide scandal, which occurred some fifty years ago.Footnote 1 Yet, health research regulation is at the centre of recent failures.Footnote 2 Metal-on-metal hip replacements,Footnote 3 and, more recently, mesh implants for urinary incontinence and pelvic organ prolapse in women – often referred to as ‘vaginal mesh’ – have been the subject of intense controversy.Footnote 4 Some have even called the latter controversy ‘the new Thalidomide’.Footnote 5 In these cases, previously licensed medical devices were used to demonstrate the safety of supposedly analogous new medical devices and to obviate the need for health research involving humans.Footnote 6
In this chapter, I use health research regulation for medical devices to examine the regulatory framing of harm through the language of technological risk, i.e. risk relating to safety. My overall argument is that reliance on this narrow discourse of technological risk in the regulatory framing of harm may marginalise stakeholder knowledges of harm, producing a limited knowledge base. That limited knowledge base may itself underlie harm and, in turn, lead to the construction of failure.
I understand failure itself in terms of this framing of harm.Footnote 7 Failure is taken to be ontologically and normatively distinct from harm, and to implicate the design and functioning of the system or regime itself. Failure is understood as arising when harm is deemed to thwart expectations of safety built into technological framings of regulation. This judgment is usually made from stakeholder perspectives; stakeholders include research participants, patients and other interested parties. However, the new force of failure in public discourse and regulation,Footnote 8 apparent in the way it ‘now saturates public life’,Footnote 9 ensures that the language of failure provides a means to integrate stakeholder knowledges of harm with scientific–technical knowledges.
In the next section, I use health research relating to medical devices to reflect on the role of expectations and harm in constructing failure. This sets the scene for the third section, where I outline the roots of failure in the knowledge base for regulation. Subsequently, I explain how the normative power of failure may be used to impel the integration of expert and stakeholder knowledges, improving the knowledge base and, in turn, providing a better basis on which to anticipate and prevent future failures. The chapter thus appreciates how failure can amount to a ‘failure of foresight’, which may mean it is possible to ‘organise’ failure and the harm it describes out of existence.Footnote 10
16.2 Expectations and Failure in Health Research
Failure has long been understood, principally though not exclusively, in Kurunmäki and Miller’s words, ‘as arising from risk rather than sin’.Footnote 11 Put differently, failure can be understood in principally consequentialist, rather than deontological, terms.Footnote 12 This understanding does not exclude legal conceptualisations of failure in tort law and criminal law, in which the conventional idea of liability is one premised on ‘sin’ or causal contribution.Footnote 13 However, within contemporary society and regulation, such deontological understandings are often overlaid with a consequentialist view of failure.Footnote 14
This is apparent in recent work by Carroll and co-authors. Through their study of material objects and failure, they describe failure as ‘a situation or thing as [sic] not being in accord with expectation’.Footnote 15 According to van Lente and Rip, expectations amount to ‘prospective structures’ that inform ‘statements, brief stories and scenarios’.Footnote 16 It is expectation, rather than anticipation or hope, then, that is central to failure. Unlike expectation, anticipation and hope do not provide a sense of how things ought to be, so much as how they could be or how an individual or group would like them to be.Footnote 17 Indeed, as Bryant and Knight explain: ‘We expect because of what the past has taught us to expect … [Expectation] awakens a sense of how things ought to be, given particular conditions’.Footnote 18
This normative dimension distinguishes expectation from other future-oriented concepts and furnishes ‘a standard for evaluation’, for whether a situation is ‘good or bad, desirable or undesirable’,Footnote 19 and, relatedly, a failure. Indeed, for Appadurai ‘[t]he most important thing about failure is that it is not a fact but a judgment’.Footnote 20 Expectations rely on the past to inform a normative view of some future situation or thing, such as that it will be safe. When, through the application of calculative techniques that determine compliance with the standard for evaluation, this comes to be seen as thwarted, there is a judgment of failure.Footnote 21 Expectations, and hence a key ground for establishing failure, are built into regulatory framingsFootnote 22 and the targets of regulation.Footnote 23
These insights can be applied and developed through the example of health research regulation for medical devices. In this instance, technological risk, i.e. safety, provides the framing for medical devices within the applicable legislation and engenders an expectation of safety.Footnote 24 However, in respect of metal-on-metal hips and vaginal mesh, harm occurred, and the expectation of safety was thwarted downstream once these medical devices were in use.
Harm was consequent, seemingly in large part, on the classification of metal-on-metal hips and vaginal mesh as Class IIb devices. Class IIb devices are medium to high-risk devices, usually those implanted in the body for thirty days or longer. This classification meant that manufacturers could rely on substantial equivalence to existing products to demonstrate conformity with general safety and performance requirements. These requirements set expectations for manufacturers and regulators to demonstrate safety, both of the device and for the person in whom it is implanted. Substantial equivalence obviates the need for health research involving humans in the form of a clinical investigation.
It is noted in one BMJ editorial that this route ‘failed to protect patients from substantial harm’.Footnote 25 Heneghan et al. point out that in respect of approvals by the Food and Drug Administration in the USA, which are largely mirrored in the European Union (EU): ‘Transvaginal mesh products for pelvic organ prolapse have been approved on the basis of weak evidence over the last 20 years’.Footnote 26 This study traced the origins of sixty-one surgical mesh implants to just two original devices approved in the USA in 1985 and 1996. The reliance on substantial equivalence meant that safety and performance data came from implants that were already on the market, sometimes for decades, and that were no longer an accurate predicate. In other words, on the basis of past experience – specifically, of ‘substantially equivalent’ medical devices – there was an unrealistic expectation that safety would be ensured through this route, and that further research involving human participants was unnecessary.
Stakeholders reported adverse events including: ‘Pain, impaired mobility, recurrent infections, incontinence/urinary frequency, prolapse, fistula formation, sexual and relationship difficulties, depression, social withdrawal or exclusion/loneliness and lethargy’.Footnote 27 On this basis, stakeholders, including patient groups, demanded regulatory change. Within the EU, new legislation was introduced, largely in response to these events. The specific legislation applicable to the examples considered in this chapter, the Medical Devices Regulation (MDR),Footnote 28 applies from 26 May 2020 (Article 123(2) MDR).
The legislation reclassifies metal-on-metal hips and vaginal mesh as Class III devices. Class III devices are high-risk, invasive, long-term devices. Future manufacturers of these devices will, in general, have to carry out clinical investigations to demonstrate conformity with regulatory requirements (Recital 63 MDR). The new legislation devotes a whole chapter to clinical investigations, and thus to safety. The legislation is deemed to provide a ‘fundamental revision’ to ‘establish a robust, transparent, predictable and sustainable regulatory framework for medical devices which ensures a high level of safety and health whilst supporting innovation’ (Recital 1 MDR). One interpretation of the legislation is that it is a direct response to problems in health research for medical devices, intended to provide ‘a better guarantee for the safety of medical devices, and to restore the loss of confidence that followed high profile scandals around widely used hip, breast, and vaginal mesh devices’.Footnote 29
As regards metal-on-metal hips and vaginal mesh, however, there has been little or no suggestion of failure by those formally responsible, who might be held accountable if there were – perhaps especially if it could be said that there was any plausible causal contribution by them towards harm. Instead, the example of medical devices demonstrates how the construction of failure does not necessarily hinge on official accounts of harm as amounting to ‘failure’. This is apparent in the various quotations from non-regulators noted above. As Hutter and Lloyd-Bostock put it, these are ‘terms in which events are construed or described in the media or in political discourse or by those involved in the event’. As they continue, what matters is an ‘event’s construction, interpretation and categorisation’.Footnote 30
Failure is an interpretation and judgment of harm. Put differently, ‘failure’ arises through an assessment of harm undertaken through calculative techniques and judgments. Harm becomes refracted through these. At a certain point, the expectations of safety built into the framing are understood by stakeholders as thwarted, and the harm becomes understood as a failure.Footnote 31 Official discourses are significant, not least because they help to set expectations of safety. But these discourses do not necessarily control stakeholder interpretations and knowledge of harm, or how harms thwart expectations of safety and lead to the construction of failure.Footnote 32
In what follows, I shift attention to the lacunae and blind spots in the knowledge base for the regulation of medical devices, which are made apparent by the harm and failure just described. I outline these missing elements before turning to discuss the significance of failure for improving health research regulation.
16.3 Using Failure to Address the Systemic Causes of Harm
Failure, at its root, emerges from the limited knowledge base for health research regulation: for medical devices, as for other areas framed by technological risk, that base is derived from an archive of past experience and scientific–technical knowledge. The focus on performance (i.e. whether the device performs as designed and intended, in line with a predicate) marginalised attention to effectiveness (i.e. whether it produces a therapeutic benefit) and patient knowledge on this issue. Moreover, in relation to vaginal mesh implants, women’s knowledges and lived experiences of the devices implanted within them have tended to be sidelined or even overlooked. The centrality of the male body within research and models of pain, and gender-based presumptions about pain,Footnote 33 help to explain the time taken to recognise a safety problem in respect of these medical devices, and the gaping hole in research and knowledge.
Another part of the explanation for the latter problem is the lengthy delay before embodied knowledge and experiences of pain were reported and recognised – a delay that effectively sidelined and ignored those experiences. New guidance on vaginal mesh in the United Kingdom (UK) has faced criticism on gender-based lines. The guidance cites safety concerns and recommends that vaginal mesh should not be used to treat vaginal prolapse. However, as the UK Parliament’s All Party Parliamentary Group on Surgical Mesh Implants said, the guidelines ‘disregard mesh-injured women’s experiences by stating that there is no long-term evidence of adverse effects’.Footnote 34
The latter may amount to epistemic injustice, what Fricker describes as a ‘wrong done to someone specifically in their capacity as a knower’.Footnote 35 Beyond being a harm in itself, epistemic injustice may limit stakeholders’ ability to contribute towards regulation, leading to other kinds of harm and failure. This is especially true in the case of health research regulation, where stakeholders may be directly or indirectly harmed by practices and decisions that are grounded on a limited knowledge base. Moreover, even in respect of the EU’s new legislation on medical devices, doubts remain as to whether it will prevent future harms, and thus failures, similar to those mentioned above. Indeed, the only medical devices that are required to evidence therapeutic benefit or efficacy in controlled conditions before marketing are those that incorporate medicinal products.Footnote 36
A deeper explanation for the marginalisation of stakeholder knowledges of harm, and a key underpinning for failure, lies in the organisation of knowledge production. Hurlbut describes how: ‘Framed as epistemic matters – that is, as problems of properly assessing the risks of novel technological constructions – problems of governance become questions for experts’.Footnote 37 This framing constructs a hierarchy of knowledge that privileges credentialised knowledge and expertise, while marginalising those deemed inexpert or ‘lay’. Bioethics plays a key role here. As a field, bioethics tends to focus on technological development within biomedicine and principles of individual ethical conduct or so-called ‘quandary ethics’, rather than systemic issues related to epistemic – or social – justice. Consequently, bioethics often privileges and bolsters scientific–technical knowledge, erases social context and renders ‘social’ elements as little more than ‘epiphenomena’.Footnote 38 In this setting, stakeholder knowledges and forms of expertise relating to harm are, as Foucault explained, ‘disqualified … [as] naïve knowledges, hierarchically inferior knowledges, knowledges that are below the required level of erudition or scientificity’.Footnote 39
The specific contemporary cultural resonance of the language of failure means that it can be used as a prompt to overcome this marginalisation and improve the knowledge base for regulation. Specifically, the language of failure can be used to generate a risk to organisational standing and reputation. Adverse public perceptions may cast failure as regulatory failure, effectively framing regulators as ‘part of the cause of disasters and crises’.Footnote 40 A perception of regulatory failure thus has key implications for the accountability and legitimacy of regulation and regulators – and is therefore a perception they seek to avoid. Relatedly, regulators want to avoid the shaming and blaming that often accompany talk of failure. Blaming can even amplifyFootnote 41 or extend the duration of an institutional risk to standing and reputation. This may produce a crisis for regulation, including for its legitimacy, quite apart from any interpretation and judgment of failure or regulatory failure.
The risk posed by failure to standing and reputation may prompt the integration of stakeholder knowledges with the scientific–technical knowledges that currently underpin regulation. The potential to use failure in this way is already apparent in the examples above, and perhaps especially vaginal mesh. Stakeholders have been largely successful in presenting their knowledges of harm, placing a spotlight on health research regulation and demanding change to prevent future failure.
Despite the limitations within much bioethics scholarship, there is a growing range of approaches to injustice, most recently and notably those centred on vulnerability, within which embodied risk and experiential knowledge are central.Footnote 42 These approaches are buttressed by a developing scientific understanding of the significance of environmental factors to genetic predisposition to vulnerability and embodied risk.Footnote 43 Further, within such approaches, the human body and experience are foregrounded precisely to recast the objects of bioethical concern, with the goal of prompting the state to fulfil its responsibilities in respect of rights.Footnote 44 In the context of health research, such scholarship can be leveraged to counter the lack of alertness and communicative failures for which institutions and powerful people must take responsibility,Footnote 45 and to expand the knowledges that count in regulation.
There are mechanisms to facilitate the integration of stakeholder knowledges with scientific–technical knowledges and improve health research for medical devices. Further attention to effectiveness (i.e. whether the device produces a therapeutic benefit) could yield important additional data on top of performance (i.e. whether the device performs as designed and intended). As with clinical trials for medicines, which produce data to demonstrate safety, quality and efficacy, this would require far more involvement and data from device recipients. Recipient involvement and data could come pre- or post-marketing – or both. Involvement pre-marketing seems both desirable and possible:
The manufacturers’ argument that [randomised controlled trials] are often infeasible and do not represent the gold standard for [medical device] research is clearly refuted. As high-quality evidence is increasingly common for pre-market studies, it is obviously worthwhile to secure these standards through the [Medical Devices Regulation] in Europe and similar regulations in other countries.Footnote 46
One proposed model for long-term implantable devices, such as those discussed in this chapter, involves providing limited access to them through temporary licences that restrict their use to clinical evaluations, with long-term follow-up of at least five years. Wider access could be provided once safety, performance and efficacy have been adequately demonstrated. In addition, wider public access to medical device patient registries, including the EU’s Eudamed database, could be provided so as to ensure transparency, open up public discourse around safety and tackle epistemic injustice.Footnote 47
16.4 Conclusion
In this chapter, I described how failure is constructed and becomes recognised through processes that determine whether harm has thwarted the expectation of safety built into technological framings of regulation. Laurie is one of the few scholars to illuminate not only how health research regulation transforms its participants into instruments, but also how this may underlie failure:
if we fail to see involvement in health research as an essentially transformative experience, then we blind ourselves to many of the human dimensions of health research. More worryingly, we run the risk of overlooking deeper explanations about why some projects fail and why the entire enterprise continues to operate sub-optimally.Footnote 48
By looking at the organisation of knowledge that supports regulatory framings of medical devices, it becomes clear how the marginalisation of stakeholder knowledge may provide a deeper explanation for harm and failure. Failure can be used to prompt the take-up of stakeholder knowledges of harm in regulation, by recasting regulation or using its mechanisms differently in light of those knowledges, so as to better anticipate and prevent future harm and failure, and enable success. See further on users’ experiences, Harmon, Chapter 39, this volume.
Why, then, has more not been done to ensure epistemic integration as a way to enhance regulatory capacities to anticipate and prevent failure? Epistemic integration would involve bringing stakeholders within regulation via their knowledges. As such, epistemic integration would seem to undermine the dominant position of those deemed expert within extant processes. Knowledge of harm becomes re-problematised: what knowledges from across society are required by regulation in order to ensure its practices are ethical and legitimate? Integration of diverse knowledges might reveal to society at large the limits of current regulation in dealing with risk and uncertainty. More deeply, epistemic integration would challenge modernist values concerning the importance of empirically derived knowledge and the efficacy of society’s technological ‘fixes’ in addressing its problems. However, scientific–technical knowledge and expertise would still be necessary in order to discipline ‘lay’ knowledges and ensure their integration within the epistemic foundations of decision-making. To resist epistemic integration is, therefore, essentially to bolster extant power relations. As the analysis in this chapter suggests, these relations are actually antithetical to addressing failure and maintaining the protections that are central to ethical and legitimate health research and regulation more generally.