Clinicians and health service managers have long been held responsible not only for delivering high-quality care but also for continuously improving it. When the latter proves challenging, as it does most of the time, it is tempting to frame the problem as a failure of professionalism or managerialism. However, rectifying deeply embedded deficiencies in the ways that care is organised and delivered, and in clinical and experiential outcomes for patients, is not an easy task. Too often, efforts to improve care do not work or are only partially successful, resources are wasted, the unintended consequences of improvement interventions are ignored and the enthusiasm of practitioners is dissipated. In recent decades, attempts to meet these challenges have led to a growing interest in the role that research can play in guiding improvement activities, and in the concept of Evidence-Informed Improvement (Walshe and Rundall, 2001).
Even a quick review of the literature in the field reveals no shortage of theory and empirical evidence describing how best to improve care (Grol et al., 2013). A recent report from The Health Foundation summarised the learning from evaluations of improvement initiatives carried out by the charity over the last decade. In all, 10 practical challenges are presented, together with evidence-based solutions that should be considered when designing and delivering improvement programmes (The Health Foundation, 2012). The problem, however, is less about what is known or not known than about what is done in practice. We know from the evidence that most interventions have a small effect size and that effective improvement requires a number of interventions to be combined, and yet the quest for a single silver bullet remains undimmed (Grol et al., 2013). We know that reporting comparative performance data can result in gaming behaviours, and yet league tables abound and their risks are often ignored rather than predicted and managed (Shekelle et al., 2008).
There are many reasons for this so-called ‘know-do’ gap, one of which is the traditional separation of those responsible for creating empirical knowledge (the research community, sometimes characterised as working in the ivory towers of academia) from those who should be making use of that knowledge (practitioners working in the aptly named ‘swampy lowlands’ of front-line care) (Marshall, 2013). This separation has led to the process of implementation being conceptualised as one of knowledge transfer from producer to user. The focus of activity has been on ‘pushing’ evidence from academic journals into the consciousness of practitioners, or on informed clinicians and managers ‘pulling’ evidence towards themselves (Rycroft-Malone, 2004). Both approaches are being pursued in increasingly sophisticated ways, for example through online evidence summaries and guidelines, and through a stronger focus on building practitioners’ skills in using evidence.
Progress has, however, been disappointingly slow, and some academics and practitioners have suggested that it might help to re-conceptualise the problem as one of knowledge creation rather than of knowledge transfer (Davies et al., 2008). The question then changes from ‘how do academics produce evidence that is so scientifically robust that its value to practitioners is self-evident?’ to ‘how do academics and practitioners work together to produce evidence that adds value to practitioners’ current thinking about how to improve what they do?’. The idea that the traditional separation of science and practice might be unhelpful, and that the most effective knowledge is ‘co-created’, is hardly new. More than 100 years ago a commentator in a leading medical journal suggested that ‘the scientific man has been too scientific and the practical man too practical, and the result has been unfortunate for both’ (Barton, 1912). Participatory approaches to conducting research have deep philosophical and historical roots (Lewin, 1946), and models such as ‘Engaged Scholarship’ (Van De Ven, 2007) and ‘Community-Based Participatory Research’ (Cornwall and Jewkes, 1995) have been used successfully in a number of sectors but have largely failed to gain traction in the health field. Hence the search for new models of participatory research that engage both practitioners and academics in a shared endeavour to address the challenges of improving care for patients.
One emerging approach is the Researcher-in-Residence Model (Marshall et al., 2014). An in-residence researcher works as an integrated member of a service-based improvement team rather than as a dispassionate observer of improvement activities. They actively negotiate their academic knowledge rather than simply presenting or imposing it, integrating it with the more applied expertise of the practitioners. The researcher brings new expertise to the table: an understanding of established research evidence (which may be general or specialised in nature) and a willingness to interpret that evidence for the local context; an understanding of the effectiveness and unintended consequences of interventions and implementation methods; theory-based expertise in models of change; and expertise in evaluating the effectiveness of improvement efforts, including advice on the relative merits of process versus outcome evaluations and of self- versus independent evaluation.
A number of examples of the model are being developed (Marshall et al., 2014): an ethnographer working as an Anthropologist-in-Residence within the executive team of a large teaching hospital, helping to address problems with clinician engagement; an operational researcher working as a Modeller-in-Residence in a paediatric cardiac surgery team, helping to find solutions to long operative waiting lists; and a Health Service Researcher-in-Residence working in an integrated care organisation to improve the impact of a number of quality improvement programmes. Other examples are emerging, each of which will allow the model to be tested and further developed.
The in-residence model is just one way of addressing the challenge of encouraging researchers to be more useful to practitioners, and practitioners to be more responsive to scientific evidence. Its focus on solving the practical problems experienced by decision makers in the health service is likely to make it attractive to health system leaders at a time of intense pressure to deliver more and better care with limited resources. There are outstanding questions about the level of experience, the most useful forms of content knowledge and the personal skills required of in-residence researchers. The institutional barriers to embedding the model need to be better understood, and the facilitators better aligned. In addition, there is a trade-off to be managed between the sense of ownership an embedded researcher develops for successfully delivering a programme of work and the independent judgement expected of an academic contributing to its evaluation. Some might regard these challenges as insurmountable, but people working in the health service need help, and applied academics have a precious skill set to offer. The time has never been riper for participative models of research.