
Clinical standards in psychiatry

How much evidence is required and how good is the evidence base?

Published online by Cambridge University Press:  02 January 2018

John Geddes
Affiliation:
Centre for Evidence-Based Mental Health, Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford OX3 7JX
Simon Wessely
Affiliation:
Department of Psychological Medicine, Guy's, King's and St Thomas' Hospital Medical School and Institute of Psychiatry, 103 Denmark Hill, London SE5 8AF


Type
Opinion and debate
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Royal College of Psychiatrists, 2000

It is impossible to avoid the plethora of clinical practice guidelines and other forms of practice policy and protocols that have been showered on psychiatrists and other mental health clinicians over the last decade. Several motivations lie behind this phenomenon - reducing the amount of unnecessary variation in clinical practice, improving clinicians' access to research evidence and summarising the available evidence to assist individual patient and clinician decision-making. With the arrival of the National Service Framework for Mental Health, it is timely to take stock of the evidence requirements for developing valid clinical standards.

The architecture of policy statements

Eddy (1990) has proposed an architecture for this kind of policy statement. It is broadly possible to distinguish three levels of statement that vary on two main dimensions: the degree of certainty about what will happen if the policy is followed (i.e. how convincing is the evidence on which it is based?); and the extent to which the patient's and clinician's preferences are both known and consistent with the likely outcomes. The three levels of statement are options, clinical practice guidelines and standards.

  (a) Options are systematically derived statements, based on systematic review, that do not attempt to make general recommendations, recognising that implementation will depend on individual and local circumstances. The value of options is that they provide decision-makers with a summary of up-to-date evidence and highlight current uncertainties.

  (b) Clinical practice guidelines are systematically derived statements that are aimed at helping individual patient and clinician decisions (Eccles et al, 1996). They usually apply to the average patient and therefore need to be applied flexibly and tailored according to local circumstances and needs, including patient preferences. To be valid for the average patient, clinical practice guidelines need to be based on a certain standard of evidence. Treatment recommendations are usually graded according to the strength of the evidence, and confident statements are usually made only when they are reasonably supported by appropriate evidence (randomised, in the case of treatment decisions). The key issue here is that, even where there is a reasonable level of evidence to make fairly general statements, there are likely to be occasions when adhering to a guideline's recommendation would do more harm than good.

  (c) Standards, on the other hand, need to be applied rigidly. Adherence to standards is one way of measuring the quality of a clinical service. To be valuable, there must be a high degree of confidence about the result of applying a standard, and patients and clinicians must agree about the desirability of the outcomes. The level of evidence must usually be very high. Sometimes, it is self-evident that a proposed standard is a good thing. For example, professionals and patients may agree on some aspects of electroconvulsive therapy suites, and may be able to create standards that should be uniformly adhered to. Similarly, there may be general agreement that patients with mental illnesses should have access to a general practitioner. The construction of valid and useful standards is much more difficult when there is less confidence about the results of applying them or less unanimity about the desirability of the outcomes. Most psychiatric treatments fall into this category. We are often reasonably sure that a treatment offers some overall benefit, on average. But we are less certain that the treatment should always be used for all patients.

Is there enough high quality evidence to produce clinical standards in psychiatry?

There is no shortage of evidence in psychiatry. Surveys of the proportion of treatment decisions for which there is randomised evidence have found a similar proportion as in other areas of medicine (Geddes et al, 1996; Summers & Kehoe, 1996). New high quality research is emerging at a sufficient rate to ensure the viability of a journal such as Evidence-Based Mental Health. However, the quality of the existing evidence is often poor and the primary studies are disorganised. The process of systematically and comprehensively grading and synthesising evidence in psychiatry has been enhanced by the development of the systematic review and meta-analysis. Since 1993, the Cochrane Collaboration has taken this process further and we now have some idea of the broad scope of evidence available in some fields. A survey of 2000 randomised controlled trials including participants suffering from schizophrenia documented some of the methodological problems (Thornley & Adams, 1998). Despite this, the North American Schizophrenia Patient Outcomes Research Team (PORT) study concluded that there was sufficient evidence to devise some practice standards for the treatment of schizophrenia (Lehman & Steinwachs, 1998b). The PORT survey of concordance with these standards found that adherence was low in many areas of care. This may imply that clinical practice is poor, or that there is too much justifiable clinical uncertainty to allow the construction of precise standards in this area (Lehman & Steinwachs, 1998a). This is particularly likely in a clinical area such as the pharmacological treatment of schizophrenia, in which there is substantial clinical uncertainty following the introduction of the new atypical antipsychotics.

It is easy to identify standards that have been created, but which should have been no more than guidelines or, more probably, options. The introduction of the Care Programme Approach (CPA) in the UK clearly stated that case management should be used for patients with severe mental illness and was therefore a clinical standard that could be audited (Department of Health, 1994). The test for this would be to ask how easy it would be for a clinician to justify the failure to use the CPA if an adverse event occurred. The independent reviews indicate how rigidly these standards are likely to be applied in practice. The CPA was not explicitly based on a systematic review of the evidence - later it was found that the overall outcomes from case management are rather unclear, although it does seem to increase admission to hospital (and there is probably no unanimity of preference for this outcome!) (Marshall et al, 1996). Another example, from a different area, is the compulsory psychological debriefing following trauma required by some organisations, which is probably ineffective and possibly harmful (Wessely et al, 1998).

Standards probably have a part to play in mental health services, and it is understandable that those who are trying to monitor the quality of services are keen to have something tangible to measure. But we consider that they need to be set at a fairly minimal level, at least nationally. They should only be attempted following adequate systematic review. It is paramount that devisers of standards do not go beyond the evidence of effectiveness and preference. Such is the current state of uncertainty that there seem to be relatively few situations in which standards relating to the provision of specific interventions would be appropriate. Examples probably do exist - for example, "patients with depression should be offered an effective treatment" - but they are rare. We agree with Eddy that: "… it is dangerous to call something a standard unless the outcomes are truly known, the preferences are truly known and the preferences are truly virtually unanimous" (Eddy, 1990). Systematic review of the evidence is a necessary first step in all cases. On the other hand, less rigid forms of policy statement, such as clinical practice guidelines and intervention options, seem a helpful way of keeping people informed of the current state of the evidence. There is also the option of creating local standards, if appropriate, from these less rigid forms of policy statement.

References

Department of Health (1994) The Health of the Nation. Key Area Handbook: Mental Illness. London: HMSO.
Eccles, M., Clapp, Z., Grimshaw, J., et al (1996) North of England evidence-based guidelines development project: methods of guideline development. British Medical Journal, 312, 760-762.
Eddy, D. M. (1990) Clinical decision making: from theory to practice. Designing a practice policy. Standards, guidelines, and options. Journal of the American Medical Association, 263, 3077-3084.
Geddes, J. R., Game, D., Jenkins, N. E., et al (1996) What proportion of primary psychiatric interventions are based on randomised evidence? Quality in Health Care, 5, 215-217.
Lehman, A. F. & Steinwachs, D. M. (1998a) Patterns of usual care for schizophrenia: initial results from the Schizophrenia Patient Outcomes Research Team (PORT) Client Survey. Schizophrenia Bulletin, 24, 11-20.
Lehman, A. F. & Steinwachs, D. M. (1998b) Translating research into practice: the Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations. Schizophrenia Bulletin, 24, 1-10.
Marshall, M., Gray, A., Lockwood, A., et al (1996) Case management for people with severe mental disorders. In Schizophrenia Module of The Cochrane Library (eds Adams, C., Anderson, J. & De Jesus Mari, J.). Oxford: Update Software.
Summers, A. & Kehoe, R. F. (1996) Is psychiatric treatment evidence-based? Lancet, 347, 409-410.
Thornley, B. & Adams, C. (1998) Content and quality of 2000 controlled trials in schizophrenia over 50 years. British Medical Journal, 317, 1181-1184.
Wessely, S., Rose, S. & Bisson, J. (1998) A systematic review of brief psychological interventions ("debriefing") for the treatment of immediate trauma related symptoms and the prevention of post-traumatic stress disorder. In The Cochrane Library. Oxford: Update Software.