
Assessing the performance of mental health service facilities for meeting patient priorities and health service responsiveness

Published online by Cambridge University Press:  25 May 2016

A. Bramesfeld*
Affiliation:
Institute for Epidemiology, Social Medicine and Health System Research, Hannover Medical School, Hannover, Germany
C. Stegbauer
Affiliation:
AQUA Institute for Applied Quality Improvement and Research in Health care GmbH, Maschmühlenweg 8-10, 37073 Göttingen, Germany
*
*Address for correspondence: A. Bramesfeld, Institute for Epidemiology, Social Medicine and Health System Research, Hannover Medical School, Hannover, Germany. (Email: [email protected])

Abstract

The World Health Organization has defined health service responsiveness as one of the key objectives of health systems. Health service responsiveness relates to the ability to respond to service users' legitimate expectations on non-medical issues when they come into contact with the services of a healthcare system. It is defined by two areas: showing respect for persons and patient orientation. Health service responsiveness is particularly relevant to mental health services, owing to the specific vulnerability of mental health patients, but also because it matches what mental health patients consider to be good quality of care and what they prioritise when seeking healthcare. As (mental) health service responsiveness applies equally to all services concerned, it would be suitable as a universal indicator of the quality of services' performance. However, performance monitoring programmes in mental healthcare rarely assess health service performance with respect to meeting patient priorities. This is in part because patient priorities are underrepresented as outcomes in studies that evaluate service provision. The lack of studies using patient priorities as outcomes carries over into evidence-based guidelines and, subsequently, into the underrepresentation of patient priorities in performance monitoring. Possible ways out of this situation include more intervention studies using patient priorities as outcomes, considering evidence from qualitative studies in guideline development, and developing performance monitoring programmes along the patient pathway and at key points of relevance for service quality from a patient perspective.

Type
Editorials
Copyright
Copyright © Cambridge University Press 2016 

What is good healthcare? More than 15 years ago the World Health Organization (WHO) tackled this question in the World Health Report 2000, ‘Health Systems: Improving Performance’ (WHO, 2000). In this report, health systems were evaluated and benchmarked on characteristics that WHO defined as contributing to the three key objectives of healthcare systems:

  • ‘improving health’, as clearly the main objective of health systems;

  • fairness, as the ability of health systems to treat patients equally and without discrimination, as well as to protect them from the financial risks of illness; and

  • the quality of health services within a system in terms of health service responsiveness.

This editorial focuses on one of these key objectives of health systems: health service responsiveness. It was defined as a parameter relating to a healthcare system's ability (as the sum of its services) to respond to service users’ legitimate expectations on non-medical issues when they come into contact with the health system (Valentine et al. 2003). The concept of health service responsiveness comprises two major areas: showing respect for persons and patient orientation. These areas are sub-divided into eight domains (Valentine et al. 2003).

The domains related to showing respect for persons include:

  • Confidentiality of personal information: being able to talk privately to healthcare providers; personal information is kept confidential.

  • Autonomy and participation in decisions: involvement of patients in decisions about their healthcare or treatment, getting information about other treatments and tests.

  • Clarity of communication: how clearly healthcare providers explain things to patients, being granted enough time to ask questions about health problems or treatment.

  • Dignity: respectful treatment and communication, e.g., being greeted and talked to respectfully, privacy during physical examinations and treatments is respected.

The domains belonging to patient orientation comprise:

  • Prompt attention: travelling time to hospital and other services, amount of time spent waiting to be attended to.

  • Choice of healthcare provider: freedom to choose the healthcare providers whom the patient feels confident with.

  • Access to family and community support when in hospital: contact with the outside world and maintenance of regular activities, e.g., ease of having family and friends visit, staying in contact with the outside world.

  • Quality of basic amenities: cleanliness of the rooms inside the facility, including toilets, amount of space patients have.

In addition to these eight domains identified by WHO, research on the applicability of the health service responsiveness concept to different health system contexts revealed that, for chronic and mental healthcare, the concept needs to be extended by a domain related to coordination of services (Bramesfeld et al. 2007; Röttger et al. 2014; Forouzan et al. 2016). This domain also belongs to the area of patient orientation. Not only has the health service responsiveness concept proven applicable to mental healthcare; its domains, including coordination, also match the healthcare priorities of patients using mental health services (Stegbauer et al. 2015).

Health service responsiveness measures distinct patient experiences with non-medical health issues in the domains listed above. It thus seeks to relate patients’ experiences to a common set of standards of what patients legitimately expect when coming into contact with the system and its services (Zastowny et al. 1995). This distinguishes health service responsiveness from patient satisfaction: while patient satisfaction asks for patients’ subjective judgement of the services they receive, health service responsiveness assesses patients’ objective experience with these services, regardless of whether they are satisfied with what they receive or not (Valentine et al. 2003).
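To make this measurement logic concrete, the following is a minimal sketch in Python of how facility-level responsiveness scores could be aggregated from patient-experience data of this kind. The nine domain names follow the WHO concept plus the coordination domain discussed above; the 0–3 rating scale, the data layout and the function `domain_scores` are illustrative assumptions, not an instrument from the cited literature.

```python
from statistics import mean

# The two areas and nine domains of (mental) health service responsiveness,
# including the coordination domain proposed for chronic and mental healthcare.
DOMAINS = {
    "respect_for_persons": [
        "confidentiality", "autonomy_and_participation",
        "clarity_of_communication", "dignity",
    ],
    "patient_orientation": [
        "prompt_attention", "choice_of_provider",
        "family_and_community_support", "basic_amenities",
        "coordination",  # assumed extension, see text above
    ],
}

def domain_scores(responses):
    """Average each domain's ratings over all respondents.

    Each response maps a domain name to a rating on an assumed
    0 (poor experience) to 3 (very good experience) scale.
    """
    all_domains = [d for area in DOMAINS.values() for d in area]
    return {
        d: mean(r[d] for r in responses if d in r)
        for d in all_domains
        if any(d in r for r in responses)
    }

# Fabricated example: two patient questionnaires from one facility.
patients = [
    {"dignity": 3, "clarity_of_communication": 2, "coordination": 1},
    {"dignity": 2, "clarity_of_communication": 3, "coordination": 2},
]
print(domain_scores(patients))
# {'clarity_of_communication': 2.5, 'dignity': 2.5, 'coordination': 1.5}
```

Averaging per domain, rather than collapsing everything into a single satisfaction number, preserves the distinct experiences that the responsiveness concept is designed to keep apart.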

That mental health services are responsive to service users’ expectations is highly relevant, as mentally ill patients are particularly vulnerable to not being treated as they wish and to human rights violations, owing to the characteristics of their illness and the stigma attached to it (Loh et al. 2007). In addition, what is considered good quality of care in long-term mental healthcare matches the health service responsiveness domains (Taylor et al. 2009). Only health services that act according to patients’ priorities, meet their expectations and respect their rights will be used optimally. Research has shown that low health service responsiveness is related to patients avoiding medical care despite being in need of it (Röttger et al. 2016).

WHO's approach of introducing health service responsiveness as a quality parameter for the performance of health services and systems seems more relevant today than ever before: evidence is growing in all medical specialties, including mental healthcare, that there is a gap (‘chasm’) between what is considered optimal care and what is in fact provided (Institute of Medicine, 2001; Gaebel et al. 2009). In response to the awareness of this ‘chasm’, more and more programmes to measure the performance of healthcare services are being developed and implemented across healthcare systems. Scrutiny of health service performance by quality monitoring programmes can be undertaken voluntarily or mandated by health systems. Whether voluntary or mandatory, quality monitoring programmes aim to measure quality of care using quantifiable indicators that are assessed at the level of the single facility providing a health service. Quantifiable indicators make it possible to compare the performance of health service facilities both longitudinally over time and between facilities.

Over the last 10 years, a growing number of quality monitoring programmes have been implemented that measure the performance of mental health service facilities using indicators. At a national level this is usually done in the context of a broader national quality monitoring programme designed to monitor performance for different health problems (CIHI, 2013; NICE, 2013; Willms et al. 2013; ACHS, 2014; BMA and NHS Employers, 2014). Although most quality monitoring programmes in mental healthcare take some aspects of health service responsiveness, such as showing respect for persons and patient orientation, into consideration, most of them focus on indicators related to clinical treatment in terms of adherence to clinical guidelines. Such indicators, for example treatment rates or complication rates, usually relate only to specific patient populations. They may require adjustment for morbidity and other risks to allow a fair comparison between service facilities. In contrast, assessing and comparing the performance of (mental) health service facilities by their health service responsiveness means assessing and comparing them by a universal indicator that needs no adjustment for specific risks. Most of the health service responsiveness domains relate to all mental health patients and facilities regardless of their specific characteristics and risks; e.g., being treated and talked to respectfully (domain dignity) or the cleanliness of the rooms inside the facility, including the toilets (domain basic amenities), applies equally to all (mental) health facilities and to all patient populations.
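As a minimal sketch of what such an unadjusted, universal comparison could look like, the snippet below ranks facilities by the share of patients reporting respectful treatment (domain dignity). The facility identifiers, the dichotomous coding and the function `indicator_rate` are fabricated for illustration and are not drawn from any of the monitoring programmes cited above.

```python
# Compare facilities on a universal responsiveness indicator: the share
# of patients reporting respectful treatment. All data are fabricated;
# unlike clinical indicators such as complication rates, no morbidity
# or risk adjustment is applied before comparison.

facility_reports = {
    # facility id -> dichotomous patient reports
    # (1 = treated respectfully, 0 = not)
    "facility_A": [1, 1, 0, 1, 1, 1],
    "facility_B": [1, 0, 0, 1, 0, 1],
}

def indicator_rate(reports):
    """Proportion of positive patient reports for one facility."""
    return sum(reports) / len(reports)

# Cross-sectional benchmarking between facilities; applying the same
# computation to successive survey waves yields longitudinal comparison.
ranked = sorted(facility_reports,
                key=lambda f: indicator_rate(facility_reports[f]),
                reverse=True)
for fid in ranked:
    print(fid, round(indicator_rate(facility_reports[fid]), 2))
# facility_A 0.83
# facility_B 0.5
```

The absence of any risk-adjustment step is the point: because the domain applies equally to all patients and facilities, the raw rate is directly comparable across patient populations.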

It is no coincidence that quality monitoring programmes in (mental) healthcare focus on clinical outcomes, such as symptom reduction, functioning or use of specific healthcare services, rather than on patient expectations or priorities. Quality monitoring programmes are usually developed with reference to clinical guidelines. Clinical guidelines, in turn, are developed on the basis of the best available scientific evidence. The strongest evidence for interventions to be included in guidelines usually stems from controlled or, even better, randomised controlled trials. Common outcomes of these trials include symptom reduction, improvement of daily functioning and use of healthcare services, mainly hospital services (Catts et al. 2010; Stegbauer et al. 2015). Outcome measures related to patient experiences or priorities are underrepresented in studies that evaluate the effectiveness of (mental) healthcare interventions. This reflects a concept of healthcare effectiveness oriented towards the priorities of healthcare providers and payers rather than those of patients. The underrepresentation of patient priorities and expectations in outcome studies has been explained by patient priorities being ‘soft’ parameters, by a lack of evidence (in particular quantitative evidence), by a lack of reliable assessment instruments, and by the high effort involved in assessing patient priorities, especially when this has to be done through patient surveys (Pohontsch et al. 2015). Further, addressing patients’ priorities in mental healthcare scientifically is a rather new field, with the first studies published only 20 years ago (Stegbauer et al. 2015).

However valid these reasons may be, it needs to be acknowledged that the lack of recognition of patient priorities and experiences as outcomes in studies evaluating (mental) healthcare interventions carries over into guidelines and, from guidelines, into quality monitoring programmes. Without studies that consider patient experiences or priorities as outcomes, patient priorities and expectations will continue to be underrepresented in guidelines and subsequently in quality monitoring.

There are several ways to deal with this situation:

Firstly, effectiveness studies that use patient experiences or priorities as important outcomes need to be promoted. There are examples showing that controlled studies can be conducted with key outcomes relating to patient experiences, such as empowerment (Stierlin et al. 2014). However, this requires a shift in the view of what matters in healthcare from the perspective of providers and funding agencies towards that of patients.

Secondly, evidence from qualitative studies should be used more as a source in the development of clinical guidelines. A transparent method for evaluating the strength of evidence provided by qualitative studies has been proposed with the CERQual (Confidence in the Evidence from Reviews of Qualitative Research) approach (Lewin et al. 2015). Experience with applying the CERQual methodology to qualitative evidence in guideline development on women's health issues in developing countries showed that systematically considering qualitative evidence in guideline development leads to guidelines that are easier to implement and more patient-centred (Bohren et al. 2015; Tunçalp, 2015).

Thirdly, performance measurement of (mental) healthcare services should not be oriented solely towards clinical guidelines but should also consider the patient pathway. The patient pathway visualises the typical route of a patient with a specific medical problem through medical and associated services. Along the patient pathway, key points of relevance for quality of care from a patient perspective can be identified (AQUA, 2015). Using the patient pathway as the basis for the systematic development of requirements and indicators ensures that service quality is viewed not only in terms of issues with a traditional evidence base but also in terms of the ‘softer’ issues that matter from a patient perspective, as sketched below.
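As an illustrative sketch of this idea, the snippet below represents a patient pathway as ordered stages, each carrying patient-perspective key points from which candidate indicators could be derived. The stage names and key points are invented examples and are not taken from the AQUA methods cited above.

```python
# Illustrative sketch: a patient pathway as ordered stages, each with
# key points of relevance for quality from a patient perspective.
from dataclasses import dataclass

@dataclass
class PathwayStage:
    name: str
    patient_key_points: tuple  # patient-perspective quality issues

pathway = [
    PathwayStage("help-seeking", ("prompt attention", "choice of provider")),
    PathwayStage("diagnosis", ("clarity of communication", "dignity")),
    PathwayStage("treatment", ("autonomy and participation in decisions",)),
    PathwayStage("discharge and follow-up", ("coordination of services",)),
]

# Candidate quality indicators are then derived stage by stage, so that
# 'softer' patient-perspective issues sit alongside guideline-based ones.
for stage in pathway:
    for point in stage.patient_key_points:
        print(f"{stage.name}: candidate indicator for '{point}'")
```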

Finally, considering the patient perspective as represented by the health service responsiveness domains requires assessing patients' experiences through a patient survey. At present, only a minority of quality monitoring programmes in (mental) healthcare also assess quality of care by asking patients about their experience with a specific service or facility. By not asking patients about their experience, an important information source for assessing quality of care is left out. Patients are known to be clear definers of good quality, good evaluators of the healthcare they receive and good rapporteurs of their experiences with healthcare (Donabedian, 1992). Patients’ views are crucial for judging patient-relevant outcomes, and patients are often the only tie between different healthcare sectors and services (Blum et al. 2001; Ludt et al. 2014).

On the other hand, it needs to be acknowledged how difficult it is to implement patient surveys that produce reliable and valid outputs within quality monitoring programmes (Willms et al. 2013). Therefore, although a patient survey remains the gold standard in performance assessment, more indirect measures of respect for patients’ priorities also need to be considered. These could include peer reviews (e.g., checking basic amenities) and provider-reported indicators. Indicators function not only by measuring and benchmarking but also by drawing attention to specific processes.

Taking into consideration the quality chasm present in (mental) healthcare, and recalling the United Nations’ affirmation that individuals with physical and mental disabilities, including mentally ill persons, have the right to health services of the highest quality (Gostin et al. 2003), it is high time to start measuring the quality of care of mental health service facilities systematically. A core feature for assessing and benchmarking the quality of mental health service facilities is their performance in showing respect for persons and patient orientation. Incentives for patient orientation within mental healthcare are usually weak (Zechmeister et al. 2002); however, evidence suggests that including aspects of patient orientation and showing respect for persons in systematic performance measurement can be an effective means of improving performance (Killaspy et al. 2013).

Conflict of interest

None.

Acknowledgement

We are grateful to Sharon Janicki for editing the English.

Financial support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

References

ACHS (2014). Australasian Clinical Indicator Report: 2006–2013, 15th edn. The Australian Council on Healthcare Standards, Health Services Research Group, University of Newcastle: Ultimo, New South Wales.
AQUA (2015). Allgemeine Methoden im Rahmen der sektorenübergreifenden Qualitätssicherung im Gesundheitswesen nach §137a SGB V, Version 4.0 (Stand: 17. Februar 2015). AQUA – Institut für angewandte Qualitätsförderung und Forschung im Gesundheitswesen: Göttingen.
Blum, K, Satzinger, W, Buck, R (2001). Patientenbefragungen und Qualitätsmanagement. Eine Einführung in die Thematik. In Patientenbefragungen in Krankenhäusern (ed. Satzinger, W, Trojan, A and Kellermann-Mühlhoff, P), pp. 25–40. Asgard-Verlag: St. Augustin.
BMA & NHS Employers (2014). 2014/15 General Medical Services (GMS) Contract Quality and Outcomes Framework (QOF): Guidance for GMS Contract 2014/15. British Medical Association, National Health Service Confederation: London, UK.
Bohren, MA, Vogel, JP, Hunter, EC, Lutsiv, O, Makh, SK, Souza, JP, Aguiar, C, Saraiva Coneglian, F, Diniz, AL, Tunçalp, Ö, Javadi, D, Oladapo, OT, Khosla, R, Hindin, MJ, Gülmezoglu, AM (2015). The mistreatment of women during childbirth in health facilities globally: a mixed-method systematic review. PLoS Medicine 12, e1001847.
Bramesfeld, A, Klippel, U, Seidel, G, Schwartz, FW, Dierks, ML (2007). How do patients expect the mental health service system to act? Testing the WHO responsiveness concept for its appropriateness in mental health care. Social Science and Medicine 65, 880–885.
Catts, SV, O'Toole, BI, Carr, VJ, Lewin, T, Neil, A, Harris, MG, Frost, ADJ, Crissman, BR, Eadie, K, Evans, RW (2010). Appraising evidence for intervention effectiveness in early psychosis: conceptual framework and review of evaluation approaches. Australian and New Zealand Journal of Psychiatry 44, 195–219.
CIHI (2013). Health Indicators 2013. Canadian Institute for Health Information: Ottawa, ON.
Donabedian, A (1992). The Lichfield Lecture. Quality assurance in health care: consumers’ role. Quality in Health Care 1, 247–251.
Forouzan, S, Padyab, M, Rafiey, H, Ghazinour, M, Dejman, M, San Sebastian, M (2016). Measuring the mental health-care system responsiveness: results of an outpatient survey in Tehran. Frontiers in Public Health 3, 285.
Gaebel, W, Janssen, B, Zielasek, J (2009). Mental health quality, outcome measurement, and improvement in Germany. Current Opinion in Psychiatry 22, 636–642.
Gostin, L, Hodge, J, Valentine, N, Nygren-Krug, H (2003). The Domains of Health Responsiveness. A Human Rights Analysis. World Health Organization. Retrieved 14 March 2016 from http://www.who.int/hhr/Series_2%20Responsiveness.pdf.
Institute of Medicine (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press: Washington, DC.
Killaspy, H, Marston, L, Omar, RZ, Green, N, Harrison, I, Lean, M, Holloway, F, Craig, T, Leavey, D, King, M (2013). Service quality and clinical outcomes: an example from mental health rehabilitation services in England. British Journal of Psychiatry 202, 28–34.
Lewin, S, Glenton, C, Munthe-Kaas, H, Carlsen, B, Colvin, CJ, Gülmezoglu, M, Noyes, J, Booth, A, Garside, R, Rashidian, A (2015). Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Medicine 12, e1001895.
Loh, A, Leonhart, R, Wills, CE, Simon, D, Härter, M (2007). The impact of patient participation on adherence and clinical outcome in primary care of depression. Patient Education and Counseling 65, 69–78.
Ludt, S, Heiss, F, Glassen, K, Noest, S, Klingenberg, A, Ose, D, Szecsenyi, J (2014). Die Patientenperspektive jenseits ambulant-stationärer Sektorengrenzen – Was ist Patientinnen und Patienten in der sektorenübergreifenden Versorgung wichtig? Das Gesundheitswesen 76, 359–365.
NICE (2013). Clinical Commissioning Group Outcomes Indicator Set Indicator Rationale. National Institute for Health and Care Excellence. Retrieved 4 April 2016 from https://www.nice.org.uk/Media/Default/standards-and-indicators/ccgois%20indicators%20key%20documents/NICE%20CCG%20OIS%20indicator%20rationale%202013-1.pdf.
Pohontsch, NJ, Herzberg, H, Joos, S, Welti, F, Scherer, M, Blozik, E (2015). The professional perspective on patient involvement in the development of quality indicators: a qualitative analysis using the example of chronic heart failure in the German health care setting. Patient Preference and Adherence 9, 151–159.
Röttger, J, Blümel, M, Fuchs, S, Busse, R (2014). Assessing the responsiveness of chronic disease care – is the World Health Organization's concept of health system responsiveness applicable? Social Science and Medicine 113, 87–94.
Röttger, J, Blümel, M, Köppen, J, Busse, R (2016). Forgone care among chronically ill patients in Germany – results from a cross-sectional survey with 15,565 individuals. Health Policy 120, 170–178.
Stegbauer, C, Szecsenyi, J, Bramesfeld, A (2015). Studien zur Evaluation ambulanter psychiatrischer Versorgung: Werden die Prioritäten psychisch kranker Menschen berücksichtigt? Psychiatrische Praxis 42, 1–8.
Stierlin, AS, Herder, K, Helmbrecht, MJ, Prinz, S, Walendzik, J, Holzmann, M, Becker, T, Schutzwohl, M, Kilian, R (2014). Effectiveness and efficiency of integrated mental health care programmes in Germany: study protocol of an observational controlled trial. BMC Psychiatry 14, 163.
Taylor, TL, Killaspy, H, Wright, C, Turton, P, White, S, Kallert, TW, Schuster, M, Cervilla, JA, Brangier, P, Raboch, J, Kališová, L, Onchev, G, Dimitrov, H, Mezzina, R, Wolf, K, Wiersma, D, Visser, E, Kiejna, A, Piotrowski, P, Ploumpidis, D, Gonidakis, F, Caldas-de-Almeida, J, Cardoso, G, King, MB (2009). A systematic review of the international published literature relating to quality of institutional care for people with longer term mental health problems. BMC Psychiatry 9, 55.
Tunçalp, Ö (2015). Incorporating qualitative research into guideline development: the way forward. In ECIBC Plenary Conference 2015. European Commission.
Valentine, N, De Silva, A, Kawabata, K, Darby, C, Murray, CJL, Evans, DB (2003). Health system responsiveness: concepts, domains and operationalization. In Health Systems Performance Assessment. Debates, Methods and Empiricism (ed. Murray, CJL and Evans, DB). World Health Organization: Geneva.
WHO (2000). World Health Report 2000. Health Systems: Improving Performance. World Health Organization: Geneva.
Willms, G, Bramesfeld, A, Pottkämper, K, Broge, B, Szecsenyi, J (2013). Aktuelle Herausforderungen der externen Qualitätssicherung im deutschen Gesundheitswesen. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 107, 523–527.
Zastowny, TR, Stratmann, WC, Adams, EH, Fox, ML (1995). Patient satisfaction and experience with health services and quality of care. Quality Management in Health Care 3, 50–61.
Zechmeister, I, Oesterle, A, Denk, P, Katschnig, H (2002). Incentives in financing mental health care in Austria. Journal of Mental Health Policy and Economics 5, 121–129.