
Epistemic injustice and the psychiatrist

Published online by Cambridge University Press:  05 January 2023

Brent M. Kious*
Affiliation:
Department of Psychiatry, University of Utah, 501 Chipeta Way, Salt Lake City, UT 84108, USA
Benjamin R. Lewis
Affiliation:
Department of Psychiatry, University of Utah, 501 Chipeta Way, Salt Lake City, UT 84108, USA
Scott Y. H. Kim
Affiliation:
Department of Bioethics, National Institutes of Health, 10 Center Drive, Bethesda, MD 20814, USA
*Author for correspondence: Brent M. Kious, E-mail: [email protected]

Abstract

Background

Psychiatrists depend on their patients for clinical information and are obligated to regard them as trustworthy, except in special circumstances. Nevertheless, some critics of psychiatry have argued that psychiatrists frequently perpetrate epistemic injustice against patients. Epistemic injustice is a moral wrong that involves unfairly discriminating against a person with respect to their ability to know things because of personal characteristics like gender or psychiatric diagnosis.

Methods

We review the concept of epistemic injustice and several claims that psychiatric practice is epistemically unjust.

Results

While acknowledging the risk of epistemic injustice in psychiatry and other medical fields, we argue that most concerns that psychiatric practice is epistemically unjust are unfounded.

Conclusions

The concept of epistemic injustice does not add significantly to existing standards of good clinical practice, and it could produce changes in practice that would be deleterious. Psychiatrists should resist calls for changes to clinical practice based on this type of criticism.

Type
Editorial
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

Practicing psychiatrists balance many considerations when evaluating information provided by their patients: a general need to believe what patients say, a recognition that any person's account of events can be mistaken, and the importance of maintaining a solid therapeutic relationship. Recently, commentators from within and outside of psychiatry have tried to complicate this balance by arguing that psychiatrists should strive to avoid epistemic injustice in their interactions with their patients, but that they are frequently guilty of perpetrating it (Bueter, 2019; Crichton, Carel, & Kidd, 2017; Drożdżowicz, 2021; Gagné-Julien, 2021; Harcourt, 2021; Kurs & Grinshpoon, 2018; Sanati & Kyratsous, 2015).

Epistemic injustice (EI) involves unfairly discriminating against a person with respect to their ability to know things. The concept was developed by the philosopher Miranda Fricker to articulate how persons in minoritized groups can be further marginalized in the public and private exchange of information (Fricker, 2007). But this is not merely a philosophical debate. The concept has been utilized in multiple domains to advocate for social change, including in medicine. Calls for epistemic justice in psychiatry have appeared in practice-oriented journals (Crichton et al., 2017), been touted as a foundation for clinical reform (Zisman-Ilani, Roth, & Mistler, 2021), and are closely allied with other efforts to revamp traditional approaches to care (Carrotte, Hartup, Lee-Bates, & Blanchard, 2021; Daya, Maylea, Raven, Hamilton, & Jureidini, 2020; Groot, Haveman, & Abma, 2020; Harcourt, 2021; Tate, 2019).

We think EI is illuminating in some domains, and are sympathetic to the concerns that presumably motivate those applying it to psychiatry: there is much our field has done poorly, and more it could do better. In what follows, however, we hope to persuade readers that, as a way of regulating practice, EI adds little to ordinary standards of good clinical care, while too much emphasis on it could entail serious harm to patients, providers, and their relationships.

Are psychiatrists epistemically unjust?

Fricker identifies two kinds of EI. Testimonial injustice arises when an individual's factual report about some issue is ignored or taken to be unreliable because of individual characteristics that are not related to her epistemic (knowledge-having) ability (Fricker, 2007). A patient who reports that she is in pain but who is disregarded because of her race or gender would be a victim of testimonial injustice, since her race and gender have nothing to do with her knowledge. The second kind is hermeneutic injustice, in which an individual's knowledgeable reports fail to receive adequate attention because she, her listeners, or society as a whole lack the conceptual resources to interpret them. Fricker gives the compelling example of sexual harassment: prior to the development of the concept, women's concerns about certain kinds of sexually oriented abuse in the workplace were difficult for others to credit, but became easier to recognize once the concept had been articulated (Fricker, 2007).

There are multiple levels at which one could claim that psychiatry, as a field, is prone to EI – psychiatric concepts and diagnostic criteria, the function of institutions, and what happens in actual clinical encounters. Here, because we are concerned primarily with how individual psychiatrists should comport themselves, we focus on the last of these. For similar reasons, we focus on allegations that psychiatry – or individual psychiatrists – are often guilty of testimonial injustice, and eschew further consideration of claims that the field is guilty of hermeneutic injustice (Ritunnano, 2022).

A clear example of a claim that psychiatrists perpetrate EI in their interactions with patients is given by Sanati and Kyratsous (2015), who focus on EI affecting persons with delusions. They write:

When the patient is legitimately diagnosed with having delusions, the label of delusion… is working as heuristics (sic) in his social interactions, with his testimony receiving less credibility than she (sic) would otherwise have. This can result in that (1) her true statements and non-delusional belief states are also seen as delusional, clearly missing out on some valuable knowledge about her or; (2) her delusional states being denied any intelligibility at all, at the expense of establishing therapeutic trust and having a grasp of various meaningful connections between delusions and other non-delusional beliefs and mental states. (Sanati & Kyratsous, 2015, p. 481)

They describe two cases where they see EI: a young woman with psychosis whose fears that her partner is going to abandon her are wrongly thought to be delusional, and a young man who is regarded as delusional because he threatens a person he claims had abused a relative, even though the abuse really happened (Sanati & Kyratsous, 2015). They argue that:

[T]he cases…can reflect a more systematic process. We think that the patients are not given the same credibility as a non-patient on the basis of having an illness that is so often associated with attributions of irrationality, bizarreness and incomprehensibility. (Sanati & Kyratsous, 2015, p. 483)

Crichton et al. (2017) also warn against EI in psychiatry, and similarly motivate their argument through cases. In their first case, a person with psychosis reports something that initially seems implausible (namely, that he is the son of a famous Soviet leader) but that turns out to be true. In their second case, a former nun who is found to be burning incense to ward off demons is admitted involuntarily for psychosis, but is later determined to have been acting only according to her religious beliefs. In their third case, a young man with persistent suicidal ideation is admitted involuntarily because he had been standing near a cliff, seemingly preparing to attempt suicide. The mental health tribunal later declined to commit him because he had frequently engaged in similar behavior, implying that his risk of suicide was not acutely elevated; the clinical team could have learned this had they sought collateral information.

Following from these cases, Crichton et al. argue that persons with psychiatric illnesses are likely to face EI in clinical encounters: because such persons are stigmatized, because psychiatrists subscribe to that stigma, because physicians are already predisposed by their training to take ‘hard’ evidence more seriously than ‘soft’ evidence, and because physicians have ‘epistemic power’ over patients. They conclude that psychiatrists should place more emphasis on believing their patients and make efforts to overcome their own intrinsic biases against patients, and that medical education should emphasize the importance of epistemic justice.

Most recently, Houlders, Bortolotti, and Broome (2021) have suggested that by failing to engage directly with patients’ statements about unusual experiences, including symptoms like delusions, psychiatrists can perpetrate EI, and ultimately cause persons who have those experiences – especially young persons – to question their own agency. They emphasize that failure to engage with delusions (e.g. by exploring their content) can be unjust because it elides the possibility that delusions can be meaningful:

[I]n recent inquiries into the potential protective role of some delusional beliefs…it is found that the delusion is not an incomprehensible speech act and can make reference to actual life events. It can arise after trauma or an otherwise significant event that has emotionally affected the person, such as abuse or bullying, and reflects the need for the person to respond to such disruptive experiences – it is meaningful and ‘makes narrative sense’….Such contributions…suggest that challenging or dismissing users’ reports of their own experiences without engagement is both an instance of epistemic injustice, and a missed opportunity to understand at least in part what they are going through and how best they can be helped. The idea that mental illness and in particular psychotic symptoms are evidence of global irrationality is culturally well-entrenched but very poorly supported by the evidence available to us.…[C]hallenging or dismissing users’ reports without engagement has significant risks for their sense of agency at a time when their sense of agency is already potentially undermined…. (Houlders et al., 2021, p. 14)

We take Houlders et al. to mean that because delusions can serve a function for a patient, they can be reasonable (or even rational), so that to discredit or discount them is to unfairly treat the person with such beliefs as irrational. Treating them that way is thus a form of EI, and involves undermining their agency; because a strong sense of one's agency is important in many respects, this kind of EI can be especially harmful.

Problems with allegations of epistemic injustice in psychiatry

Claims that psychiatry is epistemically unjust are, as the examples above suggest, focused on psychiatrists’ epistemic stance toward their patients in virtue of their psychiatric diagnoses: fundamentally, the fear is that because our patients often have a mental illness, and because we think that mental illnesses tend to compromise epistemic ability, we tend to trust patients’ testimony less than we should. It's worth noting, however, that critics could instead worry that psychiatrists are guilty of EI on other grounds, such as race or gender. These kinds of EI exist throughout medicine and are revealed in well-documented race and gender biases in diagnosis and treatment (see, e.g. Hoffman, Trawalter, Axt, & Oliver, 2016). There is no reason to think that psychiatry is immune to such biases, but also no reason to think it is especially prone to them, compared to other fields. It's clear to us that EI based on race or gender is epistemically and morally wrong, and it's obvious that providers should correct such biases whenever they can. We think, however, that claims about specifically psychiatric EI, which is perpetrated on the basis of psychiatric diagnosis per se, are meaningfully different. This is because they demand revision in the epistemic significance attributed to patient characteristics that, unlike race or gender, often do have epistemic significance; such revisions would, if widely made, significantly alter clinical practice.

We have a number of worries about allegations of psychiatric EI, which we will try to describe in broad strokes. First, many such allegations move from a few cases to undue pessimism about psychiatric practice generally. While it's undoubtedly true that EI sometimes occurs in psychiatric work (as in any other human endeavor), and while it's hard to know what psychiatrists actually do behind closed doors, psychiatrists shouldn't change their practice on the basis of a few cases; what's needed is evidence that the cases are representative. None of the authors we cite provide such evidence.

It's also important to emphasize that psychiatry already has tools for criticizing cases like these, contained within standards for good clinical practice. This makes such cases, if not statistical outliers, at least non-normative. Consider, for example, Crichton et al.'s incense-burning former nun. If things played out as the authors report, this case involves straightforwardly inadequate clinical practice. Skilled clinicians can distinguish even highly idiosyncratic religious or spiritual beliefs from psychosis, and should look for independent signs of psychosis that would assist in diagnosis. Similarly, in the case of the chronically suicidal young man, the clinical team should have acquired additional information from the medical record or the patient's outpatient providers, which would have enabled them to better assess his risk of suicide; this, too, is a straightforward clinical failing. Good practice already dictates reviewing relevant available medical records and getting collateral data. So, while these cases are problematic, we don't need to appeal to EI to understand why they are. Standards of good clinical practice are enough.

We'd wonder, too, whether all of the cases presented are as ethically problematic as they initially seem. In Sanati and Kyratsous's first case, the patient (who was rightly afraid that her partner was going to leave her) is incorrectly discredited. But is this a moral wrong or just a mistake? Though the clinical details provided are thin, we would speculate that this error was understandable, because the patient really did have psychosis and her claim, though true, was antecedently improbable. Similarly, in Crichton et al.'s case of the chronically suicidal young man, the clinical team's decision to admit him involuntarily could, setting aside their failure to get collateral information, have been a reasonable response to their obligation to prevent suicide in seemingly high-risk individuals, even when those persons may otherwise downplay that risk. We're not told what the patient did to affect the clinical team's perception of his risk: did he say he wasn't acutely suicidal, or instead lead them to believe that he was at high risk? In our experience, patients in this sort of scenario often give mixed messages, making it morally appropriate to err toward suicide prevention.

We also worry that EI critiques reflect a subtle misunderstanding of how psychiatric care is provided (see footnote 1). The authors mentioned above seem to assume that psychiatrists usually regard their patients (especially those with psychotic disorders) as unreliable reporters about most things. But this is simply false; psychiatrists are generally very interested in what their patients tell them and typically regard their testimony as reliable. Indeed, they must do so, since the very foundation of psychiatric information-gathering is patient testimony (this is also an indication that, contra Crichton et al., psychiatrists tend to value ‘soft’ evidence). To be sure, psychiatrists sometimes bring skepticism to bear: the patient says he is the scion of a famously wealthy family, when his mannerisms, dress, and general circumstances suggest otherwise, so we ask extra questions or try to get verification, recognizing that the veracity of his claim has implications for diagnosis. Still, we know of no psychiatrists who assume that because a patient seems to be delusional about his ancestry, he is incapable of providing reliable information about anything else in his life.

Proponents of EI also overlook the fact that psychiatry incorporates epistemic checks and balances. Suppose a patient claiming to be Elon Musk's adult son is admitted to the hospital. We will probably give low credence to his claim at the start. But in addition to serially interviewing him, we will try to contact his relatives, review available medical records, and call his previous medical providers. If the testimony about which we are skeptical is true, we will likely discover it. If it is false, we will confirm that it is. Thus, there are safeguards against discrediting a patient incorrectly by the end of our work with him. Allegations of EI seem to reflect an idea that believing is something a psychiatrist does all at once, from the very start of an encounter. But in practice belief (really, the ascription of probability to a claim) is something that evolves over time.
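To put the same point in slightly more formal terms, this evolving credence can be thought of along Bayesian lines. What follows is only a stylized sketch with hypothetical numbers of our own choosing (a prior of 0.05 and likelihoods of 0.9 and 0.02 are illustrative, not drawn from any study): letting T stand for "the patient's claim is true" and E for "a relative independently corroborates it", a single piece of strong collateral information can move a clinician's credence from roughly 5% to roughly 70%.

\[
P(T \mid E) \;=\; \frac{P(E \mid T)\,P(T)}{P(E \mid T)\,P(T) + P(E \mid \neg T)\,P(\neg T)}
\;=\; \frac{0.9 \times 0.05}{0.9 \times 0.05 + 0.02 \times 0.95} \;\approx\; 0.70
\]

The point is not that psychiatrists literally calculate probabilities, but that credence is revisable and evidence-sensitive rather than fixed at the outset of an encounter.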

We may also have frank disagreements with proponents of EI about when psychiatrists are obligated to ‘believe’ their patients. Suppose we are consulting in the ER on a patient who is suicidal and depressed. He denies having made any suicide attempt in recent days, but we note that his acetaminophen level is nearly toxic and that there are circumferential abrasions on his neck that suggest a recent hanging attempt. We'd initially be inclined to doubt his testimony, and to look for additional information (alternative explanations for the abrasions) to help us discern whether his denial is accurate. We worry that many advocates for epistemic justice would regard this as morally wrong, and say we should simply take the patient at his word. We suspect they think our obligation to believe him outweighs our obligation to critically review all of the evidence. In contrast, we think our obligations to patients sometimes demand skepticism instead of credence.

We agree with Houlders et al. that psychiatrists should recognize that patients’ unusual symptoms, such as delusions, can be meaningful and even helpful to them, as Bortolotti argues elsewhere with the concept of epistemic innocence (Bortolotti, 2015). But we disagree that this makes such beliefs epistemically reliable such that doubting them is a form of EI (as opposed to simply not conducive to good rapport). Houlders et al. cite a small study (McCabe, Skelton, Heath, Burns, & Priebe, 2002) describing how psychiatrists in a clinic setting frequently did not acknowledge patients’ attempts to discuss their symptoms. They infer from this that psychiatrists are not particularly interested in engaging patients in such discussions, and that they do not see the ways symptoms can be meaningful. Houlders et al. do not mention, however, that psychiatrists in the study asked patients about their symptoms just as often as patients brought them up (though often with different objectives). We also wonder whether there could be good reasons for the psychiatrists’ failures to show interest in their patients’ remarks – was it old terrain, or was the remark made near the end of the appointment? In our experience, most psychiatrists recognize that delusions can serve important functions for patients, such as making sense of otherwise confusing experiences (Kapur, 2003), facilitating denial of painful facts, or even preserving self-esteem (Smith, Freeman, & Kuipers, 2005). We would guess that practicing psychiatrists usually think it is important not only to avoid challenging patients’ delusions, but to validate and explore the feelings they have about them and the difficulties associated with them – while avoiding outright agreement (Amador & Johanson, 2000). Finally, we'd note that our field has already tried its hand at making clinical use of the meaning of delusions; most evidence to date suggests, however, that psychoanalytic approaches that engage deeply with delusions are of limited benefit (Yakeley, 2018).

The risks of using epistemic injustice as a tool for psychiatric reform

Accordingly, we do not think psychiatrists should, as a rule, alter their practice in response to claims that psychiatry is epistemically unjust. Such claims involve misrepresentations of typical psychiatric practice, invoke epistemic principles that are unreasonable, or pinpoint outlying problems that are addressed by standards for information-gathering, inference, and relationship-building that psychiatry (like other medical fields) already accepts. To that extent, the concept – especially the suggestion that psychiatrists often perpetrate testimonial injustice – is not clinically helpful. The concept of hermeneutic injustice, in contrast, may have important roles in other domains, such as supporting the engagement of patients and patient advocacy groups in setting policy and formulating psychiatric diagnostic categories. For instance, Anke Bueter (2019) has argued that the exclusion of service users from the design processes for psychiatric classification, such as the DSM-5, is a form of hermeneutic injustice. She emphasizes that because psychiatric diagnoses are value-laden, and because patient experiences are crucial to ascertaining correct diagnoses, the perspectives of patients are vital. But a detailed evaluation of such arguments is beyond the scope of this paper.

There are also risks associated with becoming too concerned with epistemic justice in the clinic. One is that it could inadvertently sideline principles of good clinical reasoning, replacing them with a quest for social justice that makes little room for skepticism or critical thinking. Psychiatrists have to work hard to balance truth-finding, respect for patients’ agency, maintaining rapport, and their own countertransference. In an environment where epistemic justice is emphasized, we worry the scales could be tilted too far away from truth-finding. It would be all too easy, harried by the fear of even appearing to be a perpetrator of injustice, to feel that we ought to act as though we believe everything patients tell us – which would be antithetical to appropriate clinical skepticism.

Similarly, we fear that an emphasis on EI would problematically alter the expectations patients have regarding their relationship with their psychiatrist. Would increasing attention to EI lead patients to expect uniform acceptance of their ideas about diagnosis and treatment – an expectation that has already manifested itself in other domains in some countries, as with patient demands to receive treatments for COVID-19 that are not evidence-based? It is not obvious to us that the remedy for declining trust in the medical profession is for the profession to abdicate its claims to expertise. On the contrary, we think a critical role for psychiatrists is to help our patients recognize things they might otherwise not see, if only through gentle nudges, applied gradually over time.

Admittedly, calls for epistemic justice in psychiatry are animated by real concerns. Patients are sometimes dissatisfied with their care and with the relationship they have with their psychiatrist. They sometimes feel they are treated disrespectfully (and sometimes are treated disrespectfully). They may even feel that they are victims of EI per se (Harris, Andrews, Broome, Kustner, & Jacobsen, 2022). It is also possible that, when EI does arise, it leads to real harms – misdiagnosis, inappropriate care, and distress, among others. Psychiatric conditions are still unduly stigmatized. Patients are not always eager to participate in treatment. Psychiatric treatments are not as efficacious as anyone would like and are still not as available as they should be. Like all other humans, psychiatrists are prone to implicit biases. Some think the field is too focused on pharmacotherapy and not enough on psychosocial interventions or personal connection. Psychiatric work involves value judgments that are often controversial. Psychiatry can be very intrusive and sometimes involves coercion and the deprivation of specific liberties; it sometimes even harms patients. The field desperately needs to address these issues, and others (Gardner & Kleinman, 2019).

Though it will not resolve all of these problems, we would propose that psychiatrists should, rather than concerning themselves particularly with epistemic justice, continue to seek the delicate balance between trusting their patients most of the time, doubting their patients when they have good clinical reason to do so, and building relationships that are resilient enough to contain disagreement, doubt, misunderstanding, and even the occasional error. There's one claim made by proponents of epistemic justice that we heartily endorse: the patient–physician relationship is powerful, sometimes life-changing, occasionally even identity-defining. But we think psychiatrists already know this: our approach to our field is, despite its issues, founded in an abiding respect for our patients as persons who matter.

Author contributions

Each author made substantial contributions to the conception of the work, drafted the work or revised it critically for important intellectual content, gave final approval of the version to be published, and agrees to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Financial support

Dr Kious received funding from the Greenwall Foundation's Faculty Scholars Program in Bioethics. Dr Kim is supported by the Intramural Research Program of the NIH.

Conflict of interest

None of the authors identifies any relevant conflicts of interest. Dr Kim is a federal employee, but the opinions expressed are his own and do not represent the views or policies of any part of the US government.

Ethical standards

Not applicable; no human subjects participated in this study.

Consent statement

Not applicable; no human subjects participated in this study.

Footnotes


1 To the objection that psychiatrists are the ones writing some of these articles, we would suggest that being a psychiatrist does not protect one from misunderstandings – or more likely, misrepresentations – of one's own field when in the grip of an idea. This should be no more surprising than the possibility of an anti-psychiatric psychiatrist, a familiar figure in the philosophy of psychiatry.

References

Amador, X., & Johanson, A.-L. (2000). I am not sick, I don't need help! Helping the seriously mentally ill accept treatment. New York, NY: Vida Press.
Bortolotti, L. (2015). The epistemic innocence of motivated delusions. Consciousness and Cognition, 33, 490–499. doi: 10.1016/j.concog.2014.10.005
Bueter, A. (2019). Epistemic injustice and psychiatric classification. Philosophy of Science, 86(5), 1064–1074. doi: 10.1086/705443
Carrotte, E. R., Hartup, M. E., Lee-Bates, B., & Blanchard, M. (2021). ‘I think that everybody should be involved’: What informs experiences of shared decision-making in supporting people living with schizophrenia spectrum disorders? Patient Education and Counseling, 104(7), 1583–1590. doi: 10.1016/j.pec.2020.11.012
Crichton, P., Carel, H., & Kidd, I. J. (2017). Epistemic injustice in psychiatry. BJPsych Bulletin, 41(2), 65–70. doi: 10.1192/pb.bp.115.050682
Daya, I., Maylea, C., Raven, M., Hamilton, B., & Jureidini, J. (2020). Defensive rhetoric in psychiatry: An obstacle to health and human rights. The Lancet Psychiatry, 7(3), 231. doi: 10.1016/S2215-0366(19)30534-6
Drożdżowicz, A. (2021). Epistemic injustice in psychiatric practice: Epistemic duties and the phenomenological approach. Journal of Medical Ethics, 47(12), e69. doi: 10.1136/medethics-2020-106679
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. New York, NY: Oxford University Press.
Gagné-Julien, A.-M. (2021). Wrongful medicalization and epistemic injustice in psychiatry: The case of premenstrual dysphoric disorder. European Journal of Analytic Philosophy, 17(2), S4–S36. doi: 10.31820/ejap.17.3.3
Gardner, C., & Kleinman, A. (2019). Medicine and the mind – the consequences of psychiatry's identity crisis. New England Journal of Medicine, 381(18), 1697–1699. doi: 10.1056/NEJMp1910603
Groot, B., Haveman, A., & Abma, T. (2020). Relational, ethically sound co-production in mental health care research: Epistemic injustice and the need for an ethics of care. Critical Public Health, 32(2), 230–240. doi: 10.1080/09581596.2020.1770694
Harcourt, E. (2021). Epistemic injustice, children and mental illness. Journal of Medical Ethics, 47(11), 729–735. doi: 10.1136/medethics-2021-107329
Harris, O., Andrews, C., Broome, M. R., Kustner, C., & Jacobsen, P. (2022). Epistemic injustice amongst clinical and non-clinical voice-hearers: A qualitative thematic analysis study. British Journal of Clinical Psychology, 61, 947–963. doi: 10.1111/bjc.12368
Hoffman, K. M., Trawalter, S., Axt, J. R., & Oliver, M. N. (2016). Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proceedings of the National Academy of Sciences, 113(16), 4296–4301.
Houlders, J. W., Bortolotti, L., & Broome, M. R. (2021). Threats to epistemic agency in young people with unusual experiences and beliefs. Synthese, 199(3), 7689–7704. doi: 10.1007/s11229-021-03133-4
Kapur, S. (2003). Psychosis as a state of aberrant salience: A framework linking biology, phenomenology, and pharmacology in schizophrenia. American Journal of Psychiatry, 160(1), 13–23. doi: 10.1176/appi.ajp.160.1.13
Kurs, R., & Grinshpoon, A. (2018). Vulnerability of individuals with mental disorders to epistemic injustice in both clinical and social domains. Ethics and Behavior, 28(4), 336–346. doi: 10.1080/10508422.2017.1365302
McCabe, R., Skelton, J., Heath, C., Burns, T., & Priebe, S. (2002). Engagement of patients with psychosis in the consultation: Conversation analytic study. BMJ, 325(7373), 1148–1151. doi: 10.1136/bmj.325.7373.1148
Ritunnano, R. (2022). Overcoming hermeneutical injustice in mental health: A role for critical phenomenology. Journal of the British Society for Phenomenology, 53(3), 1–18. doi: 10.1080/00071773.2022.2031234
Sanati, A., & Kyratsous, M. (2015). Epistemic injustice in assessment of delusions. Journal of Evaluation in Clinical Practice, 21(3), 479–485. doi: 10.1111/jep.12347
Smith, N., Freeman, D., & Kuipers, E. (2005). Grandiose delusions: An experimental investigation of the delusion as defense. Journal of Nervous and Mental Disease, 193(7), 480–487. doi: 10.1097/01.nmd.0000168235.60469.cc
Tate, A. J. M. (2019). Contributory injustice in psychiatry. Journal of Medical Ethics, 45(2), 97–100. doi: 10.1136/medethics-2018-104761
Yakeley, J. (2018). Psychoanalysis in modern mental health practice. The Lancet Psychiatry, 5(5), 443–450. doi: 10.1016/S2215-0366(18)30052-X
Zisman-Ilani, Y., Roth, R. M., & Mistler, L. A. (2021). Time to support extensive implementation of shared decision making in psychiatry. JAMA Psychiatry, 78(11), 1183–1184. doi: 10.1001/jamapsychiatry.2021.2247