
Kaleidoscope

Published online by Cambridge University Press:  24 September 2019

Copyright © The Royal College of Psychiatrists 2019 

Cognitive change over time in psychosis: is decline continuous, generalised and specific to schizophrenia? Despite recognition of the profound impact of cognitive dysfunction on prognosis in psychotic illness, these questions have largely gone unanswered, with relatively few studies tracking individuals longitudinally over many years. Jolanta Zanelli et al address this by following up just over 100 participants with first-episode psychotic illnesses (65 with schizophrenia), using a broad neuropsychological battery at initial presentation and again a decade later. Compared with a matched healthy cohort, all those with psychosis had baseline deficits in IQ.1 Those with schizophrenia showed further deterioration in IQ over time, with increasing deficits in verbal knowledge and memory, but no further changes evident in executive functioning or processing speed. In those with ‘other psychoses’, subsequent change was limited to verbal learning.

The findings support the ‘IQ decline hypothesis’ – namely, that there is a drop in functioning over time. However, they go against the ‘generalised decline’ theory: changes were not equal in magnitude across the domains tested, and varied between schizophrenia and other psychoses. Symptom severity was associated with the degree of change, but only in those with schizophrenia; interestingly, the use, duration or type of antipsychotic medication had no effect on changes in cognition. The results remind us that cognitive functioning is a key factor for clinicians to consider, especially as some aspects are more prone to decline and may affect the support individuals require. In a world moving away from ‘schizophrenia’ to a ‘psychosis spectrum’, it is also a prompt that not all psychoses are the same.

‘Non-specific effects’ is a common throwaway phrase in research, yet, as with the ‘placebo effect’, something positive is happening to patients – so shouldn't we better understand it? The phrase applies to anything not directly intended by a theoretical model or treatment, for example the manner in which we engage or speak with a person. Priebe et al reviewed the literature across a diverse range of psychiatric treatments.2 Although the research assayed was quite heterogeneous, clinician communication was a key non-specific aspect, clustering into verbal and non-verbal components. The former included initial contacts, empathy, clear communication and clinicians picking up cues about unspoken worries; the latter, factors such as clinician warmth, listening, a positive tone of voice and pro-social postures. How treatments were framed emerged as important, although there were interesting differences here: there was some evidence that patients new to services appreciated a more optimistic pitch, whereas those already in contact with services favoured a more tempered approach. Shared decision-making about treatment and care was important, and encouragingly there were data showing this to be viable and productive even in those detained involuntarily.

These non-specific factors have more of an impact on what the authors call ‘process measures’ – the therapeutic relationship, patient satisfaction and adherence – than on clinical measures such as symptom relapse. Crucially, the small literature that exists on the topic suggests that brief training courses can enhance these non-specific elements in clinical contacts, leading to better outcomes. The paper taps into a collective wisdom we all share from our own practice, but highlights how little this is subjected to scientific scrutiny, whether in measuring impact or in identifying which aspects are more or less effective. Further, our continuing professional development and training typically emphasise accrual of more ‘factual’ knowledge, and far less, it would seem, enhancement of these key skills that clearly benefit patient care.

In addition to elevated levels of corticotropin-releasing factor (CRF), those with post-traumatic stress disorder (PTSD) show several alterations of the glucocorticoid system linked to the symptoms and severity of the disorder. Glucocorticoid-induced leucine zipper (GILZ) is a transcription factor activated by stress markers, shown to have an impact on hippocampal and cortical dendritic spine integrity, and is used as a reliable indicator of glucocorticoid pathway sensitivity. Looking to elucidate the role of GILZ, Lebow and colleagues used a transgenerational model to induce PTSD in mice.3 Doxycycline (dox) was administered via drinking water to an experimental group of pregnant females once in late term, a time known as a critical window for stress reactivity and impact on epigenetic programming. Delivered this way to avoid the stress of handling, which often confounds such experiments, the dox activated a previously inserted lentiviral vector, causing continuous overexpression of CRF. Although the dams delivered early, their maternal behaviour was unaffected. However, their male pups showed an early dysregulation of the glucocorticoid system. Pups were undisturbed until adulthood, at which point a portion underwent a stress-enhanced fear learning paradigm and behavioural tests to identify those that were ‘PTSD-like’. Although the prenatal stressor had no impact on the baseline anxiety of the mice, it did increase the likelihood of PTSD-like behaviours after the adult trauma in males, but not females. Reductions in GILZ messenger (m)RNA and methylation levels in amygdalar tissue were evident and corresponded to the number of stressors experienced, again only in males. Finally, to confirm these findings, the authors silenced GILZ in the amygdala with RNA interference in adulthood, which mimicked the double exposure to stressors in the PTSD induction and caused corresponding PTSD-like behaviours in the mice.

Following up in humans, the authors explored how GILZ interacts with early-life stress, multiple stress exposure and current diagnosis by recruiting a subset of 435 participants from the Grady Trauma Project. Gene expression and DNA methylation were measured via microarray, and clinical assessments were performed, including a modified PTSD Symptom Scale, the Clinician-Administered PTSD Scale and the Traumatic Life Inventory. GILZ mRNA and methylation levels correlated with current PTSD diagnosis, severity of abuse exposure and number of traumatic incidents in men. GILZ is located on the X chromosome, leaving males more vulnerable to the impact of its alteration; together, these animal and human data suggest GILZ is an epigenetically regulated quantifier of accumulating stressful or traumatic experiences across a lifetime in men. As a marker of susceptibility to the development of PTSD, GILZ could be measured in those with a known history of trauma as a way to target preventative measures at the vulnerable.

The cultural anthropologist Margaret Mead was not a clinical trialist, but her statement ‘Always remember that you are absolutely unique. Just like everyone else’ might have been apt. There has been much debate over the years about randomised controlled trials (RCTs) only capturing average effects of a treatment in highly selected samples that bear little resemblance to the ‘real patients’ clinicians see in everyday practice. A related idea was recently put forward by Krauss, who analysed the ten most cited RCTs and concluded that trials ‘inevitably produce bias’ by virtue of participants not being truly equivalent between arms of a trial, and that they neglect to explore alternative factors contributing to their main outcomes.4 There is a counterpoint to Krauss in Harrell's blog.5 Perhaps more than other specialities, psychiatry has reason to hope that differential or heterogeneous treatment response is real, because we cannot yet explain why two people derive some or no benefit from the same medication or intervention. One seductive and visual illustration is Simpson's paradox where, for example, subgroups of a sample (say, people aged 60 to 70 years) show a positive benefit with an antihypertensive drug, but when analysing for an effect over all ages (the whole sample) there is no overall effect of treatment. In this case, there is a differential effect of treatment when conditioned on another variable (age); a toy numerical sketch of this appears below. Statistically, what we really desire is to understand patient × treatment interactions, but often we do not have adequate trial design or data (we need expensive repeated cross-over designs to establish this). Worryingly, we are likely to be seduced by methods that promise us a way to identify who will (or will not) benefit; perhaps the most familiar being ‘responder analysis’ based on subgroups of patients that showed a response above or below a dichotomising threshold. And now we have personalised medicine, facilitated by a boom in technological approaches including mining electronic health records, wearable devices and the application of machine learning, where (perhaps overoptimistic) bold claims are made, such as Perna et al's statement that ‘Theoretically, predictive tools may be developed for nearly all clinically relevant questions, assisting clinicians when making decisions with patients’.6
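To make Simpson's paradox concrete, here is a minimal sketch in Python. All counts are invented for illustration, and for vividness the example shows the extreme form of the paradox, in which the pooled effect actually reverses rather than merely vanishing; the mechanism is that treatment allocation differs across the age strata.

```python
# Hypothetical illustration of Simpson's paradox: within each age stratum
# the drug beats control, yet the pooled comparison reverses because the
# arms are unevenly allocated across strata. All counts are invented.

def rate(responders: int, total: int) -> float:
    return responders / total

# (responders, total) per arm, stratified by age
strata = {
    "age 60-70": {"drug": (90, 100),   "control": (800, 1000)},
    "age 70-80": {"drug": (300, 1000), "control": (20, 100)},
}

for name, arms in strata.items():
    d, c = rate(*arms["drug"]), rate(*arms["control"])
    print(f"{name}: drug {d:.0%} vs control {c:.0%} (drug better)")

# Pooling across strata flips the direction of the effect.
drug_r = sum(a["drug"][0] for a in strata.values())
drug_n = sum(a["drug"][1] for a in strata.values())
ctrl_r = sum(a["control"][0] for a in strata.values())
ctrl_n = sum(a["control"][1] for a in strata.values())
print(f"pooled: drug {rate(drug_r, drug_n):.0%} vs control {rate(ctrl_r, ctrl_n):.0%}")
```

With these numbers the drug wins in both strata (90% v. 80%, and 30% v. 20%) yet loses on pooling (35% v. 75%), which is exactly why an overall trial effect can mask, or manufacture, apparent subgroup differences.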

So, before we get excited about personalising treatments, should we not first look for evidence that patients actually do respond differently to them? In the context of antipsychotic treatments for psychotic disorders, this is what Winkelbeiner et al examined: noting that ‘An assumption among clinicians and researchers alike is that the response to antipsychotic drugs by patients with psychosis differs considerably between individuals’, they set out to test it by meta-analysing 52 RCTs of antipsychotics.7 The rationale behind their approach is this: in both the control and treatment arms of a trial, the spread of pre- and post-treatment symptom scores is attributable to sources that include within-participant variation; in the treatment arm, however, there is an additional source of variation attributable to a patient × treatment interaction (if there is one). So, one might reasonably assume that if the treatment arm shows more variation than the control arm, this would be some evidence for variation in individual response. Winkelbeiner et al derived a log variability ratio to measure this contrast in variation over the 52 RCTs. Here is the punchline: rather than a relative increase in variability (suggestive of individual response), they found lower variability in the treatment versus control arms. Further, looking at each individual antipsychotic, they found the same pattern. They helpfully conclude by reminding us that RCTs ‘… provide unbiased estimates of the relative efficacy of an intervention, which even the largest observational studies cannot provide’ (emphasis added) and, further, they counter the ‘placebo response’ objection by noting that if such effects were occurring they would (by virtue of randomisation) be present in both control and treatment arms and would cancel out.
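For those curious about the mechanics, the sketch below computes a bias-corrected log variability ratio (lnVR) of the kind used in such analyses – the estimator popularised by Nakagawa and colleagues – and pools it by simple inverse-variance weighting. The trial numbers are invented, and the published meta-analysis is more elaborate (random-effects pooling, a coefficient-of-variation variant, and so on), so treat this as a minimal sketch rather than a reproduction of their method.

```python
import math

def ln_vr(sd_t: float, n_t: int, sd_c: float, n_c: int):
    """Bias-corrected log ratio of treatment to control SD, plus its
    sampling variance (Nakagawa-style estimator)."""
    est = math.log(sd_t / sd_c) + 1 / (2 * (n_t - 1)) - 1 / (2 * (n_c - 1))
    var = 1 / (2 * (n_t - 1)) + 1 / (2 * (n_c - 1))
    return est, var

# Hypothetical trials: (SD_treatment, n_treatment, SD_control, n_control)
trials = [(18.0, 120, 20.0, 118), (15.5, 200, 16.0, 195), (21.0, 90, 23.5, 92)]

# Fixed-effect (inverse-variance weighted) pooled estimate.
ests_vars = [ln_vr(*t) for t in trials]
weights = [1 / v for _, v in ests_vars]
pooled = sum(w * e for (e, _), w in zip(ests_vars, weights)) / sum(weights)

# pooled < 0 means LESS spread in the treatment arm -- the pattern the
# meta-analysis reports, arguing against marked individual response.
print(f"pooled lnVR = {pooled:.3f} (VR = {math.exp(pooled):.3f})")
```

In these made-up trials every treatment arm has a smaller standard deviation than its control, so the pooled lnVR comes out negative, mirroring the direction of the published finding.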

Finally, we like to think of Kaleidoscope as the No Spin Zone, not least as we are all avid Fox News fans. How much spin goes on in the abstracts of scientific articles? Does authors' ‘amusing’ use of ‘mind the gap’ and inane song lyrics in paper titles bedazzle us away from an oversell on the abstract front? Although research conventions and standards set out how RCT results should be reported, these do not apply to abstracts. Do this lack of consensus and authors' understandable desire to highlight the merits of their work in the shop-window of those opening 250 words make it too tempting to stray from the truth? Jellison et al undertook a cross-sectional review of clinical RCTs with non-significant primary end-points published in six leading psychiatry and psychology journals – including our own BJPsych – between 2012 and 2017.8 Unlike Bill O'Reilly, they defined spin as ‘use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results’. Their paper included 116 RCTs, with spin found in 56%, most commonly in the abstract results and conclusion sections. Interestingly, there was no relationship between industry funding and spin. The findings matter: we are all guilty of skimming papers by just reading the abstracts – and, in part, that is what abstracts are for – and you come to Kaleidoscope because you are too lazy to do your own in-depth literature review each month, right? The authors suggest establishing standards for abstracts and actively inviting reviewers to comment on the presence of any spin in papers assessed; we are happy to report we found none in theirs.

References

1 Zanelli, J, Mollon, J, Sandin, S, Morgan, C, Dazzan, P, Pilecka, I, et al. Cognitive change in schizophrenia and other psychoses in the decade following the first episode. Am J Psychiatry 1 Jul 2019 (doi: 10.1176/appi.ajp.2019.18091088).
2 Priebe, S, Conneely, M, McCabe, R, Bird, V. What can clinicians do to improve outcomes across psychiatric treatments: a conceptual review of non-specific components. Epidemiol Psychiatr Sci 15 Aug 2019 (doi: 10.1017/S2045796019000428).
3 Lebow, MA, Schroeder, M, Tsoory, M, Holzman-Karniel, D, Mehta, D, Ben-Dor, S. Glucocorticoid-induced leucine zipper “quantifies” stressors and increases male susceptibility to PTSD. Transl Psychiatry 2019; 9: 178.
4 Krauss, A. Why all randomised controlled trials produce biased results. Ann Med 2018; 50: 312–322.
5 Harrell, F. Randomized Clinical Trials Do Not Mimic Clinical Practice, Thank Goodness. Statistical Thinking, 2018 (https://www.fharrell.com/post/rct-mimic/).
6 Perna, G, Grassi, M, Caldirola, D, Nemeroff, CB. The revolution of personalized psychiatry: will technology make it happen sooner? Psychol Med 2018; 48: 705–713.
7 Winkelbeiner, S, Leucht, S, Kane, JM, Homan, P. Evaluation of differences in individual treatment response in schizophrenia spectrum disorders: a meta-analysis. JAMA Psychiatry 3 Jun 2019 (doi: 10.1001/jamapsychiatry.2019.1530).
8 Jellison, S, Roberts, W, Bowers, A, Combs, T, Beaman, J, Wayant, C, et al. Evaluation of spin in abstracts of papers in psychiatry and psychology journals. BMJ Evid Based Med 5 Aug 2019 (doi: 10.1136/bmjebm-2019-111176).