
Part I - Social and Institutional Contexts

Published online by Cambridge University Press:  17 June 2021

George Ikkos
Affiliation:
Royal National Orthopaedic Hospital
Nick Bouras
Affiliation:
King's College London

Mind, State and Society: Social History of Psychiatry and Mental Health in Britain 1960–2010, pp. 3–68. Cambridge University Press, 2021.

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Chapter 1 Historical Perspectives on Mental Health and Psychiatry

Joanna Bourke
Introduction

Robairt Clough was a long-term patient at Holywell Psychiatric Hospital in Antrim (Northern Ireland). In the Christmas 1972 edition of the patients’ magazine, he wrote the following verses:

Little drops of medicine,
Little coloured pills,
Cure us of all ailments,
Cure us of all ills.
Doctors with a stethoscope,
Doctors with a bag,
Make us fit and healthy,
Cheer us when we flag.
Pretty little nurses
Nurses bold and strong,
Nurses with a banjo,
Nurses sing a song.
Wardsmaids with the dinner,
Wardsmaids with the tea,
Keep our tummies happy,
Until the day we’re free.1

This little ditty from 1972 draws attention to some of the dominant themes in the social history of mental health and psychiatry in Britain from the 1960s to the 2010s. These include the championing of psychopharmacology (those ‘little coloured pills’), symbolic representations of psychiatric authority (stethoscope and medicine bag), the emotional management of patients (‘pretty little nurses’ with their songs), cultures of conviviality (keeping tummies happy) and, of course, the ominous mood instilled by disciplinary practices and environments (they wait ‘Until the day we’re free’). The ditty acknowledges the social meanings generated when psychiatric patients and health professionals meet. Each participant brings to the encounter a multitude of identities based on gender, class, ethnicity, sexuality, generation, age, religion and ideological dogmas as well as memories of past encounters, sensual perceptions and embodied knowledges. As a result, any overview of the history of British psychiatry needs to grapple with the tension between temporally fluid societal contexts and the sensual intimacy of very personal human encounters (i.e. seeing the Other, touching, stories only partly heard, bodily smells, the metallic taste of pills).

The historiographical literature on mental health and psychiatry from the 1960s to the end of the first decade of the twenty-first century is exceptionally detailed in terms of policy and politics. Scholars have drawn on Foucault’s ‘great confinement’ (particularly his book Madness and Civilization), Roy Porter’s patients’ perspectives, material culture and the powerful themes of discipline, power and social construction.2 In contrast to these grand political and theoretical themes, this chapter seeks to explore some of the most significant changes in the lived experiences of mental health and psychiatry. It pays attention to social encounters between medical professionals and patients.

It is important to acknowledge, however, that the medical professionals discussed in this chapter are not the main caregivers. People identified as having ‘mental health issues’ engage in their own practices of self-care. They often have strong familial networks (especially those sustained by mothers, grandmothers, daughters and sisters). They routinely seek advice from health-oriented journalists, radio and television programmes, teachers, police, pharmacists and doctors’ receptionists, not to mention herbalists, tarot card readers, astrologers, psychics and faith healers. Since the 1960s, Britain has undoubtedly become more secular, but distressed people continue to seek the laying on of hands. As I argue in my book The Story of Pain: From Prayer to Painkillers, secularisation is honoured more in rhetoric than in reality.3 The same people who attend consultations with cognitive behavioural therapy (CBT) therapists eventually return home, where they call out to their gods for relief of mental suffering.

Psychiatric professionals, however, claim to provide a superior tier of help to people who identify themselves, or are identified by others, as experiencing mental health problems. In the period from the 1960s to 2010, there were six major shifts in encounters between these professionals and their patients: deinstitutionalisation, changes in diagnostic nomenclature, anti-psychiatry, patients’ movements, evidence-based medicine and the privileging of psychopharmacology, neurochemistry and neurobiology. These themes overlap to varying degrees.

Deinstitutionalisation

The first theme is deinstitutionalisation (see also Chapter 31). The closure of the Victorian public asylum system and its replacement with community-based psychiatric services is the greatest social shift for patients and professionals since the 1960s. Although the movement had a long history and was by no means confined to Britain, a decisive moment for British practices occurred in 1961 when Enoch Powell, the minister of health, gave a speech at the annual conference of the National Association for Mental Health (now called Mind). In it, Powell predicted that, within fifteen years, ‘there may well be needed not more than half as many places in hospitals for mental illness as there are today’. He described the Victorian asylums as standing ‘isolated, majestic, imperious, brooded over by the gigantic water-tower and chimney combined, rising unmistakable and daunting out of the countryside’. These asylums ‘which our forefathers built with such immense solidity to express the notions of their day’ were now outdated. He maintained that it was necessary for ‘the medical profession outside the hospital service … to accept responsibility for more and more of that care of patients which today is given inside the hospitals’. They had to be ‘supported in their task’ by local authorities.4 In the years that followed Powell’s speech, mentally ill people moved repeatedly between hospitals, care centres, other local facilities and family homes, as well as within a growing sector of ‘for-profit’ care. The belief that welfare dependency was obsolete and morally corrupting grew; the mentally ill were not only encouraged but required to become independent and autonomous.

Deinstitutionalisation had major societal effects. Attacks on the welfare state by the governments of Margaret Thatcher and John Major contributed to the stigmatisation of the poor and mentally ill. Asylums were emptied and converted into luxury homes, parks and other public centres. In the 1950s, there were 150,000 psychiatric beds in England; by 2006, this had fallen to 34,000.5 The underfunding of mental health services, especially when compared to physical health facilities, plagues the entire field.6

The emergence of therapeutic communities in the 1950s, led by Maxwell Jones, encouraged other psychiatrists such as John Wing, director of the Medical Research Council’s Social Psychiatry Research Unit between 1965 and 1989, to propagate the concept of ‘rehabilitation’. Prior to this period, ‘rehabilitation’ had primarily referred to assistance for the physically injured, especially ex-servicemen, but it was reworked as a way of providing mentally unwell patients with the skills necessary for independent living.7 Prescriptions for drugs skyrocketed. Prisons, too, had to deal with soaring numbers of inmates diagnosed with mental illnesses, although it is unclear whether this was due to the ‘psychiatrisation’ of criminality or to a greater recognition of mental illness more generally.8

There is no consensus about whether deinstitutionalisation has been a good or bad thing. Was it an enlightened move, facilitated by the introduction of more effective drugs and seeking to give greater autonomy to the mentally ill? Or was it a component of right-wing policies determined to slash public spending, irrespective of the effects on vulnerable members of society?

Few commentators, though, dispute the fact that deinstitutionalisation created many new problems. Many people who needed support lost contact with mental health services altogether.9 Community provision was often inadequate or non-existent in the first place. After all, in 1959, there were only twenty-four full-time psychiatric social workers employed by local authorities throughout England. This problem was exacerbated by the balance-of-payments crisis and inflation of the 1970s, which left local authorities struggling to cope.10 Even Frank Dobson, secretary of state for health between 1997 and 1999, admitted that ‘care in the community has failed’. He observed that

people who are mentally ill, their carers and the professional staff responsible for their welfare have suffered from ineffective practices, an outdated legal framework and lack of resources … Discharging people from institutions has brought benefits to some. But it has left many vulnerable patients to try to cope on their own … Too many confused and sick people have been left wandering the streets and sleeping rough.11

Deinstitutionalisation was a particular problem for the most severely and chronically ill, who required long-term care. One of the groups most ruthlessly affected was older people. They often suffer from complex, overlapping psychiatric conditions, including depression, chronic degenerative brain disorders and lifestyle crises resulting from isolation and bereavement.12 Despite this, mentally ill older people were relegated to a lowly position within the hierarchy of psychiatric patients. As late as the 1940s, there were no specialist services in England for people with senile dementia.13 The first international conference on the subject only took place in London in 1965.14 Even at the end of the 1970s, only half of health service districts provided specialist services for older people with mental health issues.15 Yet older men and women were increasingly and disproportionately represented in psychiatric institutions. For example, in the early 1970s, around 9 per cent of the population of England and Wales were over sixty-five years of age, but they occupied 47 per cent of the psychiatric beds.16 In the words of the psychiatrist W. Alwyn Lishman, who worked in the late 1950s,

Every large mental hospital has a secret large ward tucked away – perhaps three or four … which were not much visited because they were full of old demented people. Because there was no interest in them, it fell to the most junior doctor to go there once a week to see if any one needed to have their chest listened to. The most neglected parts of any mental hospital were the old age wards.17

Deinstitutionalisation and ‘community care’ made these older patients’ situation even worse (see also Chapter 22). Families struggled to cope. Despite the obvious dependence of older people on the state, little investment was forthcoming. Ageism and prejudiced beliefs that nothing could be done to improve their well-being hampered positive responses. In 1973, J. A. Whitehead, consultant psychiatrist at Bevendean Hospital in Brighton, bluntly declared that psychiatric services were ‘loath to deal with [older] patients’. He added that, when this neglect was ‘coupled with society’s fear of mental illness and the ambivalent attitudes to older people in general, the mentally ill old person is in a dismal position’.18

Anti-psychiatry

If the first theme is deinstitutionalisation, with its negative effects on the most vulnerable members of society, including older people, the second is anti-psychiatry (see also Chapter 20). Although the German physician Bernhard Beyer first used the term ‘anti-psychiatry’ in 1912, David Graham Cooper (a South African psychiatrist who trained and worked in London) popularised it in his book Psychiatry and Anti-Psychiatry (1967). In it, Cooper accused psychiatry of being ‘in danger of committing a well-intentioned act of betrayal of those members of society who have been ejected into the psychiatric situation as patients’.19 The movement was led by global figures such as Michel Foucault, Erving Goffman, Ken Kesey and Thomas Szasz and, in Britain, by R. D. Laing, Cooper and, from 1966, the Institute of Phenomenological Studies.20 These anti-psychiatrists were embedded in wider countercultural movements. They maintained political links with global liberationist movements and were attracted by existential psychiatry and phenomenology.

They were a diverse group but all rejected the ‘whiggish’ approach to psychiatry – that is, the ‘rise and rise’ of psychiatric knowledge and power. Instead, they developed damning critiques of the profession, arguing that it was damaging people’s lives. As Laing explained in The Voice of Experience (1982), asylums were total institutions, like concentration camps. When patients enter psychiatric care, he contended, they ‘are mentally dismembered. Raw data go into the machine, as once raw human meat into the mouth of Moloch’ (i.e. the Canaanite god associated with child sacrifice).21

For anti-psychiatrists, society, not biology, was the cause of mental distress. Laing’s highly influential work on schizophrenia strengthened their belief that even major disturbances could be understood as intelligible responses to environments and relationships. Both Laing and Cooper were also highly critical of the bourgeois family, warning that it fostered psychic abuse. For many anti-psychiatrists, mental illness did not even exist. As the scholar Michael Staub explains in Madness Is Civilization (2018), anti-psychiatrists believed that only ‘good fortune and chance’ stood between insanity and mental health.22

Anti-psychiatrists insisted that mental health could be better achieved through therapeutic communities that were anti-hierarchical, people-led, positive and open to the worlds of psychotics, such as Cooper’s experimental unit for young schizophrenics at Shenley Hospital, which ran between 1962 and 1967. Laing even argued that severe psychotic illnesses could be healing, enabling a person to travel into a mystical world and return with greater insight. He established the Kingsley Hall therapeutic community for people with psychotic afflictions in 1965, eradicating hierarchies between patients and doctors.23

The long-term assessment of anti-psychiatry is divided. The movement drew attention to abuses of power and the need to give people control over their own lives. Yet, as the historian Andrew Scull concluded, it also encouraged two views. The first was the ‘romantic notion’ that madness was nothing more than a ‘social construction, a consequence of arbitrary labeling’. The second was that ‘existing mental hospitals were so awful that any alternative must somehow represent an improvement. Sadly, “it just ain’t so”.’24

However, anti-psychiatry gained support from a wide range of people who were becoming critical of what was increasingly seen as the abuse of power by psychiatrists. Lobotomy, insulin coma therapy, drug-induced convulsions and electroconvulsive therapy (ECT) treatments were dubbed instruments of oppression, designed not to heal but to coerce people into behaving in particular ways. The treatment of homosexuals was exposed as particularly repellent. Not only were homosexuals given religious counselling and psychoanalysis in an attempt to ‘cure’ them of their sexual orientation but they were also prescribed oestrogen treatments to reduce libido,25 and between the early 1960s and the early 1970s, they were subjected to behavioural aversion therapy with electric shocks. Many were given apomorphine (which causes severe nausea and vomiting) as an aversive stimulus (see also Chapter 34).26

Revelations of psychiatric abuses swung support behind anti-psychiatrists. Particularly important were Barbara Robb’s Sans Everything: A Case to Answer (1967); a 1968 World in Action documentary; reports that ECT was being given without anaesthetic or muscle relaxant at Broadmoor Hospital; and exposés of abuses at Ely (1969), Farleigh (1971), Whittingham (1972), Napsbury (1973), South Ockendon (1974) and Normansfield (1978) hospitals. One of the consequences of the earlier scandals was the setting up of the Health Service Commissioner (Ombudsman) in 1972. Dr Louise Hide discusses these scandals in Chapter 7 in this volume.

Patients’ Movements

Scandals, coupled with rising numbers of voluntary patients (admitted to mental hospitals since the Mental Treatment Act of 1930), resulted in the third major shift in the social history of mental health and psychiatry in Britain after the 1960s. This was the rise of parents’ and then patients’ movements (see also Chapters 13 and 14).

There was a range of reasons for the frustration felt by people being treated for psychiatric ailments but the dominant one was linked to debates about informed consent. The 1959 Mental Health Act had not even mentioned consent to treatment. By the early 1970s, the Medical Defence Union endorsed the need for consent but there was a lack of clarity about how medical professionals were expected to ensure that their patients understood the nature, purpose and risks of treatments.27

By the 1983 Mental Health Act (England and Wales), however, consent was a prominent feature. Part of the credit for this change goes to the rise and growing influence of mental health charities and activist groups, including the National Association for Mental Health (now Mind). When it was established in 1946, Mind brought together professionals, relatives and volunteers to lobby for the improved care of children and adults with psychiatric problems. There was also an abundance of other user movements, including the Mental Patients’ Union, the Community Organisation for Psychiatric Emergencies, Protection for the Rights of Mental Patients in Treatment, Campaign Against Psychiatric Oppression, Survivors Speak Out and the United Kingdom Advocacy Network. Other groups, styling themselves (initially) as charities, included the National Autistic Society, which sought to draw the public’s attention to the specific needs of autistic children while also castigating the medical establishment for stigmatising them. From only a dozen such organisations in the 1980s, there were more than 500 by 2005, partly a consequence of the NHS and Community Care Act 1990.28 By banding together, the families of people designated ‘mentally ill’ could share information, provide encouragement and lobby for improved facilities.29 In 1970, a major battle was won when the government passed the Education (Handicapped Children) Act, which gave all children with disabilities the right to education.

From the 1970s, activists began drawing on the language of human rights to make their case. In contrast to parents’ movements, which tended to lobby for more and better biomedical research into the causes and cures of mental illness, an abundance of patient-led organisations sprang up. These groups repudiated the paternalism inherent in charity status, together with the implication that they needed to be ‘cured’. As patients, they insisted that they were the ones with the authority to adjudicate on the meaning of their situation and the appropriate responses (which may or may not include treatment). They emphasised health rather than illness.30 These patients’ movements sought to draw attention to the positive aspects of being ‘different’; they insisted on the complexity of their lives; and they reminded people of the contributions they made to society.31

Patients’ movements were encouraged by the rise of the Internet, which enabled people with mental health problems to communicate with each other far more easily. Mental health blogging greatly facilitated the development of therapeutic communities outside institutional ones. Unfortunately, the positive aspects of these forms of communication have also been exploited by unscrupulous commercial companies, quacks and advertisers.

In more recent decades, the ‘rights’ discourse has been supplanted by one emphasising individual ‘choice’. As the historian Alex Mold boldly states, patients ‘have been made into consumers’, a transformation in which the dominant language is that of choice and autonomy.32 The Labour government of 1997 to 2010 transformed the NHS into a ‘market’, which has unfortunately exacerbated inequalities based on race, region, class, gender and age (see also Chapters 3, 10 and 11). Mold concludes that ‘It is difficult to see how patient-consumers can overcome completely the power imbalance with health professionals’ and that ‘the tension between individual demands and collective needs also persists’.33

Diagnostic Nomenclature

The fourth shift involves the fabrication as well as demolition of diagnostic categories. These processes provide important insights into how knowledge is created, spread, consolidated and shrunk (see also Chapter 17).

There have always been dramatic shifts in medical nomenclature, the most written about being the rise of hysteria as a diagnostic category in the Victorian period and its subsequent fall from grace in the twentieth century, as documented by Mark Micale.34 In the period from the 1960s to 2010, the most notable change was the removal of homosexuality from the category of mental illness in 1973 (although transsexuality was introduced as an item of interest to psychiatrists).

Another newly introduced diagnostic category is attention deficit hyperactivity disorder. It has proved controversial, however, on the grounds that it pathologises children and leads to over-medicalisation. These critiques were linked to wider debates about the ‘medicalisation’ of everyday problems. Social anxiety disorder, for example, has been described as the medicalisation of shyness, giving rise to a ‘therapeutic state’ that psychologises every aspect of people’s lives and is inherently pathologising. There has also been the growth of transcultural psychiatry, which recognises that diagnostic categories, symptoms and measuring instruments might not be valid for the twenty-two different ethnic groups with populations of more than 100,000 living in the UK (see also Chapters 35 and 36).35

Evidence-Based Medicine

The fifth shift is more administrative but has had a major impact on patients and mental health professionals. This is the revolution in evidence-based medicine, with the standardisation of evaluation methods, as well as the formalisation of quality assurance, efficiency metrics, interdisciplinary teams and randomised controlled trials (see also Chapter 17).36 In the 1960s, drug trials had simply meant clinical observations. As one psychiatrist involved in early trials of drugs for schizophrenia recalled,

There was no blinding and no randomization. Informed consent was unnecessary. There were no institutional review boards. Initially, there were no rating scales, and results were reported in a narrative fashion.37

All this was swept away. As Ann Donald argues in an article in Culture, Medicine, and Psychiatry, one result was the introduction of ‘algorithms of care’ or ‘Wal-Marting’. Instead of individualised, personalised treatments, physicians turn to population-based databases. This ‘epistemological change results in the development of a clinical knowledge patterned along algorithmic pathways’, Donald explains, ‘rather than subjective understanding. An increased and more rapid rationalization of psychiatry is the result.’38

Psychopharmacology, Neurochemistry and Neurobiology

The final shift is one that has affected each of the five themes discussed in this chapter. Since 1900, psychiatric thought has been divided between what have (in very broad strokes) been called biomedical and psychosocial models (see also Chapters 4 and 17). Is psychiatric ill health primarily physiological or psychosocial? Of course, in reality, nearly everyone (except some anti-psychiatrists) agrees that the answer is ‘a bit of both’; the dispute is over the balance of impacts. From the 1960s, however, the neurochemical and neurobiological origins of mental illness were privileged over the psychodynamic and sociological. New pharmaceutical products such as anti-anxiety and anti-depressant compounds also revolutionised treatments. With ‘big pharma’ came the increased role of mental health insurance industries. Crucial to this shift was the synthesis of chlorpromazine (Largactil) in 1952, the ‘new wonder drug for psychotic illness’.39 As G. Tourney observed in 1968,

We are in the great age of psychopharmacology, in which industry has great stakes … The physician has become increasingly dependent on brochures from drug companies rather than formal scientific reports.40

Freudian therapies, in particular, were stripped from medical training as well as practice. In the words of one commentator, the last half of the twentieth century saw the ‘disappearance of the couch’; the Freudian fifty minutes of therapeutic time has been cut to only fifteen minutes of CBT.41

Psychopharmacology has been boosted by a massive investment of capital in neurology, other brain sciences and chemistry. The prominent neuropsychiatrist Henry A. Nasrallah argues that the ‘future of psychiatry is bright, even scintillating’. He maintains that ‘psychiatric practice will be transformed into a clinical neuroscience that will heal the mind by repairing the brain’. In particular, he mentions new technologies, including ‘cranial electrical stimulation (CES), deep brain stimulation (DBS), epidural cortical stimulation (ECS), focused ultrasound (FUS), low-field magnetic stimulation (LFMS), magnetic seizure therapy (MST), near infrared light therapy (NIR), and transcranial direct current stimulation (TDCS)’.42 What this will mean for patients is still unknown.

Conclusion

This chapter began with the patient Robairt Clough’s ditty, published in the 1972 Christmas edition of the patients’ magazine of Holywell Psychiatric Hospital in Antrim:

Little drops of medicine,
Little coloured pills,
Cure us of all ailments,
Cure us of all ills.

A year after Robairt Clough wrote these words, the Christmas edition of the magazine turned out to be the last. In-patient numbers at Holywell Psychiatric Hospital were in decline. Facilities were ‘generally cramped’, occupational therapy was ‘substandard’ and the entire health service was being restructured.43 With deinstitutionalisation, ‘communities of suffering’ moved elsewhere; diagnostic categories expanded dramatically with the growth of the therapeutic state; anti-psychiatry morphed into patient activism; and those ‘little coloured pills’ continued to be popped into the open mouths of patients like Robairt Clough who waited for ‘the day we’re free’.

Key Summary Points
  • This chapter draws attention to social encounters between medical professionals and patients.

  • Deinstitutionalisation, changes in diagnostic nomenclature, anti-psychiatry, patients’ movements, evidence-based medicine and advances in psychopharmacology, neurochemistry and neurobiology were among the main shifts in encounters between professionals and patients from the 1960s to the 2010s.

  • Discipline, power and social construction have been prominent in the policy and politics of psychiatry.

  • Deinstitutionalisation is perhaps the most important social experiment of the last century.

  • The mental health needs of older people, the treatment of homosexuality and informed consent are among the major themes underlined during the fifty years under consideration.

Chapter 2 The International Context

Edward Shorter
Introduction

There is a distinctive British tradition of psychiatry that goes back to the earliest days, embodied in such institutions as the York Retreat. What is the definition of ‘insanity’? asked James Sims in 1799. It is, he said, ‘the thinking, and therefore speaking and acting differently from the bulk of mankind, where that difference does not arise from superior knowledge, ignorance, or prejudice’.1 It is a definition that has never been surpassed.

The late 1950s and early 1960s were a period of dynamic change in British psychiatry. The Mental Health Act of 1959 saw a vast expansion of outpatient care and made it possible for clinicians to admit patients without the intervention of a magistrate. Eliot Slater, editor-in-chief of the British Journal of Psychiatry and head psychiatrist at the National Hospital at Queen Square, London, said in 1963, ‘Rehabilitation and new treatments are already reducing the bed numbers throughout the country, allowing many of the old “asylums” built in the last century to close within the next 10 to 15 years.’2

Michael Shepherd stated in 1965, ‘The term “social psychiatry” has come to designate a distinctly British contribution, much as “psychodynamic psychiatry” has characterized American thinking.’ Shepherd flagged such British innovations as the ‘open-door’ system, the ‘therapeutic community’ and the emphasis on rehabilitation.3

Seen in international perspective, British psychiatry is best set against the American. The fundamental difference is this: American psychiatry is for those who can afford it, unless they arrive through forensic referrals. Maxwell Jones, instrumental in creating the ‘therapeutic community’, commented in 1963, after a guest stay at a mental hospital in Oregon, ‘About 50 million people in the United States have no health insurance whatever, mostly because they cannot afford it.’4 In Britain, psychiatry, of course, is funded by the state.

Equally crucial in the United States has been the evolution of psychopharmacology. In pharmacotherapeutics, the palette went from a supply of genuinely effective agents around 1960 to a limited handful of drugs around 2010 that are either of disputed efficacy, such as the selective serotonin reuptake inhibitor (SSRI) ‘antidepressants’, or toxic when used inappropriately or in children and the elderly, such as the so-called atypical antipsychotics (or second-generation antipsychotics, SGAs). In diagnosis, US psychiatry went from an eclectic group of indications that had accumulated over the years to the ‘consensus-based’ system of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The result was that US psychiatry in 2010 was scientifically in a much more parlous state than it had been in 1960.

Travellers between the UK and the United States

In the mid-1970s, the American psychiatrist Jay Amsterdam, later head of the depression unit at the University of Pennsylvania, spent a year training at the Maudsley Hospital in London. He said, ‘Back then, my Maudsley teachers called their diagnostic process a “phenomenological” approach, and rarely seemed to have difficulty distinguishing folks with true, melancholic, biological, cognitive, physical depression from other types of depression. In London, it all seemed so obvious and “Schneiderian” to me, it just seemed like the correct way to diagnose physical from mental disorders.’ What a shock when he re-entered the world of psychoanalysis in the United States. ‘It seemed so, well, undisciplined to me, compared to my days at the Maudsley!’5

Yet the traffic went both ways. In 1955–6, Michael Shepherd, a senior psychiatrist at the Maudsley Hospital, spent a year travelling across the United States. He was overcome by the differences from British psychiatry. The American world lived and breathed psychoanalysis. ‘In the U.S.A. a remarkable attempt has been made in many centres to inject the whole system, python-like, into the body of academic opinion.’ By contrast, ‘In Great Britain psychoanalysis has been in contact with, rather than part of, academic psychiatry; its concepts have been transmitted through a semi-permeable membrane of critical examination and testing, and the rate of absorption has been slow.’6 This was a gracious way of putting the substantial rejection of psychoanalysis on the part of British psychiatry. Medical historian Roy Porter concluded, ‘The British medical community as a whole long remained extremely guarded towards psychoanalysis.’7

In 1980, David Goldberg contrasted typical hospital visits to a psychiatrist in Britain and the United States. In the latter, ‘He will wait for his interview in a comfortably furnished waiting area usually complete with armchairs, fitted carpets, and luxurious potted plants. He will be interviewed by an unhurried psychiatrist who will be sitting in an office which looks as little like a hospital clinic as he can make it.’ Eighty-six per cent of such patients will receive psychotherapy; drugs are prescribed in only 25 per cent of cases. In Britain, by contrast, the patient will be interviewed in an office ‘which looks most decidedly like a hospital clinic. He is relatively more likely to be physically examined and then to have blood tests and X-rays.’ About 70 per cent of British psychiatry patients will receive drugs.8

In retrospect, it is hard for British clinicians to imagine the hold which psychoanalysis once exercised on US psychiatry. The psychoanalyst, or ‘my shrink’, became the standard go-to figure for any mental issue. The parents of Jason, age eight, feared that he might be gay and took him to the ‘psychoanalyst’, where he remained in treatment for his purported homosexuality for four years.9 These were all private practitioners.

Shepherd noted with wonder the large amounts of funding available to psychiatric research in the United States. Yet the National Institute of Mental Health (NIMH) had barely opened its doors, and shortly enormous amounts of money would start sloshing across academic psychiatry in the United States. British psychiatrists comforted themselves with their beggar’s pittance. Aubrey Lewis said, ‘To buy a little piece of apparatus costing £5 was a matter which one had to discuss at length and go to the highest authority in order to get approval.’10

Finally, Shepherd found curious the American fixation upon ‘mental health’, which seemed more a hygienic than a medical concept. The British, at that point, felt more comfortable with the notion of mental pathology than mental health, and even though Aubrey Lewis preached the ethics and sociology of social and community psychiatry, in Britain the study of psychopathology had a high priority.11

Jumping ahead fifty years, in understanding ‘the British mental health services at the beginning of the twenty-first century’, certain perspectives have been lacking: ‘The scope and rapidity of change has left many developments in social policy, legislation, medico-legal practice, service design, service delivery and clinical practice without systematic historical analysis.’12 In studying the history of psychiatry in Britain from 1960 to 2010, a most interesting question is, to what extent did the British escape the American disaster?

Diagnosis

At the beginning of the period, there were striking international differences in diagnosis. As Swiss psychiatrist Henri Ellenberger noted in 1955,

The English call almost any kind of emotional trouble ‘neurosis’. The French apply the diagnosis of feeble-mindedness very liberally; in Switzerland we demand much more serious proof before using it … Child schizophrenia is a rare diagnosis in Europe, but a rather frequent one in America; Americans diagnose schizophrenia in almost all those cases of children whom we would call ‘pseudo-debiles’.13

Yet as early as the 1960s, psychiatry in Britain was alive with innovative thinking about diagnosis. A number of systems were in play, and in 1959 the émigré psychiatrist Erwin Stengel, in Sheffield, classified them.14 It was a summing-up of the richness of the international offering. Michael Shepherd led efforts to foster an ‘experimental approach to psychiatric diagnosis’. In 1968, the Ministry of Health published a ‘glossary of mental disorders’ that presented in concrete terms the nosology of the eighth edition of the World Health Organization’s (WHO) International Classification of Diseases (and made apparent how inadequate the diagnoses of DSM-II were).15

This innovation came to an end with the American Psychiatric Association’s publication of the third edition of the DSM in 1980, which erected gigantic monolithic diagnoses such as ‘major depression’ and, in later editions, ‘bipolar disorder’, while retaining the hoary, age-old ‘schizophrenia’. The Americans soon came to dominate the world diagnostic scene with DSM-III. It was an extraordinary demonstration of the prescriptive power of American psychiatry that a consensus-based (not a science-based) nosology such as the DSM could triumph over all these other systems.

There has always been an academic tradition in England of distrust of abstract diagnostic concepts, such as manic depression, in favour of the clinically concrete. This stands in contrast to the Americans’ initial fixation on psychoanalysis, which they then re-exported back to Europe. The psychoanalysis vogue was followed by the wholesale US plunge into psychopharmacology. The British were resistant to both trends. In 1964, E. Beresford Davies, in Cambridge, urged colleagues to abandon disease thinking in favour of simply noting symptoms and their response to treatment.16

Cautiousness in the face of novelty can spill over into a stubborn resistance to innovation. Catatonia, for example, has for decades ceased to be considered a subtype of schizophrenia;17 yet the Maudsley Prescribing Guidelines, in the 2016 edition, continue to include catatonia in the chapter on schizophrenia. Melancholia, a diagnosis that goes back to the ancients and has experienced a recent revival, is not even mentioned in the index.18

Other distinctively British approaches have also battled to preserve themselves. One was, in contrast to the DSM tendency to treat the clinical picture as the diagnosis, a British reluctance to leap directly from current presentation to diagnostic determination. As Felix Brown, a child psychiatrist in London, pointed out in 1965, ‘I have seen many patients who have given an early history of neurosis and they have appeared twenty years later with a really severe depression.’ Syndromes, he said, were valuable: ‘But I do not believe that they necessarily represent disease entities, especially as the syndromes vary at different times in the lives of the patients.’19

Epidemiology

Lest it be thought that Great Britain was limping along behind some mighty US powerhouse, there were areas where the US NIMH squandered hundreds of millions of dollars, such as the modestly helpful at best trials of SSRI ‘antidepressants’ in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study. In contrast, at the same time and on much smaller amounts of funding, British investigators made significant progress in areas that mattered in patient care, such as the extent of psychiatric morbidity in general practice. In 1970, David Goldberg and Barry Blackwell used the General Health Questionnaire to assess some 553 consecutive attenders at a general practitioner’s surgery. They found that 20 per cent of these patients had previously undiagnosed psychiatric morbidity. The study became an international epidemiological landmark.20

Goldberg was among Britain’s leading psychiatric epidemiologists, but here there was a deep bench. Myrna Weissman, a social worker turned leading US psychiatric epidemiologist, commented from her ringside seat, ‘The UK led in psychiatric epidemiology. In America we didn’t think you could make diagnoses in the community … The leaders in the field were all English. There was John Wing, who developed the Present State Examination, Norman Kreitman and Michael Shepherd. These were giants in the field.’21

Therapeutics

At a certain point, the therapeutics baton was passed from the UK and France to the United States. In the 1960s and 1970s, the Americans were still enmeshed in psychoanalysis and the English had a more or less free run internationally. Malcolm Lader, a psychopharmacologist at the Institute of Psychiatry, said, ‘The United States was not interested in drugs. I was lucky, we had almost a 30-year clear run in the 1960s and 1970s when the Americans were not doing much psychopharmacology. It was only then that they finally gave up their flirtation with psychoanalysis and moved into psychopharmacology, and of course with their resources they’ve swamped the subject.’22

There is one other area of psychopharmacology where there seem to be, alas, few international differences and that is the influence of the pharmaceutical industry over education and practice in psychiatry (see also Chapter 17). One would not be far afield in speaking of the ‘invasion’ of psychiatry by Pharma, given the companies’ influence over Continuing Medical Education and over ‘satellite’ sessions at academic meetings. Joanna Moncrieff deplored industry’s influence on both sides of the Atlantic, on the grounds that it led to over-biologizing psychiatric theory and over-prescribing psychotropic drugs. Moncrieff concluded, not unjustly, that ‘Psychiatric practice is now firmly centred around drug treatment, and millions of other people, who have no contact with a psychiatrist, are receiving psychotropic drugs in general practice.’23

Yet in terms of the classes of psychotropic medications prescribed, there are international differences, and there have been major changes over the fifty-year period. The conservatism of British clinicians with regard to new diagnoses extended to new medications. In contrast to Germany, where prescriptions of new drugs amounted to 29 per cent of total prescriptions in 1990, in the UK the figure was about 5 per cent (down from 10 per cent in 1975).24 (Yet English clinicians were not actually shy about prescribing psychotropic medication; as one observer noted, ‘In 1971, in order to make patients feel happy, keep calm, sleep or slim, about 3,000,000,000 tablets or capsules of psychotropic drugs were prescribed by general practitioners in England and Wales.’)25

In the central nervous system (CNS) area, there was once quite a bit of divergence between the UK and the United States. In the years 1970–88, only 39.4 per cent of drugs were introduced in both countries, one of the lowest overlap figures for any therapeutic class.26 Later, this divergence narrowed as the pharmaceutical industry became more international.

The successive Mental Health Acts of 1983 and 2007 in England largely addressed psychiatry’s custodial and coercive functions and will not be considered here, except to note that the 2007 Act stipulated that electroconvulsive therapy (ECT) must not be administered to patients who have the capacity to refuse consent to it, unless it is required as an emergency. This continued the stigmatisation of convulsive therapy that has governed British psychiatry over the years: the emphasis falls on when you must not use it. In the United States, by contrast, ECT legislation, if any, is decreed at the state level, and there the tendency has been towards a growing acceptance of ECT as the most powerful treatment that psychiatry has on offer. A NICE guidance in 2003 grudgingly conceded that ECT might be useful in certain circumstances (after ‘all other alternatives had been exhausted’) but insisted that its use must not be increased above current levels (imposing a ceiling on it, in other words). Of maintenance ECT there was to be no question.27 (An update in 2009 moderated this forbidding approach only slightly.) These recommendations contravened international trends in this area.28

Research

Before the Second World War, with the exception of the psychiatry unit at the National Hospital in Queen Square and perhaps the Maudsley, there was virtually no psychiatric research in England. Even at the Maudsley, Aubrey Lewis, who declared a pronounced interest in social rehabilitation, kept aloof from drug research. The phrase ‘Maudsley psychiatry’ meant social psychiatry, epidemiology and statistical methods.29 It did not refer to a special approach to clinical care or pharmacotherapeutics.30

The first real step forward in psychiatric research came after Eliot Slater’s arrival in 1931 at Queen Square, where he was soon joined by several eminent émigré German psychiatrists of international reputation. Otherwise, the British psychiatric research scene was silent. (However fabulous Aubrey Lewis might have been as a teacher at the Maudsley, he was not a researcher, and his famous paper on the unitary nature of depression got the story exactly wrong.)31

The basic medical sciences were, however, another story, and the history of British psychopharmacology must be seen in the context of a long history of interest in neurophysiology at Oxford and Cambridge. The work of Charles Sherrington, Edgar Adrian and Henry Dale attracted worldwide attention. Derek Richter discovered monoamine oxidase at Cambridge, and John Gaddum at Edinburgh thought that serotonin might play a role in mood regulation.32

The big British leaps in this area were achieved at Oxford and Cambridge but also at the established ‘red-brick’ universities in Birmingham, Manchester and Liverpool, as well as in London. In 1954, Charmian Elkes and Joel Elkes, in the Department of Experimental Psychiatry at Birmingham – the world’s first dedicated laboratory for research in psychopharmacology – reported the first randomised controlled trial of chlorpromazine.33 In 1970, Hannah Steinberg became Professor of Psychopharmacology at University College London, the first woman in the world to occupy such a chair. She pioneered research on the effect of drug combinations on the second-messenger system in the brain.34 The work of Malcolm Lader at the Institute of Psychiatry, Martin Roth at Newcastle upon Tyne and Eugene Paykel, Professor of Psychiatry at Cambridge, helped lay the foundations of clinical psychopharmacology. The efforts of Alec Coppen, Max Hamilton and others led to the foundation of the British Association for Psychopharmacology in 1974.

In sum, British contributions to drug discovery and development in CNS were immense. As a joint government–industry task force reported in 2001, ‘Companies based here maintain a significant presence in all the major markets in the world and the UK has consistently “punched well above its weight” since the 1940s … In terms of overall competitiveness, the UK is second only to the US and well ahead of its main European competitors.’35

Deinstitutionalisation and Community Care

Among the most dramatic international differences is that, in Britain, services were transferred from the asylum to local hospitals and, in the United States, from the asylum to the prison system (see also Chapters 1, 23, 30).

In Britain, the locus of care was shifting from the old mental hospital ‘bins’ to general hospitals. This process began before the Second World War, as the Maudsley set up clinics in three big London hospitals.36 It continued with the District General Hospitals created by Enoch Powell’s ‘Hospital Plan’ of 1962. No longer mere waystations before transfer to the asylum, the general hospital departments provided comprehensive care. An early innovator here in the 1960s was Queen’s Park Hospital in Blackburn, a borough in Lancashire. Its 100 beds were divided into three sections: emergency, ambulant-chronic and acute. The acute section was divided between male and female. All sections were ‘open’, and the hospital provided lunch for the emergency and ambulant-chronic patients. Maurice Silverman, the consultant psychiatrist, noted with some pride in 1961, ‘This has been largely due to the pioneering work of the Regional Hospital Board in forging ahead with comprehensive psychiatric units in general hospital centres throughout the region.’37 In contrast to Britain, in the United States ‘The overall proportion of the population with mental disorders in correctional facilities and hospitals together is about the same as 50 years ago. Then, however, 75% of that population were in mental hospitals and 25% incarcerated; now, it is 5% in mental hospitals and 95% incarcerated’.38

The Psychiatrist of the Past versus the Psychiatrist of the Future

It is training that stamps the clinician’s whole mindset, and here the change in training objectives has been dramatic and the resulting UK/US differences wide.

A UK survey in the mid-1960s established the teachers’ principal learning objectives for their students: a ‘scientific attitude regarding behavior’, ‘factual knowledge about psychiatric illnesses’ and ‘treatment skills’. Within the category of scientific study of behaviour, for example, the lecturers were asked to rank various attainments. ‘Students must be taught psychopharmacology and neurophysiology as an essential part of their psychiatric training’ was ranked first; systematic history-taking and ‘methodical examination of the mental state’ came second; and learning the ‘importance of diagnosis and systematic classification for effective practice of psychological medicine’ came third.39

Now, no training programme would ever ignore any of these objectives, but we see how the goals sought for trainees have changed today. In 2008, one observer at the Institute of Psychiatry listed the attributes of the future psychiatrist.40 It was a list that would have made Aubrey Lewis, the earlier professor and himself an advocate of community care, blink:

  • ‘Working in partnership’ – this was long thought to have been self-understood.

  • ‘Respecting diversity’ – diversity was a newcomer to the list.

  • ‘Challenging inequality’ – this gave psychiatry a decided political spin.

  • ‘Providing patient-centred care’ – Aubrey Lewis, rightly or wrongly, believed that he and his colleagues at the Maudsley offered such care, in that competent treatment in the community was inevitably ‘patient-centred’; yet it was the physician’s intuitive sense of ethics, values and professional responsibility that decided what was patient-centred, not a climate of opinion that demanded it.

On the other side of the coin, we have surveys of the reasons students don’t choose psychiatry. One survey of the literature concluded, ‘The major factors that appeared to dissuade medical students/trainees from pursuing psychiatry as a career included: an apparent lack of scientific basis of psychiatry and work not being clinical enough, perception that psychiatry is more concerned about social issues.’41 In Aubrey Lewis’s generation, there were complaints that psychiatry was too medical, with its insistence on seeing illness as brain disease (treatable with electroshock, phenothiazines and tricyclic antidepressants); in the current generation, there are complaints that psychiatry is not medical enough, with the field viewed as an extended arm of social work.

In the 1960s, the term ‘diversity’ was as yet on no one’s lips, but concern was already stirring about the low number of women in psychiatric training. One tabulation showed the percentage of female trainees as low as zero at the Welsh National School of Medicine, 0.3 per cent at St Thomas’s Hospital and 0.4 per cent at Leeds. (The maximum was 2 per cent at Bristol.)42

The putative deterioration in NHS services and decline in psychiatric care became a major theme. Brian Cooper wrote in 2010, ‘British psychiatry, it appears, flourished as long as the NHS remained secure and in good hands, but then, despite ongoing scientific progress, it has gone into decline since the national service infrastructure began to disintegrate under sustained political pressures.’43

Interestingly, US training has developed in a quite different direction. Rather than emphasising a progressive agenda, as in Britain, the accent in the United States has been on ‘professionalism’, not necessarily as a humanitarian objective but as a defensive posture. Several observers at Emory University wrote in 2009, ‘Most medical educators would agree that learning how to deliver care in a professional manner is as necessary as learning the core scientific data.’ Here, the themes of diversity, equality and partnership are completely absent, although, if queried, the authors might well have agreed on their importance. What was really on their minds, however, was that ‘Patient complaints, malpractice lawsuits, and media stories that depict the inequity and high costs in the U.S. health care system’ were the real agenda.44

Conclusion

Roy Porter noted that, ‘Deep irony attends the development of psychiatry within twentieth-century British society. The public became more receptive towards the fields of psychiatry and psychology.’ The asylum was dismantled; new modes of care more congenial to ‘service-users’ were conceived. ‘Yet the longer-term consequence’, continued Porter, ‘has not been growing acclaim, but a resurgence of suspicion.’45 The challenge of the future will be demonstrating that psychiatry really is capable of making a difference in people’s lives.

Key Summary Points
  • British psychiatry is almost entirely publicly funded; in the United States, a tradition of well-remunerated private practice has prevailed.

  • Despite similar therapeutics and nosology, psychiatry in Britain and the United States has developed in strikingly different ways.

  • Psychoanalysis once dominated US psychiatry; in a big swing of the pendulum, it has been almost entirely replaced by psychopharmacology.

  • In Britain, the research tradition in the past was weak; in the United States, it has been fuelled by large amounts of government funding. A British hesitancy about embracing large abstract theories has no US counterpart.

  • In terms of training, a progressive agenda has been emphasised in Britain, more defensive postures in the United States.

Chapter 3 Liberty’s Command: Liberal Ideology, the Mixed Economy and the British Welfare State

Graham Scambler
Introduction

The immediate period after the Second World War and lasting until the 1960s was an unusual one for capitalism. It was characterised not only by steady economic growth, not to be matched since, but also by a cross-party political consensus on the desirability of a strong safety net in the form of a welfare state. The period has often been termed ‘welfare state capitalism’ by commentators. It was an era that came to a fairly abrupt end in the mid-1970s. If the oil crisis of that time provided a marker, the election of Margaret Thatcher in 1979 was to usher in a new phase of what we might call ‘financial capitalism’. In this chapter, I trace the origins and course of this transition up to and including the global financial crash of 2008–9. It was a transition that was as cultural as it was structural, leading to new and exacerbated forms of individualism and fragmentation as well as material and social inequality. The generalised commitment to the provision of state welfare gave way to a ubiquitous, near-global ideology of neoliberalism.

I start with a necessarily abbreviated consideration of the history of the British welfare state in general and the emergence of the National Health Service (NHS) in particular. The present, we need to remind ourselves, involves both past and future, the former being the present’s precursor, the latter its vision of what is to come.

Origins of the Welfare State and NHS

In fact, Britain was slow off the mark with welfare provision. The first direct engagement was via the National Health Insurance Act of 1911. Prompted by concerns about high rates of work absenteeism and lack of fitness for war duty among working men, this Act protected a segment of the male working class from the costs of sickness. It drew contributions from the state, employers and employees, and it entitled beneficiaries to free primary care by an approved panel doctor (a local GP) and to a sum to compensate for loss of earning power due to sickness. Better-paid employees, women, children and older people were excluded and had either to pay for primary and secondary health care on a fee-for-service basis or to resort to a limited and fragmented system of ‘public’ (state-funded) or ‘voluntary’ (charitable) care. The Act covered 27 per cent of the population in 1911, and this had only expanded to 45 per cent by the beginning of the Second World War.

Pressure, most notably from the expanding middle class, grew during the 1920s and 1930s to extend the reach of health care services. An overhaul eventually took place during the Second World War, evolving out of Beveridge’s (1942) painstaking blueprint for an all-out assault on the ‘five giants’ standing in the way of social progress: Want, Disease, Ignorance, Squalor and Idleness. In retrospect, it seems clear that this initiative marked the end of liberal capitalism, the consolidation and expansion of state intervention and the commencement of welfare state capitalism. The Beveridge Report incorporated plans for a National Health Service. The displacement of Churchill’s wartime government by Attlee’s Labour Party in 1945 not only realised the concept of a welfare state but also, following the National Health Service (NHS) Act of 1946 – a skilled piece of midwifery by Health Minister Aneurin Bevan – brought about the birth of the NHS in 1948. The NHS was based on the principles of collectivism, comprehensiveness, universalism and equality (to which should be added professional autonomy). The state was thereafter committed to offering primary and secondary care, free at the point of service, for anyone in need. These services were to be funded almost exclusively out of central taxation.

The 1946 Act was a compromise with history and the medical profession. GPs avoided what they saw as salaried control and became independent contractors paid capitation fees based on the number of patients on their books; the prestigious teaching hospitals won a substantial degree of autonomy; and GPs and, more significantly, hospital consultants won the ‘right’ to continue to treat patients privately. The survival of private practice has been judged important: ‘the NHS was weakened by the fact that the nation’s most wealthy and private citizens were not compelled to use it themselves and by the diluted commitment of those clinicians who provided treatment to them’.1

Evolution of the Welfare State and the NHS to 2010

It is not possible in this short contribution to do justice to the development of the post–Second World War welfare state in Britain, but a brief mention of changes in social security, housing and education is important. The terms ‘welfare’ and ‘social security’ are often treated as synonyms, and state interventions in welfare provision in Britain date back to the introduction of the Poor Law (and workhouses) in 1536. What is known as the ‘welfare state’, however, has its origins in the Beveridge Report. Beveridge recommended a national, compulsory flat-rate insurance that combined health care, unemployment and retirement benefits. This led to the passing of the National Assistance Act (1948), which saw the formal ending of the Poor Law; the National Insurance Act (1946); and the National Insurance (Industrial Injuries) Act (1946). There have been many changes to this legislative package since, most having the character of piecemeal social engineering, but there has been a growing tendency towards cutting welfare benefits in post-1970s financial capitalism. By 2010, expenditure on state pensions amounted to 45 per cent of the total welfare bill, with housing benefit coming a distant second at 11 per cent. Then came the decade of austerity and much more savage cuts.

As far as housing is concerned, the average home in 1960 cost £2,507, while by 2010 this had risen to £162,085. Incomes failed to keep up with property prices everywhere, although there were strong regional differences. The type of housing also changed dramatically. Between 1945 and 1964, 41 per cent of all properties built were semi-detached, but after 1980 this fell to 15 per cent. The number of bungalows also declined. Detached houses, however, which were 10 per cent of stock built between 1945 and 1964, accounted for 36 per cent of new builds after 1980. The peak year for house building was 1968. Private renting made a comeback after years of decline, reflecting a growth in buy-to-let investing, while home ownership slipped back from a peak of 70 per cent in 2004 to 68 per cent by 2010. Thatcher’s right-to-buy legislation meant that the proportion of people renting from their local council fell from 33 per cent in 1961 to 14 per cent in 2008.2
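To give these two prices a sense of scale, a rough illustrative calculation (an addition here, derived only from the figures quoted above, in nominal terms and ignoring inflation) shows roughly a sixty-five-fold increase, equivalent to a compound annual growth rate of about 8.7 per cent over the fifty years:

$$\left(\frac{162{,}085}{2{,}507}\right)^{1/50} - 1 \approx 0.087 \quad \text{(about 8.7 per cent per annum)}$$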

The Education Act of 1944 introduced a distinction between primary and secondary schooling and was followed by the introduction of the eleven-plus examination, which determined the type of secondary education a child received. One in four passed the eleven-plus and attended grammar schools (or, more rarely, technical grammar schools), while the remainder attended secondary modern schools and were typically destined to end up in manual jobs. The private sector continued, including the major ‘public schools’, and these institutions still tutor the political and social elite. In 1965, Crosland, in Wilson’s Labour Cabinet, brought in mixed-ability or ‘comprehensive’ state education, which expanded to become the national norm. This system remained largely in place until Thatcher’s Education Act of 1988, which emphasised ‘school choice’ and competition between state schools. In the years since, ‘successive governments have sought to reintroduce selection or selective processes under different guises’.3

The evolution of the NHS is covered more fully in other chapters of this volume, so brevity is in order. A political consensus on the ‘character’ of the NHS held steady through much of the era of welfare state capitalism. Moreover, its ‘tripartite’ structure – involving divisions between GP, hospital and local authority services – remained largely intact into the 1960s. By the close of that decade, however, the increasing number of people with long-term and disabling conditions in particular provoked calls for a more integrated as well as a more efficient service. The result in 1974 was a bureaucratic reorganisation of the NHS initiated by Heath’s Conservative government.

The recession of the 1970s saw the advent of financialised capitalism and a renewed focus on cost containment in health care. In its first year of stability in spending, 1950/1, the NHS had absorbed 4.1 per cent of gross domestic product (GDP); this percentage fell steadily to 3.5 per cent by the mid-1950s; by the mid-1960s, it had regained and passed the level of 1950/1; and by the mid-1970s, it had risen to 5.7 per cent of GDP (see also Chapter 11). In fact, total public expenditure as a percentage of GDP peaked in 1975, accounting for nearly half. The Wilson and Callaghan Labour administrations from 1974 to 1979 felt compelled to take steps to contain public expenditure, including that on the health service. Faced with the prospect of stagflation, Labour retreated from its traditional social-democratic stance, most notably with the beginnings of fiscal tightening announced in Healey’s 1975 budget. While the ‘centre-left technocratic agenda’ was not abandoned, impetus was certainly lost.4

When Thatcher was elected in 1979, she brought to office a set of convictions fully in tune with the idea that the welfare state was in crisis. She took advantage of the perception, noted by Galbraith (1992), that bureaucracy had long been more conspicuous in public than in private institutions.5 In 1983, she invited Griffiths (from the Sainsbury’s supermarket chain) to conduct an enquiry into NHS management structures (see also Chapter 12). The result was the end of consensus management and its replacement by a new hierarchy of managers on fixed-term contracts.

Against a background of a continuing political rhetoric of crises of expenditure and delivery, in 1988 Thatcher announced a comprehensive review of the NHS. A White Paper, Working for Patients, followed a year later. The notion of an internal market was the most significant aspect of the NHS and Community Care Act of 1990 that followed. I have argued that ‘it sat on a spectrum somewhere between a bureaucratic command and control economy and a private free market’.6 Key was the separation of ‘purchaser’ and ‘provider’ in what was described as ‘managed competition’ (see also Chapter 9).

It was Thatcher’s successor as Conservative leader, John Major, who introduced the Private Finance Initiative (PFI). This paved the way for the private sector to build, and own, hospitals and other health care facilities, which were then leased back to the NHS, often at exorbitant rents. This was a convenient arrangement for government since PFI building and refurbishment did not appear on government books: they represented an investment of private, not public, capital. Nevertheless, by the time of Major’s departure from office in 1997, expenditure on the NHS had topped 7 per cent of GDP.

Labour prime ministers Blair and Brown were in office up to the end of the period of special relevance to this volume. Both embraced PFIs, despite warnings that many trusts were destined to fall heavily into debt as a consequence. As Allyson Pollock predicted in 2005,7 the chickens would one day come home to roost. There was in fact considerable continuity between the Thatcher/Major and Blair/Brown regimes. Blair, too, saw the welfare state as encouraging dependency, adversely affecting self-esteem and undermining ambition and resolve. Labour’s ‘third way’ afforded cover for sticking with the Thatcher experiment.

In 2000, Blair announced that spending on the NHS would increase by 6.1 per cent annually in real terms over a four-year period. In the same year, ‘The NHS Plan’ was published. These moves showed a degree of continuity with the Thatcher project rather than a halting of or rowing back from it.

This brief sketch or timeline covers the period of relevance to this discussion, though it will be important to refer to changes to the NHS post-2010 in what follows. It is time now to turn to the nature of the underlying societal shifts that help us to understand and explain these NHS ‘reforms’ in the half-century from 1960 to 2010.

Parameters of Societal Change

Given the limited space available, it will be expedient here to identify and focus on select themes. The first of these might be termed the ‘financialisation of capitalism’. The decade from the mid-1960s to the mid-1970s saw a slow-burning transition from the relatively benign era of post–Second World War welfare state capitalism to a much harsher era of financial capitalism. If it was Thatcher, along with Reagan in the United States, who symbolised and was the principal political champion and beneficiary of this shift, it must be added that in doing so she was surfing much deeper social structures.

It was the American abrogation of Bretton Woods and the rise of the Eurodollar – which freed up money capital from national regulation by central banks – that marked the advent of financial capitalism. The international recession brought banks further and deeper into the global arena. Banks became internationalised and developed closer relations with transnational corporations. References to financialisation grew more common, summing up not only the phenomena of deregulation and internationalisation but also a shift in the distribution of profits from productive to money capital (accompanied by an increase in the external financing of industry). Industrial capital more and more resembled financial capital.

Pivotal for financial capitalism as it developed was a revision of the ‘class/command dynamic’.8 This refers to the relations between what I have termed the ‘capital executive’, namely that mix of financiers, rentiers and major shareholders and CEOs of largely transnational corporations that comprise today’s dominant capitalist class, and the political elite at the apex of the nation state. The key point here is that those who make up the capital executive in Britain are essentially global operators: they have been described as ‘nomads’ who no longer belong or identify their interests with their nation of origin. They, like their capital, can resituate at a rapid and alarming pace. The American historian David Landes once remarked that ‘men (sic) of wealth buy men of power’.9 What the class/command dynamic asserts, in a nutshell, is that they get more for their money during financial capitalism than they did during welfare state capitalism. This can be interpreted as follows: capital buys power to make policy. This is a critical insight for anyone wanting to understand and explain the ramping up of the assault on the principles and practices of the welfare state in general, and the NHS in particular, from Thatcher onwards.

A second theme concerns material and social inequality. Health inequalities are not simply a function of the nature of a health care system, important though this is. Rather, they reflect the distribution of material, social and cultural goods or assets in the population served.10 I have articulated this elsewhere in terms of ‘asset flows’,11 arguing that strong flows of biological, psychological, social, cultural, spatial, symbolic and, especially, material assets are conducive to good health and longevity, while weak flows are associated with poor health and premature death. Moreover, there tend to be strong and weak ‘clusters’ of asset flows. Having said this, compensation can occur across asset flows: there is evidence, for example, that a strong flow of social or cultural assets can compensate for a weak flow of material assets.

The transition to financial capitalism, characterised by its newly distinctive class/command dynamic, has witnessed growing levels of material and social inequalities, with elevated rates of health inequalities following closely in their wake. At the time of writing this chapter, this is being reflected in the specific patterning of the Covid-19 pandemic in the UK and elsewhere.12 It is not coincidental that attempts to ‘reform’ the NHS post-Thatcher have occurred alongside deepening material, social and health inequalities. The tacit model for these health care reforms, tentative at first but growing in conviction and potency post-2010, is the United States, where commercial interests predominate and yield rich returns. The putative ‘Americanisation’ of the NHS is very much on the agenda (to reiterate, capital buys power to make policy).

To push this point home, it is necessary for a moment to go beyond the timeline of this chapter. The 2010 General Election resulted in a Cameron–Clegg Conservative-led coalition government. Almost instantly, this government reneged on a pre-election promise not to initiate any further top-down reorganisations of the NHS. Health Secretary Lansley published a White Paper called Liberating the NHS a mere sixty days after the election, having consulted widely with private providers beforehand.13 This led to the Health and Social Care Act of 2012, a piece of legislation that opened the door for a root-and-branch privatisation of health care in England. There was considerable opposition to the passing of this Act from both inside and outside of the medical profession, but perhaps few realised its likely longer-term ramifications. A decade later this was clearer: what the Act made possible, namely a rapid privatisation of clinical and other services, was underway.14

In short, social processes of health care ‘reform’ that started around the beginning of the period under consideration here, 1960–2010, have gathered pace since and come to regressive fruition. It will be apparent that this statement has application beyond health care. It is pertinent to physical and mental health alike that ideological assaults on the welfare state have been major contributors to growing material and social inequalities. Like those on the NHS itself, these assaults have accelerated post-2010, culminating in years of political austerity and welfare cuts via devices like Universal Credit. In fact, social security payments in 2020 were proportionally the lowest since the formation of the welfare state back in the time of the Attlee government. Formal social care has been decimated. The advent of the Covid-19 pandemic in 2020 has exposed these properties of what has been called the ‘fractured society’.15

Cultural shifts emerged alongside structural social change through the years 1960 to 2010. In the arts, humanities and social sciences, these were sometimes characterised as ‘postmodern’. One aspect was certainly the foregrounding of individualism, which fed into political and economic ideologies of personal responsibility: remember Thatcher’s insistence that ‘there is no such thing as society’. Another aspect of the cultural shift has been the ‘postmodernisation’ or ‘relativisation’ of culture itself. The French theorist Lyotard put it well when he argued that grand narratives had given way to a multitude of petit narratives.16 What he meant was that overarching philosophies or theories of history or progress, or visions or blueprints of the good society, had been seen to fail and had consequently been abandoned. Now people had been emancipated: they were free to choose their own identities, projects and futures as discrete individuals. New identity politics had displaced the old politics of distribution associated with welfare statism.

While some commentators celebrated this newfound freedom, others labelled it a form of neoconservatism. Habermas, for example, maintained that the announcement of the death of the grand narrative was not only philosophically premature but also politically convenient.17 After all, it followed that no rationally compelling case might now be made for challenging the (conservative) status quo.

The right of the individual to choose – their identities, orientations and practices – has become firmly established in the culture of financial capitalism. It is a major theme running through accounts of ‘neoliberal epidemics’.18 If individuals can be presumed responsible for their behaviour, then they can be held culpable for any medically defined conditions, physical or mental, that can be associated with lifestyle choices. If, for example, obesity is causally linked to diabetes and heart disease, and possibly Covid-19, too, then the obese must surely accept some personal responsibility for indulging in ‘risk behaviours’. The point here is a political one: it is not that individuals are not responsible for their health but rather that (1) their health and their behaviours are also a function of, often inherited, circumstance and (2) a governmental emphasis on risk behaviours allows for cutbacks in spending and support. Furthermore, given that during the fifty years under consideration here the implicit rationing of health care services has transmuted into explicit rationing, it is only reasonable, so the argument goes, that ‘behavioural conditionality’ be factored into decisions about priorities for treatment and care.

This argument has potency beyond health and health care. The political contraction of the welfare state as a whole, together with the spread of ‘precarity’ in employment via zero-hours contracts and the undermining of work conditions, sick pay and pensions, has been facilitated by a recasting of personal responsibility.19 Distinguishing between ‘stigma’, referring to infringements against norms of shame, and ‘deviance’, denoting infringements against norms of blame, I refer to a stigma/deviance dynamic and maintain that ‘blame has been heaped upon shame’ in the era of neoliberalism. What I mean by this is that citizens are now being held responsible (blamed) for what was previously regarded as non-conformance rather than non-compliance with cultural prescriptions. Thus, disabled people are now treated as if their condition is in some way their fault, and similarly with many departures from mental health. If blame can be effectively appended to shame, the thesis suggests, then people are rendered ‘abject’, permitting governmental sanctions, even punishments, without public protest. Disabled people have been among those hit hardest by welfare cuts enabled by the calculated political ‘weaponising of stigma’. If these processes have only become tangible post-2010, their DNA establishes their origins in Thatcher’s 1980s.

A Welfare and Health Care System Unravelling

The years from 1960 to 2010 reflect major social change. During that period, the exceptionally benign phase of post–Second World War welfare state capitalism came to an end, to be succeeded by a much harsher regime of financial capitalism. While the New Labour years of 1997 to 2010 to some degree saw a stalling of the deindustrialisation, financial deregulation and programmes of privatisation of the Thatcher/Major years, and a corresponding decrease in the rate of growth of material inequality, this was no abandonment of neoliberalism.

Financial capitalism has witnessed an accelerating rate of mental as well as physical health problems in line with the fracturing of society. Health inequalities already entrenched by the 1960s have since expanded.20 This was especially true in the 1980s, when Thatcher’s policies of state-enforced neoliberal individualism led to a surge in rates of morbidity and premature mortality among poor segments of the population.

The period 1960–2010 set the scene for what many at the time of writing (2020) see as a severe crisis in welfare and health care. The years of austerity, coupled with a sustained political effort to get citizens to obey the capitalist ‘imperative to work’ as well as to privatise as wide a range of public sector services as possible, have precipitated an ‘Americanisation’ of British society. Cultural cover has been provided for these policies by a populist rhetoric of individualism and ‘freedom of choice’. Scant regard has been paid to the distinction between ‘formal’ and ‘actual’ freedom, in other words between what people are formally free to do (e.g. buy their own house or send their children to fee-paying schools) and what they are actually free to do (i.e. in the absence of the requisite capital and/or a reasonable income).

Conclusion

Starting with a highly abbreviated chronology of the evolution of the welfare state, this chapter has gone on to discuss core structural and cultural mechanisms that have shaped the policy shifts that have occurred, concentrating on the period 1960–2010. The case has been made that policy shifts are often functions of deeper social processes. It is in this context that the class/command and stigma/deviance dynamics have been explored. Discourses, too, typically have ideological components that reflect structural and cultural dynamics. This complicates simple historical chronologies of social institutions like the NHS, as it does debates about improving welfare support and the delivery of good health care. Not infrequently policy-based evidence is substituted for evidence-based policy. Another level of complexity has been added of late, which is largely cultural. This was apparent by 2010 and has been characterised in this chapter as a relativisation of perspectives and modes of thinking. Progeny of this tendency include present analyses of ‘post-truth’ and ‘fake news’, linked to but trespassing beyond social media, each rendering rational judgements based on available evidence harder both to make and to evaluate.

Key Summary Points
  • The decade from the mid-1960s to the mid-1970s saw a slow-burning transition from the relatively benign era of post–Second World War welfare state capitalism to a much harsher era of financial capitalism. Scant regard has been paid to the distinction between ‘formal’ and ‘actual’ freedom.

  • The recession of the 1970s saw the advent of financialised capitalism and a renewed focus on cost containment in health care. At the same time, new identity politics had displaced the old politics of distribution associated with the welfare state. The political contraction of the welfare state as a whole, together with the spread of ‘precarity’ in employment via zero-hours contracts and the undermining of work conditions, sick pay and pensions, has been facilitated by a recasting of personal responsibility.

  • Strong flows of biological, psychological, social, cultural, spatial, symbolic and, especially, material assets are conducive to good health and longevity, while weak flows are associated with poor health and premature death.

  • Ideological assaults on the welfare state have been major contributors to growing material and social inequalities. Financial capitalism has witnessed an accelerating rate of mental as well as physical health problems in line with the fracturing of society.

  • The period 1960–2010 set the scene for what many at the time of writing (2020) see as a severe crisis in welfare and health care.

Chapter 4 Social Theory, Psychiatry and Mental Health Services

Rob Poole and Catherine Robinson
Introduction

This chapter describes the development of social concepts within psychiatry and mental health services between 1960 and 2010 and the impact of the new ideas developed within the social sciences at the time. Concepts and movements considered include deinstitutionalisation; therapeutic communities; Community Mental Health Teams (CMHTs); social constructionism; labelling theory; social functionalism; paradigm shift; stigma; the service user movement; and the social determinants of mental health. There was tension between new postmodernist ideas and the positivist-scientific model that underpinned both the social psychiatry of the period and confident, and ultimately hubristic, advocacy of the primacy of neuroscience in psychiatry during ‘the Decade of the Brain’. Although some new ideas were eventually assimilated by psychiatry, the tension was unresolved in 2010.

Social Thinking in Psychiatry in 1960

In 1960, the social perspective was prominent within British mental health services. A subdiscipline of social psychiatry had been forming for some time and many of its enduring themes were already evident. British psychiatry had needed to change rapidly as a consequence of the social, political and economic impact of the Second World War and its aftermath. Mental hospitals, which previously had been the responsibility of local authorities, were absorbed into the new National Health Service (NHS) in 1948 and deinstitutionalisation commenced almost immediately, alongside changes in organisation, staffing and attitudes to treatment.

During the Second World War, psychological reactions to combat were regarded as a medical problem rather than as a matter of military discipline. A relatively small pool of psychiatrists was called upon to treat large numbers of service personnel suffering from ‘battle fatigue’ or ‘effort syndrome’, leading to pragmatic experimentation with group treatments. Necessity proved to be a virtue and new group-based social therapeutic modalities followed. At the Maudsley Hospital, Maxwell Jones developed the idea that the entire experience of living together could be therapeutic and, at Northfield Military Hospital, Tom Main coined the term ‘therapeutic community’, which became a banner under which many later reforms were made to inpatient care (sometimes less stridently labelled the ‘therapeutic milieu’).

There was a growing belief that, with support, people with chronic psychosis could have better lives in the community. There was optimism about new biomedical treatments, such as antipsychotic and antidepressant drugs, and electroconvulsive treatment. A degree of therapeutic heroism meant that there were some awful therapeutic mistakes too, such as deep sleep therapy (continuous narcosis) and insulin shock, which, when properly evaluated, were found to be dangerous and ineffective. Nonetheless, at the time there seemed to be a realistic possibility that NHS psychiatrists would soon be able to work from day hospitals (developed by Joshua Bierer at the Marlborough Day Hospital in London) and outpatient clinics, avoiding mental hospital admission altogether.

Social psychiatry was also ascendant in academia under the pervasive influence of Sir Aubrey Lewis at the Maudsley Hospital. Lewis was a social psychiatrist who had undertaken early anthropological research among Aboriginal Australians. He was influenced by Adolf Meyer’s work in Baltimore (see also Chapter 2). The Institute of Psychiatry was formed at the Maudsley in 1946 under his leadership and in 1948 he became the first director of the Medical Research Council (MRC) Social Psychiatry Unit there. He retired in 1966, but his influence persisted long after he had gone, as did his brand of social psychiatry.

As the 1960s started, and for many years thereafter, organised British medicine, including social psychiatry, followed a theoretical model that went back to scientific medicine’s Enlightenment origins. It was based upon the belief that positivism, reductionism and empiricism were the most powerful and meaningful ways of understanding mental disorders and that science itself was intrinsically subject to continuous progress. Fundamental causes of mental disorders were believed to be biological or psychological but social factors were recognised to influence their expression, course and outcome. Social psychiatry research mainly concerned itself with quantifying the impact of social environment and social interventions on mental illnesses, without challenging the fundamental assumptions of what came to be labelled ‘the medical model’. Classic studies of the time (e.g. Wing and Brown’s Three Hospitals study)1 exemplified social psychiatry’s research approach; patients’ symptoms and their social environment were assessed using operationalised criteria, the beginning of a long tradition of quantification through the use of symptom and social interaction rating scales.

A seminal 1963 study by Goldberg and Morrison addressed the possibility that social adversity might cause psychosis.2 It appeared to convincingly demonstrate that people with a diagnosis of schizophrenia drifted down the hierarchy of social class after they became ill, while unaffected family members did not. For many years, this ‘social drift’ was taken to account for known differences in prevalence between prosperous and deprived areas. It was not until the mid-1990s that new research methods started to shift the balance of evidence by showing that a variety of childhood adversities consistently increased the risk of adult psychosis. Psychiatry’s resistance to the idea that social adversity might cause mental illness was such that, even in 2010, there was little sign that British psychiatrists were changing their thinking in response to the implications of newer research findings.

The positivist but eclectic scientific stance of mainstream British psychiatry meant that it readily adopted the biopsychosocial model proposed by Engel in the late 1970s and a version of it remained the explicit stance of organised British psychiatry until 2010 and beyond.3 Although a broad church of scientific and clinical orientations flourished within British psychiatry, psychiatrists remained highly protective of their status as leaders of mental health services and of research. They encouraged growth and development in mental health nursing, social work and clinical psychology, but they were insistent that their own profession was uniquely equipped to be in charge.4 A reluctance to take on new ideas about relative professional standing eventually weakened psychiatry’s position when, from 1979 onwards, neoliberal politicians increasingly forced change upon it.

Pilgrim and Rogers have suggested that, in 1960, psychiatry and sociology were in alliance with each other, using empiricism to understand the impact of social context on mental health.5 However, everything in social science was about to change. While empirical sociology never disappeared, a rift opened between the disciplines that was only just beginning to close again in 2010.

New Social Theories and the Reaction of British Psychiatry

In Chapter 20, Burns and Hall refer to four books published in 1960/1 (Foucault’s Madness and Civilisation; Laing’s The Divided Self; Szasz’s The Myth of Mental Illness and Goffman’s Asylums) which were collectively the founding texts of so-called anti-psychiatry (a term rejected by the authors and later contested within social theory as serving to dismiss and marginalise valid critiques of psychiatry). They set out many of the themes that dominated social theory about mental health over the subsequent decades.

The new social theories had diverse origins and many variations developed. Those of the left came to be lumped together under the umbrella of ‘postmodernism’ (another label that was not wholeheartedly embraced by all of those it was applied to). The key theoretical positions about mental health were social functionalism, social constructionism and social labelling theory. Postmodernism tended to be concerned with the way that power is exercised and with privilege sustained through social and cultural institutions, language and ownership of knowledge. Many ideas were developed within a framework of neo-Marxism (in particular, the work of Gramsci), but psychoanalytic ideas as applied to social interaction were also important. Few post–Second World War social theories ignored psychiatry, because many social scientists came to understand it as a key way in which society managed ‘deviance’ (in other words, the breaking of social rules).

Erving Goffman stood alone as a critic of mental health services who was well received by a significant proportion of psychiatrists. According to Goffman, mental hospitals were ‘total institutions’ where every aspect of life, activity and human interaction served to maintain control and subjugation of the patients. Far from being therapeutic, they were intrinsically oppressive and marginalising. This characterisation distressed some psychiatrists, who saw themselves as benign and caring, but an influential minority felt that Goffman had described something that concerned them too and that his ideas were helpful to programmes of deinstitutionalisation that they were leading.

Goffman’s next project was on ‘the spoiled identity’, or stigma. According to social labelling theory, the words used by psychiatry and society to describe mental illness, and the people so diagnosed, have a profound impact on both social attitudes towards them and their sense of self. New concepts of stigma had an extensive and enduring impact, leading to successive campaigns for the use of less negative forms of language for mental illnesses and the people diagnosed with them (sometimes disparagingly labelled by opponents as ‘political correctness’). The importance of stigma, and of reducing it, influenced mainstream psychiatry to the point where, in 1998, the Royal College of Psychiatrists mounted a five-year anti-stigma campaign.

Other new social theories proved more difficult for psychiatry to accept. Postmodernism held that mental illness had no existence independent of psychiatrists. It was seen as a social construct, which had developed to maintain order in the new urbanised society of the Industrial Revolution, justifying the sequestration of disruptive people in mental hospitals. This process was labelled ‘the Great Confinement’ by Foucault. British psychiatry rejected this as a denial of scientific facts and of human suffering and by 2010 had not reconciled itself to the idea.

The belief that the things that people diagnosed with schizophrenia said were intrinsically bizarre and non-understandable was a key element in Karl Jaspers’s phenomenological approach to psychopathology, a cornerstone of British descriptive psychopathology. Laing and others strongly challenged this, insisting that the things people with psychosis said were intelligible if you took the trouble to understand the social and family context they existed within. Jaspers’s influence weakened from the 1990s, mainly because of the application of cognitive behavioural ideas to the psychopathology of psychosis. Almost without acknowledgement, some of Laing’s early ideas eventually gained acceptance (see also Chapter 20).

According to postmodernism, psychiatric practice, diagnosis and treatment could not be separated from the oppressive values of those who controlled society, especially sexism, racism and homophobia. Activists pointed out that women, people of black and other minority ethnic heritage, and gay people were more likely to receive a psychiatric diagnosis than male, white and heterosexual people. Professional ideologies were seen as intrinsically sexist, racist and homophobic. Diagnoses such as ‘hysterical personality disorder’ were condemned as sexist caricatures. Aversion therapies to change sexual orientation and the attribution of high rates of psychosis among black people to an intrinsic racial characteristic (rather than to social adversities, such as racism) were seen as value-laden and oppressive. From the mid-1990s, these ideas did begin to exert an influence on the way that psychiatry thought about itself, particularly as some urban CMHTs (see the section ‘Developments in Social Thinking in Psychiatry’) developed links with the communities they served. There was also increasing evidence from psychiatric research that these critiques were valid.

To the four key books of 1960/1 identified by Burns and Hall (see Chapter 20) can be added Thomas Kuhn’s The Structure of Scientific Revolutions,6 which was published in 1962. This highly influential book had nothing specific to say about psychiatry but it had implications for the certainty with which psychiatry defended its positivistic roots. Kuhn’s central thesis was that science does not progress smoothly following immutable and irreducible principles. Instead, ‘normal science’ operates within a constructed meta-model or paradigm. Over time, conflicting evidence accumulates that cannot be reconciled within the paradigm and eventually there is a paradigm shift, whereby all previous assumptions and ways of thinking about scientific problems are revised or dismissed, with the formation of an entirely new paradigm of greater explanatory power. ‘Normal science’ then proceeds within the new paradigm until it, in turn, is replaced. The implication for psychiatry was that its methods and models of science (and, by extrapolation, the profession’s status) were neither timeless nor self-evident. Kuhn lent support to the postmodernist concept that psychiatry found most unpalatable: the suggestion that the objectivity of psychiatric science was illusory and rested upon a medical model that was a poor fit for psychological distress. The medical model could only, it was suggested, reflect psychiatrists’ perception of ‘truth’. Psychiatrists’ understanding of mental disorder had no intrinsic claim to greater legitimacy than their patients’ or anybody else’s. By 2010, this concept was still fiercely resisted by psychiatry, despite a growing acceptance, at least theoretically, that there was value in social science qualitative research techniques that captured lived experience.

As postmodernism became increasingly influential, the gap between sociology and psychiatry widened. The scope of critiques of the medical model became greater, particularly after the publication of Ivan Illich’s Medical Nemesis: Limits to Medicine in 1976.7 The book opened with the statement ‘The medical establishment has become a major threat to health’ and went on to suggest that this involved three different types of iatrogenesis: clinical, social and cultural. The first referred to direct adverse effects of treatment, the second and third to a wider impact that undermines people’s ability to manage their own health.

In the later period, social theorists were especially influenced by Foucault. Pierre Bourdieu, widely considered the most influential social theorist of his time, built on Foucault’s ideas to develop concepts about social and cultural capital that were relevant to the understanding of mental disorders.8 Unlike Foucault, Bourdieu regarded empirical evidence as important. Nonetheless, his ideas only influenced a small minority of social psychiatrists. Later still, Nikolas Rose developed Foucault’s concept of governmentality to explore the impact of the ‘psy disciplines’ beyond people diagnosed with mental disorder.9 He suggested that these disciplines (or industries) had had a profound role in forming general ideas about self, autonomy, control and authority for the entire population. Through the whole of our period of interest, UK psychiatry reacted negatively to postmodern critiques. Eventually, the relationship between sociology and psychiatry was distant, if not actively hostile.

The social theories of the new left challenged all the institutions of liberal democracy, but they were not the only intellectual movements that did so. Szasz, for example, was a right-wing libertarian who objected to the restriction of individual liberty by the state. To Szasz, compulsion had no role in helping people who were emotionally distressed. Indeed, the state itself had no legitimate role. The only legitimate relationship between psychiatrist and patient was an individual commercial transaction, freely entered into by both parties. Similarly, in economics, a challenge to the institutions of liberal democracy was forming on the right from neoliberals influenced by political economists such as Friedrich Hayek and Milton Friedman.10 In its purest form, neoliberalism came to see post–Second World War social welfare provision as a structural impediment to the workings of a free market which, if unfettered, would resolve social problems through perfectly expressed individual self-interest. These free market libertarian concepts became the economic orthodoxy of the second half of our period. They had a vicarious impact on psychiatry through the progressive marketisation of British health care following the NHS reforms of 1990 (see also Chapter 12).

Developments in Social Thinking in Psychiatry

While there was tacit acceptance of elements of the new social theories in the later period, organised psychiatry mostly stood aloof and saw little reason to examine its own legitimacy. Postgraduate curricula and standard textbooks made scant reference to the new social theories. In 1976, a young Irish psychiatrist working at the Maudsley Hospital, Anthony Clare, published a book in defence of psychiatry, Psychiatry in Dissent.11 The book sought to refute anti-psychiatry and the new social theories on empirical grounds. To the profession, the exercise was satisfying and successful, but to psychiatry’s critics, Clare missed the point: since they had rejected the primacy of positivist science, a defence on that basis could not convince them. On the other hand, academic mental health nursing, which developed rapidly from the mid-1980s onwards, embraced the new theories much more readily. Over time, nurses came to dominate mental health service management. By this route, postmodernist ideas came to have an impact on psychiatry from within services but from outside of the profession.

Despite resistance to postmodernism, from 1960 to 2000 service innovation was led by social psychiatry. From 1962, Maxwell Jones applied therapeutic community principles to the entire mental health service at Dingleton Hospital in Scotland (see also Chapter 2). The result was the earliest version of the CMHTs. Twenty years later, alongside efforts to suppress the use of stigmatising language, specialist services started to emerge for women, for black and other minority ethnic groups and (mainly in response to the HIV epidemic) for gay people. These new services tended to accept that systematic disadvantage and discrimination were relevant to people’s mental health and actively acknowledged this. Psychiatrists started to actively engage with these communities and more collaborative approaches developed. As usual, these developments were piecemeal and many services remained unreconstructed. Attitudes among younger psychiatrists changed, but this was probably a consequence of shifts in values among the educated middle class in general.

A major factor that eventually influenced all of the mental health professions was the mental health service user movement (see also Chapter 13). This had roots outside of health services and universities. It developed in the wake of the other liberation movements as part of the radical ‘underground’ of the 1960s and 1970s. It was a broad movement that varied in its attachment to critiques of, and hostility towards, psychiatry. Despite marked differences, the movement had some generally agreed-upon objectives: that service users should have choices in, and agency over, their treatment; that they should be involved in planning services and in developing research; that mental health assessment should take into account their full circumstances; that talking therapies should be as available as medication; that mental disorder should not be regarded as lifelong; that the aim of treatment should be recovery, defined by the patient; and that services should avoid stigmatising their users. By 2010, few service users felt that these objectives had been achieved, but they were accepted as legitimate and desirable by most mental health service managers and psychiatrists. From the 1990s, psychiatrists were pressed by governmental policies such as the Care Programme Approach and National Service Frameworks to conform to some of the service user movement’s demands. In the later period, this led to major changes in the way that psychiatry was practised in the UK. For example, in 2010, many services claimed to follow ‘the Recovery Model’, albeit amid some controversy over ownership of ‘recovery’.

‘The Decade of the Brain’

Prompted by industry lobbying in the wake of the huge success of a new antidepressant, Prozac (fluoxetine), US president George H. W. Bush declared the 1990s to be ‘the Decade of the Brain’. This had international ramifications and academic social psychiatry went into sharp decline. There was a massive biomedical research and development effort and new medications appeared that were claimed to be more effective, with fewer side effects, than the older ones. Advances in molecular genetics and brain imaging technologies created an expectation that the limitations of psychiatric treatment would be overcome by reference to ‘fundamental’ brain processes. New diagnostic categories appeared, generating suspicions that new markets were being created. There was less money available for social research in mental health and most of it was directed at trials of complex manualised community interventions such as ‘assertive outreach’.

Optimism about biological advances in ‘the Decade of the Brain’ proved ill-founded. The new drugs proved no more effective than, and just as problematic as, the old ones. Molecular genetics and new imaging techniques generated much new knowledge, but by 2000 there was no sign of any implementable technologies that might revolutionise psychiatric treatment. In fact, psychiatry had unwittingly confirmed some aspects of postmodernist critiques. It stood accused of having a deep and corrupt relationship with the pharmaceutical industry. Although organised psychiatry worked hard from the late 1990s onwards to distance itself from the industry, it was too late to undo the reputational damage. Intense attention to ‘fundamental’ biological processes had proven as fruitless as postmodernism had predicted.

Postmodernism did influence psychiatry in other parts of the world. For example, in Italy, neo-Marxist and Foucauldian theories underpinned Franco Basaglia’s Psichiatria Democratica movement. Basaglia was a professor of psychiatry in Trieste. He was influenced by visiting Dingleton in the 1960s, and in 1978 his movement was successful in getting Law 180/78 enacted throughout Italy, banning new admissions to mental hospitals and introducing a system of community care. The UK saw no corresponding positive response to new social theories until, in 2001, Bracken and Thomas heralded the development of a postmodern psychiatry (which they labelled ‘post-psychiatry’) with an article in the British Medical Journal (BMJ).12 Bracken and Thomas were part of the broader Critical Psychiatry Network, a group of radical psychiatrists. Unlike the medically qualified anti-psychiatrists of the 1960s, they insisted that they remained within the umbrella of the psychiatric mainstream, but the impact of their various conceptual threads varied. Their concerns over the medicalisation of life were widely shared, but post-psychiatry per se enjoyed little general support, possibly because of its use of the dense and unfamiliar language of postmodernism.

Other Social Theory Developments

From 2000, disillusionment with the claims of ‘the Decade of the Brain’ set in and social psychiatry gradually revived. Interest started to grow in the social determinants of mental health, owing to the work of empirical researchers from public health and sociology, such as Michael Marmot and Richard Wilkinson, and the emerging epidemiological evidence that childhood deprivation was related to psychosis as a causal rather than a confounding factor.13 These findings implied the possibility of preventing mental ill health through social and public health intervention. Linked to this, there was increasing interest in global mental health, whereby international socio-economic factors were seen to have a disproportionate impact on the mental health of people in low- and middle-income countries, who were the majority of humanity.

A range of other social theories were little noticed by mainstream psychiatry but had some impact on specific therapies. For example, cybernetics and systems theory were applied to systemic family therapy. This was seen to be a powerful technique but attracted little interest beyond child and adolescent psychiatry. New ways of understanding social networks developed but they were rarely adopted in psychiatric research, and similarly concepts concerning social capital had little impact on psychiatry’s understanding of inequality.

There are other examples, but the point is clear. Complex ways of understanding inequality and social context were hard to absorb into psychiatry’s medical model, despite signs of a renaissance of interest in social factors by 2010. Writing at the end of the period, we pointed out that the biopsychosocial paradigm could not accommodate the contradictions in the evidence about mental health problems and that we appeared to be awaiting a scientific paradigm shift.14

Conclusion

While postmodernism and other social theories had a limited direct impact on the way that organised psychiatry understood social context and its own role in society, an indirect impact was felt as the years passed. This was due mainly to external influences such as the service user movement and assertive nurse-led management of mental health services. Later, empirically based work gained some traction within psychiatry but, at the end of our period, notwithstanding signs of revival, social psychiatry remained significantly less influential than it had been fifty years earlier.

Key Summary Points
  • This chapter describes the development of social concepts within psychiatry and mental health services between 1960 and 2010. This occurred against the backdrop of the emergence of new social theories concerned with psychiatry, medicine, science and other institutions of liberal democracy from the very beginning of the period.

  • Attacks on the legitimacy of psychiatry came from postmodernists on the left and neoliberals on the right and coincided with a distancing between psychiatry and sociology.

  • Organised psychiatry reacted defensively to most, but not all, of its critics and had difficulty assimilating even those new social theories that appeared neutral regarding the professional and scientific status of psychiatrists.

  • From the 1990s, mental health nursing became dominant in a newly empowered NHS management, and the service user movement was successful in its campaigns to have key demands included in national and local government policy. These external influences forced change upon psychiatry.

  • In the last decade of the period, empirical evidence regarding social determinants of mental health, together with the failure of biomedical technology to deliver on promises of better treatments, led to the beginnings of a revival of interest in social factors within academic psychiatry.

Chapter 5 A Sociological Perspective on Psychiatric Epidemiology in Britain

David Pilgrim and Anne Rogers
Introduction

The relationship between psychiatry and sociology has been both ‘troubled’ and collaborative.1 Social psychiatry represents a reconciliation between positivist labelling in the profession (diagnoses) and social causation, as an alternative to bio-determinism. Either side of the Second World War, social psychiatry had emerged in the United States, influenced by Adolf Meyer’s work, the ecological wing of the Chicago School of sociology and, subsequently, the development of the biopsychosocial model by George Engel.

In Britain, during the twentieth century ‘environmentalism’ or ‘the social’ was evident in the treatment of shellshock and in the development of therapeutic communities, as well as in the emergence of attachment theory and social epidemiology.2 The last of these offered us a ‘bio-social’ model of common mental disorders, deleting the ‘psychological’ from the biopsychosocial model.3 Before looking at the ways in which this legacy influenced psychiatric research, we consider the closely related British sociological work of George Brown and his colleagues.

The Social Origins of Depression

Originally Tirril Harris, George Brown and colleagues aspired to examine the link between social class and depression. The title The Social Origins of Depression signalled that intention.4 However, on grounds of methodological pragmatism it became celebrated mainly as a study of female depression; women were more available to participate and so only they were interviewed. Accordingly, the subtitle of the book became A Study of Psychiatric Disorder in Women. Brown and Harris considered that their work demonstrated the exposure of working-class women in ‘urban industrialised communities’ to depression-inducing stressors, which reflected social inequalities and a ‘major social injustice’.

What the study brought to light were the biopsychosocial dimensions expected within the Meyerian legacy: the team developed a model of depression that included three groups of aetiological factors, referring to biographical vulnerability, provoking agents and symptom-formation factors. Subsequently, the model was elaborated to include more on childhood adversity and social defeat (negative ‘life events’) and a breakdown in the interpersonal bonds that sustain a confident sense of self. Brown and colleagues concluded that the probability of depression increases not necessarily with loss or threatened loss per se but with the coexistence of humiliation and entrapment and the meanings that these carry for incipient patients.5

The research sparked an interest in adverse childhood events, which we now know increase the possibility of mental health problems in general and are not diagnosis-specific. Around a third of the women studied by the research team had experienced neglect or physical or sexual abuse during childhood. This subgroup had twice the chance of becoming depressed in one year, compared to those without such adverse antecedents,6 which also predicted anxiety symptoms. Bifulco and colleagues’ work elaborated the original model to mark a convergence with attachment theory,7 setting the scene for a ‘trauma-informed’ approach to mental health work.

The research of Harris, Brown and colleagues was augmented in Camberwell by psychiatrists, but this work focused on the follow-up of discharged hospital patients with a diagnosis of unipolar depression.8 This time, men were included in the sample but were still in a minority. This team concluded that gender, not class, predicted depression and that life events were important. This project was different from the community surveys of the Harris and Brown group, as its focus was on a particular clinical population and their families, with a view to disaggregating the salience of nature and nurture.

This research group embodied equivocation within British psychiatry about ‘the social’, as the search for the salient role of biogenetics continued as part of the work. This can be contrasted, for example, with the work of Brown and colleagues or that of Goldberg and Huxley, who were more interested in how social factors, as immediate stressors, may impact on somatic systems.9 The sampling and theoretical differences in these studies raised questions about the sociological competence of psychiatrists. As will become clear, their engagement with the ‘sociological imagination’ was sparse but not always absent.10 For reasons of space, our summary of the topic is guided by examples from British contributions to the main international social psychiatric journal, Social Psychiatry and Psychiatric Epidemiology. This brings us to consider how ‘the social’ was framed and represented in papers from British psychiatry.

British Articles in Social Psychiatry and Psychiatric Epidemiology

When we looked at the articles from the UK in the journal Social Psychiatry and Psychiatric Epidemiology (which began life in 1966 as Social Psychiatry), we traced three overlapping framings of ‘the social’, which for heuristic convenience we have called ‘the micro’, ‘the meso’ and ‘the macro’. We provide illustrative examples of this tracing exercise in what follows.

The Micro Version of ‘the Social’: The Biological Undertow and Clinical Focus

Adelstein and colleagues reported an epidemiological study from Salford, which described the clinical characteristics of newly diagnosed patients.11 A series of correlates was reported for sex, class, age and marital status. The authors opted to compare ‘functional’ mental disorders to Huntington’s chorea, noting that the ‘resemblance is great, and these data therefore provoke serious consideration of the existence of genetic susceptibility to schizophrenia and psychoneurosis, although there are good reasons for seeking multiple causes including social causes’.

Social psychiatric research in that period remained wedded to the hospital as an institution, reflected in its reliance on service-based data (rather than community, non-clinical samples) to map out mental disorders in society. Cooper argued for the ‘need to look outside hospital’ to primary care.12 He identified that women were over-represented in the latter but not the former, yet no sociological hypothesis was offered to explain this (e.g. about risky male conduct requiring control). The author did note, however, that more research was required on ‘the general population’.

Later, this focus on individual characteristics in clinical samples began to define ‘the social’ as an outcrop of patient pathology or deficit. For example, Platt and colleagues reported on the development of the ‘Social Behaviour Assessment Schedule’ to measure ‘the patient’s disturbed behaviour, the patient’s social performance, and the adverse effects on the household’.13 Another example of making ‘the social’ a patient characteristic was the investigation of the problematic conduct of those abusing alcohol in Scotland.14

The Meso Level of ‘the Social’: The Focus on Immediate Relationships

The reporting of social disability as a product of patient pathology continued but over time was augmented by an interest in family life. As we noted earlier, one motive for this was to test the nature/nurture question. The role of family life in relapse, but not causation (eschewing a possible implication of British ‘anti-psychiatry’), came to the fore in the study of ‘expressed emotion’.15 Speculating more generally about intimacy, Birtchnell and Kennard posed the question: does marital maladjustment lead to mental illness?16

The interpersonal field was reflected in the emerging epidemiological picture of the overrepresentation of black and ethnic minority patients (mainly African-Caribbean). Family and cultural features of these groups became a focus of interest, along with consideration of stressors linked to migration and community; not just clinical samples were investigated.17

Up until the mid-1990s, the social ‘deficits’ assumed to flow from mental illness continued to be of interest,18 as did exploration of the racialised picture of admissions, with a particular interest in why rates of psychosis were higher in second-generation African-Caribbean people.19 The biogenetic assumption from the 1960s also persevered in studies looking at social factors.20

With that continued assumption, premorbid deficits implied an endogenous source of current social disability. An example of the continuing biological undertow was the suggestion that mental illness in immigrants might be caused by a virus encountered in the host country. This preference for a putative biological cause is noteworthy given the clear prevalence of both poverty and racism in society.

An alternative exploration of significant others was offered by Morley and colleagues,21 who suggested that the relatives of patients are wary of services and that this shapes admission decisions for African-Caribbean patients. Research on how ‘high expressed emotion’ in the families of psychotic patients contributed to relapse also continued; the focus on relapse thereby allowed the biogenetic assumption to be retained.22

There was a continuing focus on clinical routines. For example, Tunnicliffe and colleagues were interested in tracing how ethnic background predicted levels of compliance with depot medication.23 ‘The social’ was thus a profession-centred resource in the research, not a window into sociological understanding. The depot question could, for instance, have been discussed in terms of psychiatrists acting as agents of social control in open settings, anticipating the controversy about Community Treatment Orders (see also Chapter 8).

The discursive picture in this period of social psychiatric research is one in which patient characteristics (i.e. symptoms and deficits, with embedded biogenetic assumptions) and service utilisation processes define what is meant by ‘the social’. The latter might be extended to the family context, as a source of relapse and a possible gene pool of aetiology. The Camberwell Depression study mentioned in the section ‘The Social Origins of Depression’ was of this type.

The Macro Level of ‘the Social’: A Shift Back to Social Causation

The insights of the ecological wing of the Chicago School of sociology were seemingly neither recognised nor acknowledged in the abovementioned published research. However, a fuller account of social causation gradually emerged, along with more methodological sophistication and some willingness to exercise the sociological imagination. This dawning was evident in a paper by Rodgers looking at the social distribution of neurotic symptoms in the population.24 The role of poverty became evident and the author commented (we assume without irony) that previous studies ‘may have underestimated the importance of financial circumstances’. Similarly, Gupta showed significant statistical correlations between mental disorder, occupation and urbanicity.25 The powerful causal combination of poverty and city living was being vindicated once more.

Those signs in the 1990s of a recognition of the macro level have been more evident recently. For example, resonating in part with the sociological notion of intersectional disadvantage, the older pattern of racialised admissions was now revisited with the added question of class. Brugha and colleagues found that ‘ethnic grouping was strongly associated with: unemployment; lone parent status; lower social class; low perceived social support; poverty (indicated by lack of car ownership) and having a primary social support group of less than three close others’.26

By the turn of this century, the taken-for-granted diagnostic categories used by social psychiatrists were subject to critical questioning. The weak construct validity of schizophrenia was conceded and a broader notion of psychosis as the medical codification of madness (i.e. socially unintelligible conduct) emerged.27 This marked a recognition that social constructivist arguments might now be relevant to explore, when and if dubious diagnostic concepts were the dependent variable in epidemiology (i.e. what exactly was being studied?). This small shift should not be over-drawn, though; except for querying ‘schizophrenia’, there was no apparent rush to abandon psychiatric diagnosis in social psychiatric research.

Samele and colleagues found that occupational (but not educational) status predicted psychotic symptoms.28 In line with these newer macro-focused studies, Mallett and colleagues found that ‘unemployment and early separation from both parents distinguish African-Caribbeans diagnosed with schizophrenia from their counterparts of other ethnic groups as well as their normal peers, and imply that more attention needs to be focussed on socio-environmental variables in schizophrenia research’.29 A similar point was made by Marwaha and Johnson about the role of employment in recovery from psychosis.30 The focus on black and minority ethnic (BME) patients remained in the literature, but within it there was a return to the impact of migration,31 alongside a continuing interest in British-born BME groups.32

By the start of the present century, there had also been a return to community-based rather than clinical sampling. For example, King and colleagues examined psychotic symptoms in the BME general population and concluded that prevalence rates ‘were higher in people from ethnic minorities, but were not consistent with the much higher first contact rates for psychotic disorder reported previously, particularly in Black Caribbeans’.33 This return to the general population rather than clinical samples also applied to depressive and anxiety symptoms, with such symptoms now explicitly linked to social inequality.34

During the first decade of this century there was a continuing service-centred concern, but now explicitly in the context of deinstitutionalisation, especially where substance misuse amplified the risky conduct of psychotic patients in the community.35 Other new trends included a small shift towards well-being rather than mental disorder,36 as well as a concern with self-harm and suicide, especially in relation to middle-class groups.37

Discussion

We noted that the sociological work of Tirril Harris, George Brown and colleagues was of a different character to that led by psychiatrists, as it was explicitly about social inequality and its implied social injustice (defying the fact-value separation of the positivist tradition). During the 1960s, British social psychiatrists at first tended to discuss ‘the social’ as an attribute of patients to be measured, taking it for granted that their illness made them socially impaired and that this placed a social burden on others. This framing gave way to one in which family determinants, especially concerning relapse or service contact, took on greater salience. These micro and meso depictions of ‘the social’ always carried with them a strong presumption of bio-determinism. Eventually, there was a return to a macro-social interest; socio-economic conditions were now causally relevant and community, not clinical, samples regained their importance.

Looking at the literature of the period of interest, apart from this pattern in the framing of ‘the social’, another observation is the uneven ecology of the research itself. Most of the work derives from the Institute of Psychiatry in London, augmented by a minority of studies from Scotland and the English provinces. Accordingly, one immediately convenient locality (Camberwell) has been investigated disproportionately. It may well be that Camberwell reflected the UK-wide picture but, given the concentration of research there in the past few decades, we will never know. The dominance of the work in South London may have had the effect of marginalising the impact of attachment theory from the rival Tavistock Clinic, north of the Thames;38 and within the London-based quantitative research, the relevant correlations were derived from varying sample sizes. For example, the clinical sample in the Camberwell Depression study in the 1980s comprised 130 patients, whereas the community sample from King and colleagues in 2005 comprised 4,281 face-to-face interviews.

By the turn of this century, British social psychiatric research was still largely medically led, although the faltering integration of health and social care prompted some social worker leadership.39 We also noted some work on assessment tools that was led occasionally by psychologists (e.g. Slade and Kuipers) or sociologists (e.g. Platt). Epistemologically, medical dominance extended to the ongoing reliance on diagnostic groups as dependent variables. However, we noted a cautious shift, for some, from ‘schizophrenia’ to ‘psychosis’. This ambivalence within social psychiatry about reifying diagnoses is an old trope, traceable to Meyerian psychiatry.

At the start of the chapter, we noted that social psychiatry had provided the basis for a potential collaboration between the psychiatric profession and sociology. The material covered suggests that the latter has not had a consistent impact on the former, with the sociological imagination being in poor supply at times, though occasionally contributions from sociologists were evident.40 We mention this separation because sociologists themselves have produced their own research on mental health and illness, which does cite the psychiatric (and psychological) literature; we have summarised that work intermittently over the past thirty years.

Conclusion

British social psychiatry has tended to start with the anchor point of clinical concerns and diagnoses and their presupposed underlying biological causation, then move out tentatively to interpersonal settings and societal stressors. By contrast, sociologists are more likely to start at the other end of the telescope. Their focus is on groups of people in their social context, not on pathology, which comes into view only through wider sociological reflection. Their alternative agenda includes, inter alia, medical dominance and the social control role of psychiatry in society, eugenics, political economy, lay views (especially from service users) and metaphysical debates about social causationism and social constructivism. A life-course approach predominates, for example with considerations of childhood adversity and of mental health in old age. These topics, drawing upon wider sociological work, might have helped social psychiatrists to develop their sociological imagination.

No discipline has a monopoly of understanding about this topic, and so the interdisciplinary potential of social psychiatry, broadly conceived, remains an opportunity for all. However, for that potential to be realised, interdisciplinarity needs to be fully respected by all. The shortcomings in the sociological imagination to which we drew attention in this chapter might have been avoided had this point been recognised in British social psychiatry in the mid-1960s. Psychiatric wariness of sociology was one likely source of this shortcoming, along with the self-referential norm in the medical literature reinforcing a uni-disciplinary silo.

Key Summary Points
  • Either side of the Second World War, social psychiatry had emerged in the United States, influenced by Adolf Meyer’s work, the ecological wing of the Chicago School of sociology and, subsequently, the development of the biopsychosocial model by George Engel. The sociological imagination in British social psychiatry was sparse but not always absent.

  • Most of the UK work derives from the Institute of Psychiatry in London, augmented by a minority of studies from Scotland and the English provinces.

  • By the turn of the present century, the taken-for-granted diagnostic categories used by social psychiatrists were subject to critical questioning. The weak construct validity of schizophrenia was conceded and a broader notion of psychosis as the medical codification of madness (i.e. socially unintelligible conduct) emerged.

  • No discipline has a monopoly of understanding about this topic and so the interdisciplinary potential of social psychiatry, broadly conceived, remains an opportunity for all. However, for its potential to be realised, the principle of interdisciplinarity needs to be fully respected by all. British psychiatry has been wary of sociology and has had a tendency, like other medical specialities, to be self-referential in its published outputs, thereby producing a largely uni-disciplinary silo.

Chapter 6 Life, Change and Charisma: Memories of UK Psychiatric Hospitals in the Long 1960s

Thomas Stephenson and Claire Hilton
Introduction

Psychiatric hospitals across the UK were in various states of change during the long 1960s, influenced by legal, medical, ideological, social, psychological, financial and other factors. The Mental Health Act 1959 (MHA) created more liberal admission processes and encouraged community care. The Suicide Act 1961 decriminalised suicide and attempted suicide. New medications were available and multidisciplinary teamwork developed. Societal perspectives shifted concerning individuality, personal autonomy and human rights. In the face of a more educated, less class-deferent society, paternalistic ‘doctor-knows-best’ attitudes became less acceptable, although clinical styles which infringed on human dignity continued in many National Health Service (NHS) settings.1 New organisations, such as the Patients Association, and older ones, such as the National Association for Mental Health (today, Mind), began to campaign for patients’ health care rights (see also Chapter 14). Antiquated psychiatric hospitals were expensive to maintain and stimulated government policies to close them. Psychiatric hospital beds in England and Wales, which peaked at 148,000 in 1954, reduced to 136,000 in 1960 and to 94,000 in 1973 (see also Chapters 1 and 31).2

Decisions and actions within health care institutions, from central government down, impact on the lives of individual patients, their families, the staff and the wider community. However, too often, experiences of individuals are ignored in historical analyses. Recognising this for mental health care, oral history resources have been developed, such as the Mental Health Testimony Archive,3 comprising patients’ life stories and health service experiences. Some people wrote memoirs, such as psychiatrists William Sargant and Henry Rollin,4 and community projects such as at the former Whittingham Hospital, Preston and Fairfield Hospital, Stotfold, are recording the lives of the people associated with them. To complement existing resources of personal experiences, we convened a witness seminar on psychiatric hospitals in the 1960s. A witness seminar allows invited witnesses to present their memories and to discuss them with each other and a participating audience. Challenge and comment are akin to open and immediate peer review. Seminars are recorded, transcribed, annotated and made available as a primary historical resource.5

Witness seminars are inevitably selective and subjective and include contributors’ prejudices and personal agendas. These biases may be less evident in written archives, especially official or clinical documents, but they still exist and all sources need critical analysis. Our witness seminar aimed to be multidisciplinary, drawing on experiences from across the UK. Lifespan, health and competing commitments were factors limiting availability of witnesses. Unfortunately, we had only one patient witness and none from the civil service or black and Asian ethnic groups.

In this chapter, we present some of the seminar themes, focusing on experiences in the hospitals and some of the clinicians and patients who inspired and influenced change. Discussion jogged memories onto other topics and anecdotes, such as ethics, leadership, gender and professional training, far more than can be explored in this chapter. The transcript, witnesses’ biographies and a commentary on some of the more controversial themes are free to download.6 In this chapter, references to the seminar are indicated by transcript page numbers and the speaker’s initials (Table 6.1).

Table 6.1 Witness seminar participants quoted in this chapter

Name | Initials (used in text) | Role/context (1960s) | Affiliated hospital (1960s)
Dora Black | DB | Junior doctor | Napsbury, Hertfordshire
Bill Boyd | BB | Consultant psychiatrist and physician superintendent | Herdmanflat, East Lothian
John Bradley | JB | Consultant psychiatrist and medical director | Friern, London
Malcolm Campbell | MC | Junior doctor | Friern, London
Peter Campbell | PC | Patient | Addenbrookes, Cambridge; Royal Dundee Liff, Angus
Susanne Curran | SC | Mental welfare officer (psychiatric social worker) | Prestwich and Rossendale, Lancashire
John Hall | JH | Clinical psychologist | St Andrew’s, Norwich
John Jenkins | JJ | Student nurse | Parc Gwyllt, Bridgend
David Jolley | DJ | Medical student | Severalls, Colchester
Jennifer Lowe | JL | Occupational therapist | Littlemore, Oxford
Peter Nolan | PN | Student nurse | Tooting Bec, London
Geraldine Pratten | GP | Lived with her family in a hospital staff house | Crichton Royal, Dumfries
Angela Rouncefield | AR | Junior doctor | Sefton General; Birkenhead General; North Wales, Denbigh
Peter Tyrer | PT | Junior doctor | St Thomas’ and Maudsley, London
Harry Zeitlin | HZ | Junior doctor | Maudsley, London
Onto the Wards

In the 1960s, most psychiatric hospitals were suburban or rural. Many were ‘total institutions’, as Erving Goffman described, places of residence and work where many like-situated individuals, cut off from wider society for an appreciable period of time, led an enclosed and formally administered way of life.7 The public were often fearful of the hospitals (SC,20) and avoided them. Staff too could find them disconcerting, the ‘foreboding experience of a very long, dimly lit corridor’ (MC,25) could provoke panic (JB,50). MC (25) also remembered the ‘overall pervading … smell, a mixture of incontinence and disinfectant’ and others recalled the foul odour of the sedative paraldehyde (JB,27), while written sources mentioned smells of vomit, faeces, cats, mothballs, cooking, ozone machines, flowers, talc and freshness.8

Psychiatric wards were gender segregated, but some hospitals were ‘bringing the genders together for civilised activities’ (DJ,12). Nevertheless, PC (16) recalled: ‘men and women used to come down to mealtimes at slightly different times and sit at different tables. It was a brave person who made up a mixed table.’ Whereas ward staff were usually of the same gender as their patients, doctors generally worked across several wards and were not. Women hospital doctors were few and far between,9 and when they worked on the ‘male side’, there could be practical difficulties: ‘no ladies [toilets] in the male units: a male nurse would stand outside the gents while I went in’ (AR,47). Male and female wards also had different cultures. Since our nurse and patient witnesses (JJ; PN; PC) were all male, their descriptions might be less applicable to female wards. Male wards were more militaristic and arguably physically harsher than women’s wards, which perpetuated different sorts of detrimental practices.10

Distressing mental symptoms could preoccupy a new patient:

traumatised by my situation … extremely anxious. I believed I’d been taken away to die … I would wander around the ward. My concentration was extremely poor. I tried to watch the television but could make no sense of it. This distressed me greatly.

(PC,16)

Ward environments might add to distress:

The day room always seemed crowded with constant movement. Although there were few comfortable places to sit, it was where patients smoked, had their meals, met visitors, listened to the radio, watched television, had cups of tea, sat alone or chatted in groups. No matter how often the ashtrays were emptied they were always full.

(PN,26)

Sometimes there were ‘insufficient chairs. The only escape was to sit in the toilets smoking or just sleeping which often happened, thus blocking the use of some toilets’ (PN,69). Some hospitals attempted to improve environments but patients might resist change:

we got lovely comfortable chairs with coffee tables so that the patients could sit in nice little groups. Of course, when I came in the day after it was in action, the patients were all sitting around the wall … backs to the wall and I said, ‘What’s happening?’ And the nurses said, ‘Well, that’s what the patients wanted to do’ so we just had to go along with it.

(BB,52)

Communal living arrangements, few personal possessions and the loss of usual social roles and relationships diminished patients’ sense of individuality and promoted conformity with ward regimes.11 Rigid regimes might ease life for staff but could be unhelpful to patients. PC (16) recalled:

I found it very difficult to sleep. I remember lying in bed waiting for the right time to go and ask for chloral hydrate. Too soon, and I would be sent back to try again. Too late, and I would also be refused.

Some procedures, rules and inadequate facilities were also challenging to staff:

I entered [the ward] briskly to prevent a possible key snatch or an escape attempt. The necessary rituals of saying good morning to the charge nurse and donning a white coat completed, the day started. Getting the patients up, dressed and washed before breakfast were priorities. Only having three wash basins and three toilets [for twenty-six patients] often delayed this process and frequently led to annoyances and confrontations.

(PN,26)

Seminar participants spoke of interactions, between staff and patients, among patients and among staff. JJ (36) recalled an encounter with a senior nurse, who, because of his rank, was likely to influence staff attitudes and practices: ‘I was given a very large key and taken … to ward eight. [The nurse] only spoke a few words … I will never forget … “The key is the most important thing here and what it represents is your power over the lunatics.”’ The obsolete word ‘lunatics’ suggested outmoded attitudes towards patients, and ‘power over’ suggested a coercive culture, unlikely to foster kindness. Some staff showed ‘institutionalised indifference’ (Figure 6.1), walking past patients because ‘being busy was preferable to being engaged’ (PN,28), but there was also kindly interaction:

Assisting with [personal care] tasks, patients would sometimes confide to nurses such things as, ‘It’s my birthday today’ or ‘Today is the tenth anniversary of my leaving the army.’ I often found myself captivated by observing the good-natured gentleness with which able bodied patients helped those less able.

(PN,26)

PN’s observation resonated with the ‘kind of camaraderie’ among patients which PC (17) experienced.

Figure 6.1 Institutionalised indifference: Russell Barton’s painting, Potential Murderers? c.1960.

© Bethlem Museum of the Mind.

Medical staff as well as nurses could ignore patients’ personal dignity. PC (17) described ward rounds:

We stood by our beds with our kit laid out neatly. The consultant and his retinue went from dormitory to dormitory and he interviewed us briefly where we stood and in front of all the other patients. It was nerve wracking, as ward rounds always are, but also lacking any confidentiality.

This style of ward round was unacceptable elsewhere, such as at Crichton Royal Hospital, Dumfries.12

Many staff, distressed by practices which they witnessed or experienced, lacked the confidence or know-how to challenge them. Juniors also feared that, in a culture of deference to seniors, antagonising those in authority might jeopardise their future careers. The NHS’s first formal complaints guidance in 1966 had little effect and did not shift the defensive culture of NHS leadership.13 However, NHS leaders began to acknowledge detrimental psychiatric hospital culture and practices in 1969, after inquiries into scandalously poor standards of psychiatric care.14 Those reports echoed our witnesses’ memories (JJ,36). The Ely Hospital, Cardiff, inquiry attributed bad practice to defective leadership in an inward-looking institution rather than to inherent malice.15

Some senior staff supported their distressed juniors in the event of a death by suicide (JH,28) but others did not. PN (27) recalled a recently discharged patient being found dead on Tooting Bec Common. The only acknowledgement from staff was: ‘I was asked to take his case-notes to the medical records office to be filed away.’ Yet, for PN, ‘His death profoundly affected me. There was no funeral, but I grieved for him.’ Speaking about it fifty years later: ‘I found myself back on that ward again. There was no emotional support, the term was never mentioned … the way it was coped with was merely to ignore it’ (PN,28). DB (40) also remembered a patient she had treated:

a seventeen-year-old epileptic girl … she asked … to go out into the grounds and she’d been very stable and I didn’t see any reason why I shouldn’t give her permission and I did. But there was a railway that ran on the north border of Napsbury Hospital … And she was found dead on that railway track … nobody was sure whether she’d deliberately gone and killed herself or whether she’d had an epileptic fit … it was never really sorted out … And I always wondered if I should have given her permission to go out … I’ve remembered it for … sixty years.

Historians have tended to see patients as victims and staff as perpetrators,16 but the seminar pointed to a complex mix of experiences and emotions for both groups. Some staff had indelible painful memories and ongoing self-questioning about their own roles at the time (PT,30). Memories of unkindness haunted them, spurring them on to improve clinical practice over the course of their careers (JJ,37; PN,54).

Treatment

A daily routine, meaningful employment and social and leisure activities were all part of treatment, alongside medication and electroconvulsive therapy (ECT). New professions joined the hospitals, such as occupational therapists (OTs) and clinical psychologists, reflecting the challenges of accurate diagnosis, a broader range of treatments and a greater emphasis on discharging patients into the community. Psychologists, however, focused mainly on assessment (JH,42): ‘Full psychometrics please’ was often requested for a new inpatient, sometimes with its purpose difficult to ascertain (PC,19). JH (43) referred to routine psychometric testing as ‘test bashing’.

Employment was considered to benefit self-esteem and well-being, through creativity and ‘the involvement, fellowship, the jokes that would come from work’ (DJ,16). At Severalls Hospital, ‘the most damaged people’ would ‘turn cement’, from which others would ‘make slabs and they would sell the slabs in the town’ (DJ,12). DJ (12) also pointed out ‘what a tragedy at the moment, if you ask what people [with severe mental illness] are doing, there’s no work’ for them, referring to a report in 2013 which found UK employment rates of around 8 per cent for people with schizophrenia and stated that many more could work and wanted to do so.17

‘Industrial therapy’ (IT) fitted with the ethos of rehabilitation and discharge into the community. It was conceptualised as employing patients under clinical supervision in factory-type work appropriate to their individual needs, including providing psychological and economic satisfaction. If IT managers set up external business contracts, patients could be paid (JB,16). For patients engaged in non-IT hospital work, there were several material reward systems: cash; a ‘token economy’ based on psychological principles; and ‘rewards in kind’, which some senior psychiatrists criticised as ‘out of keeping with modern views’.18 Overcrowded wards lacking individual lockers for storing personal possessions put valued items at risk of theft and deterred hospitals from providing cash or tokens.19

Newly qualified OT JL (35) was deployed to a long-stay ward: ‘I found it hard to feel that there was any good that I could do.’ Bingo as an activity ‘for heavily sedated patients, was very badly decided upon’:

Patients sat around tables with their bingo cards in front of them and one of us shouted out the numbers. We would then run around the tables moving the numbers for them and when a card was full, we would pick up a hand and shout, ‘Come on Jan! Come on Alice!’ or whoever it was, ‘Shout Bingo! You’ve won!’ It was impossible and exhausting.

JL (35) described her OT role as a ‘cosmetic addition’. JH (42), another new hospital professional, commented: ‘I was the first junior trained psychologist they had ever had, and to be honest I am not sure they knew what to do with me!’ Hospital leaders developing multidisciplinary teams, with one eye on financial expenditure and uncertainty about what newcomers were able to do, could create impossible workloads.20

PC (19–20) described being treated with ECT, noting that it is given differently today:

another patient [said] to me, ‘Oh, you need ECT. You’ll be given ECT. They’ll do this and that to you in ECT.’ And because ECT sounds awful, you start worrying about being given ECT … I remember being in a Nightingale ward and being lined up on a bed and the ECT machine worked its way down to you … You were just in the ward and the ECT machine came down to you and you could hear it getting nearer and nearer to you.

For many seminar participants, this disturbing description was a lasting memory of the day.

Medications that could modify the symptoms and course of ‘functional’ psychiatric disorders such as schizophrenia, including chlorpromazine, reserpine, imipramine and the monoamine oxidase inhibitors, were newly available.21 DB (40) recalled their effect on her patients:

In 1957 … my senior registrar … put everybody on reserpine … it was like Oliver Sacks talked about with levodopa … Many of them, just started walking and talking and being human again after having been doped for twenty years. It was really an amazing experience.

The outcome for one of these patients gives insights into rehabilitation processes and staff–patient dynamics:

Dorothea had been on the ward for thirty years and she’d been cleaning the ward for thirty years, but she had been put on reserpine and she’d responded to it. I think she had been diagnosed originally as schizophrenic and she was really very well, and would we take her on [to clean our house]? … I just gave her the keys … and she went and cleaned my house … And eventually she decided that she was well enough to leave the hospital and she got a job as a housekeeper for a local family who knew she was from Napsbury, so therefore they paid her hardly anything. And then she was successful at that job, so she got a better job and she used to come and visit … bringing little presents for the children, she became a friend of the family.

(DB,41)

The MHA, social expectations and policy ideas, along with new medications and a greater emphasis on rehabilitation, facilitated a move towards community care. The MHA allowed, but did not mandate, community care, with the result that local authorities were reluctant to fund it. Many hospitals built links with their local communities, but this was far from uniform, even between neighbouring hospitals (SC,21). It was dependent on the priorities of the local psychiatric leadership and staff creativity to nurture the links,22 for instance by encouraging local people to volunteer to support patients with activities, such as knitting: ‘Simple things but bringing people together to do things that they never did before’ (DJ,13). At Herdmanflat Hospital, ‘the Round Table, the young version of Rotary … became very interested in the hospital, ran events, and visited’ (BB,52). DJ (12) accompanied Dr Russell Barton to schools and colleges where ‘Russell would be preaching the gospel and encouraging people not to be frightened of this place on the edge of the town’. From some hospitals, staff began to visit discharged patients at home and ‘this … blossomed into a full community nursing service’ alongside improved liaison with local social services (JB,51). BB (52) described how bringing people together was ‘quite a therapeutic thing’, improving links with local clergy and with family doctors who had ‘been a bit distant from the hospital’.

Evidence-Based Practice

Notions of clinical scientific evidence in the 1960s contrasted with those used today: PT (34) commented: ‘When someone very important made a statement or made an observation about a clinical problem, that was evidence … I suppose it was the only evidence we had to go on.’ Seminar participants recalled working with, or being taught by, ‘very important’ charismatic people, such as William Sargant (1907–88), R. D. Laing (1927–89) and Russell Barton (1923–2002). Each had firmly held views. Sargant advocated biological treatments, including prolonged sleep, insulin coma, ECT and high doses of medication, often in combination. Diametrically opposed, R. D. Laing promoted ‘anti-psychiatry’ concepts with entirely psychosocial models of, and treatment for, mental illnesses such as schizophrenia. Barton was vociferous that psychiatric hospitals must provide humane, dignified and rehabilitative treatment, but in the 1960s his methods were considered almost as radical as those of Sargant and Laing. All three stood on the fringes of conventional psychiatry, generated enthusiasm and hostility and provoked public and professional debate. Their styles of communication were as contentious as their clinical methods: both could impact on patients and colleagues, with constructive and destructive outcomes.

Barton’s humanity enthused DJ (12) and helped shape his life work. Like the nurses, junior doctors could be inspired to follow in the footsteps of their seniors or, if disturbed by their experiences, could become determined to do things better. PT (31) recalled:

No one said anything about ethics. I first got my interest in randomised controlled trials by working for William Sargant … I learnt more from him by what he did to excess than from others who taught me more correctly … But … it really concerned me … was I complicit in this in my very first psychiatric post … ? The patient that got the insulin coma did get remarkably better (in the short term at least) and his parents were so pleased they gave a cheque to William Sargant, who gave it to me and said: ‘Go and buy a couple of books.’ The first one I bought was Principles of Medical Statistics by Austin Bradford Hill. [Working in Sargant’s department] made me realise that doing things without any proper evidence was shocking.

Variable Standards

Many factors helped shape psychiatric hospital standards during the 1960s. No participants referred to poor standards in Scottish psychiatric hospitals. Scotland had similar mental health legislation to England and Wales but different policies on hospital closure, and new treatments and ideologies of deinstitutionalisation transformed care within its hospitals.23 BB (53) noted that local people staffed Herdmanflat and the Royal Edinburgh Hospital, in contrast to the situation described south of the border. Staffing by local people (whose family and friends may also have been patients) may have facilitated communication and understanding between patients, staff and community, contrasting with hospitals which employed many staff recruited from further afield, who may have faced cultural and language challenges and lacked local social networks (PN,26; JH,46).

In England and Wales, government plans to close the psychiatric hospitals discouraged interest in and expenditure on them.24 The MHA abolished the supervisory body for the psychiatric hospitals, leaving them without external inspection or oversight for most of the decade, until a new inspectorate was established in 1969.25 While historiographies of NHS psychiatric hospitals in the 1960s are overwhelmingly disparaging, oral history accounts suggest that standards were far from uniform. This is compatible with reports from independent inquiries into psychiatric hospital care, which found ‘wide contrasts’ within hospitals, with grave concerns about some wards but not others.26 SC (20) worked concurrently in a psychiatric hospital and a district general hospital and found better communication, team working and attitudes, both of the public to the psychiatric service and of the service towards the patients, in the general hospital. John Wing and George Brown identified many differences between the three psychiatric hospitals they studied between 1960 and 1968.27 They also highlighted the importance of idealistic leadership in improving standards, concurring with witnesses’ views that leadership and vision were vital, for example to help overcome the tendency of staff to resist changing established practices previously regarded as safe and appropriate (JJ,38; HZ,38).

Conclusion

The 1960s were a decade of radical societal change, ideological battles within mental health services and a complex psychiatric hospital narrative.28 New medication, the MHA, multidisciplinary teams and ideals of community care provided opportunities to improve the lives of patients but not all hospitals grasped the nettle of implementation. Some in positions of responsibility endorsed neglect and old-fashioned methods, while others inspired improvements in patient care above and beyond expectations.

With the nature of medical ‘evidence’ ill-defined, and ethical frameworks unclear, clinical creativity by individual charismatic leaders entailed risks, particularly if associated with dogmatic inflexibility. Our witnesses’ impressions, particularly about the extremes, from harshness and tragedies to humanity, were formative in their careers and persisted lifelong. Encounters with inhumane and disrespectful care and lack of autonomy for patients were unnerving for witnesses in the 1960s and for our seminar audience. It did not require an ‘expert’ to recognise the essence of ‘good’ and ‘bad’. New or junior staff seemed more able to appreciate extremes than those accustomed to the local routines. Deference to seniority and a defensive leadership failed to enable the lower ranks to offer constructive criticism. However, people and practices perceived as ‘good’ provided role models and standards to emulate, whereas ‘bad’ spurred on others to achieve more humane and evidence-based care and treatment later in their careers.

Some witness seminar participants linked past to present. Why are there still scandals of care? How do we respond to practices we regard as unethical or harmful? Materially and scientifically psychiatric care has improved, but to what extent do unconstructive underlying societal beliefs and expectations perpetuate? We can learn from the experiences of our forebears and use the themes raised in this seminar to help us reflect on our roles, obligations, practices and standards today.

Key Summary Points
  • The witness seminar revealed perspectives on the past which are not readily available in written sources.

  • The memory of a tragedy, such as the suicide of a patient, can haunt involved staff members lifelong.

  • Individual senior staff were role models who had profound effects on the course of junior clinicians’ future careers.

  • Wide contrasts in clinical standards and practices existed within and between hospitals.

  • Many aspects of psychiatric practice have improved since the 1960s but others have not. Supported employment has been lost in the era of community care, scandals of poor care recur and staff may still fear speaking up when they experience or witness substandard practices.

Chapter 7 Mental Hospitals, Social Exclusion and Public Scandals

Louise Hide
Introduction

On 10 November 1965, a letter appeared in The Times newspaper drawing readers’ attention to the ill treatment of geriatric patients in certain mental hospitals. The authors, who included members of the House of Lords, the eminent academic Brian Abel-Smith, two senior clerics and the campaigner Barbara Robb, asked readers to send in evidence that would help them to make a case for a national investigation that could give the Ministry of Health more ‘effective and humane’ control over such hospitals.1 In response, hundreds of letters poured in. Many detailed appalling conditions and practices in ‘geriatric’ wards of psychiatric and general hospitals.

Two years later, towards the end of June 1967, a number of these accounts were published in Sans Everything: A Case to Answer.2 In July, a group of student nurses convened at Whittingham Hospital near Preston in Lancashire to voice their concerns regarding inhumane treatment on some wards.3 One month later, the News of the World newspaper exposed allegations of cruel practices and callous conditions which were subsequently revealed to have taken place at Ely Hospital in Cardiff. In late 1968, the police were called to investigate ill treatment of patients by male nurses at Farleigh Hospital in Somerset. While minor local inquiries were held into the allegations published in Sans Everything, broadly discrediting them, the inquiries into practices at Ely, Farleigh and Whittingham hospitals were the first in a long run of inquiries to be conducted into NHS psychiatric and ‘mental handicap’ hospitals, as they were then called, during the late 1960s and 1970s. They revealed a horrifying web of abuses, neglect, corruption and failures of care on certain wards – by no means all – and how they were facilitated by ingrained cultures that permeated socially, professionally and geographically isolated institutions.

This chapter provides an overview of the inquiries that took place into practices and conditions in some psychiatric and mental handicap hospitals, with a focus on the former. It describes the state of institutional care for people diagnosed with mental disorders in the post-war period and outlines the course this initial run of hospital ‘scandals’ took. It then examines why, when some ward cultures and hospital management practices had remained unchanged over decades, they were raised into public and political awareness at this particular point in time, bringing about changes in long-term care, particularly for older people in England and Wales.

Mental Hospitals in the Post-war Period

Following the Second World War, plans for the social reconstruction of the country, including a major reorganisation of health and social welfare systems, began to be implemented. Most hospitals were nationalised under the newly formed NHS and responsibility for them, including the large county mental and mental handicap hospitals, was passed from local governments to Regional Hospital Boards (RHBs) which were accountable to the Ministry of Health.

After the war, admissions into mental hospitals grew. By 1954, the rambling asylums of England and Wales contained around 154,000 people,4 46 per cent of whom had been resident for more than ten years,5 often living on overcrowded and under-resourced wards. When the psychiatrist David H. Clark joined Fulbourn Hospital in Cambridge in 1953, he described his first visit to the ‘back’ wards:

my conductor had to unlock every door; within the wards patients, grey-faced, clad in shapeless, ill-fitting clothes, stood still or moved about aimlessly … [In the men’s dormitories] … chipped enamel chamber pots stood everywhere … the smell of urine was strong and there were no personal items of any kind to be seen in the cold rooms … there were no curtains on the windows … the furniture was massive, deep brown, dingy and battered.

By contrast, the Admission Villas were described as ‘sunlit, pleasantly decorated one-storey buildings with an air of brisk purpose’.6

Interest in the symbiotic relationship between the environment, the body and the mind grew during the war. The 1946 Constitution of the World Health Organization (WHO) defined ‘health’ as ‘a state of complete physical, mental and social well-being and not merely the absence of disease and infirmity’.7 Theories around the psychosocial stirred the interest of sociologists and social psychiatrists into the effects of institutional environments on those who lived and worked inside them. Hospitals, including mental hospitals, were run along ingrained militaristic lines which were imposed on both staff and patients as a form of managing and controlling large numbers of people who might be mentally unwell. A strict routine was enforced from the moment patients got up in the morning until going to bed at night, which could be as early as 5 p.m. Personal possessions were often ‘removed’ on admission, clothing was shared, staff wore uniforms and doctors were referred to as medical ‘officers’. Hierarchies were strict. Eric Pryor, who joined Claybury Hospital as a student nurse soon after the end of the war, explained how ‘there was no fraternising, either with those above or below one’s own rank … It was not etiquette for junior staff to speak to Medical Staff, Head Nurses or official visitors unless spoken to.’8

In Britain, the concept of ‘institutionalisation’ emerged into clinical discourse from the 1950s. The deputy physician superintendent at Claybury, Denis V. Martin, remarked in 1955 that terms such as ‘becoming institutionalised’ or ‘well institutionalised’ could often be found in clinical notes, meaning that a patient had ‘more or less’ surrendered to institutional life and was seen by nurses as ‘resigned’, ‘cooperative’ and not causing any trouble. The more institutionalised patients became, the more manageable and tractable their behaviour. In Martin’s view, some doctors, who might be responsible for up to 300 patients, had a ‘vested interest in maintaining the process of institutionalization’ because it gave them more time to focus on the patients they found more therapeutically interesting. Yet it was nurses who managed the day-to-day lives of patients and ‘ran’ the wards. Martin argued that their training ‘far more than the doctors’ is destructive of individuality’.9

Russell Barton, a social psychiatrist and medical superintendent of Severalls Hospital in Essex, suggested that the institution was in itself pathogenic. Patients suffered from two conditions: one for which they were initially admitted and a second caused by the stultifying environment of the hospital resulting in a condition that was ‘characterized by apathy, lack of initiative, loss of interest … [and] submissiveness’.10 Perhaps the best-known indictment of the large institutions was Asylums (1961) by the Canadian sociologist Erving Goffman. Based on his study of a large mental hospital in Washington, DC, it was an excoriating critique of psychiatric hospitals and their effects on patients. Asylums were, according to Goffman, ‘total institutions’ which he defined as places of ‘residence and work where a large number of like-situated individuals, cut off from the wider society for an appreciable period of time, together lead an enclosed, formally administered round of life’.11

Steps were taken to improve institutional life. During the war, a group of doctors, some of whom had been trained in psychotherapy, created more open and egalitarian therapeutic communities to help rehabilitate servicemen. Some of these approaches were subsequently adopted by social psychiatrists such as Clark and Martin to help patients regain a sense of independence and leave hospital. The large mental hospitals began to move away from the old custodial practices and follow guidance from the WHO which stated that the ‘life within the hospital should, as far as possible, be modelled on life within the community in which it is set’.12 Doors were unlocked and spaces were liberalised so that patients could move freely around the buildings and grounds. More therapeutic ward practices based on less rigid hierarchies and routines – nurses no longer wore uniforms and might be called by their first names – were introduced to help patients learn how to live more independently. These measures were bolstered from the mid-1950s by the introduction of new psychotropic drugs, such as chlorpromazine, which could alleviate severe psychotic symptoms. With the right medication and access to treatments such as electroconvulsive therapy (ECT) available in outpatient clinics, hospitalisation was no longer deemed necessary for many people experiencing serious mental health conditions.

New treatments, both biological and psychosocial, were matched by a strengthening political resolve to close the old asylums. In 1961, the Conservative minister of health, Enoch Powell, gave his ‘water-tower’ speech articulating the government’s intention to reduce the number of psychiatric beds by half and to close the large isolated mental hospitals. The 1962 Hospital Plan mapped out how treatment would shift to psychiatric wards in district general hospitals.13 It also signalled formally the beginning of the deinstitutionalisation process and the move towards care in the community (see also Chapter 31).

Two Standards of Care

Psychiatry’s new therapeutic approach was not primarily intended for older, long-term patients, who were believed to be suffering from incurable conditions, such as senile dementia, about which nothing could be done. When it came to distributing resources, services for younger people with acute conditions were prioritised over those for people with long-term chronic conditions, creating a two-tier system. Whittingham Hospital, the subject of a major inquiry in the early 1970s, is a case in point. Located in an isolated spot some seven miles outside Preston in Lancashire, it was a large sprawling institution established in 1873. Some of the buildings had been modernised but the majority were described by the inquiry report as the old ‘three-decker wards of 80 beds or more, with large cheerless day-rooms and grossly inadequate sanitary facilities’. In accordance with the government’s plan to close the asylums, Whittingham was to be gradually run down until it could be closed. To that end, the number of occupied beds dropped from 3,200 in 1953 to slightly more than 2,000 in 1969.14 This reduction was achieved through more ‘active’ psychiatry, as people with ‘treatable’ conditions were moved from Whittingham to acute psychiatric units. As a result, 86 per cent of the inpatients who remained at Whittingham had been in the hospital for longer than two years and a high proportion were old.15 The chairman of Whittingham’s Hospital Management Committee – who was severely criticised in the inquiry – described this group of patients as ‘the type who sit around all day just doing nothing but becoming cabbages’.16

The ‘Scandals’

The depressing environment on some of Whittingham’s long-stay wards was not uncommon. Early in 1965, Barbara Robb visited an acquaintance who had been admitted to Friern Hospital in North London. She was horrified by the dismal ward conditions and later described how older female patients in some hospitals were deprived of ‘their spectacles, dentures, hearing aids and other civilized necessities’ and left ‘to vegetate in utter loneliness and idleness’.17 Within months, Robb established the pressure group AEGIS (Aid for the Elderly in Government Institutions).18

Robb was a co-signatory of the letter to The Times mentioned in the Introduction to this chapter. In 1967, two years after its publication, she presented on behalf of AEGIS a book titled Sans Everything: A Case to Answer, which included carefully chosen accounts sent in by nurses and social workers describing heartbreaking cruelty, neglect and suffering on geriatric wards. The book caused a public outcry. The Labour minister of health, Kenneth Robinson, who had ignored Robb’s earlier requests to investigate conditions at Friern, ordered the relevant RHBs to investigate the allegations relating to the hospitals in their regions. The results of the inquiries were published in a single report.19 Many of the original allegations were strenuously refuted and described as ‘false’ or ‘incomplete and distorted’; one informant was described as a ‘highly emotional witness, prone to gross exaggeration’.20

Running simultaneously with the publication of Sans Everything was the exposure (without naming the hospital or individuals concerned) by the News of the World in August 1967 of allegations of patient mistreatment, failures of care and staff ‘pilfering’ at Ely Hospital in Cardiff, a former Poor Law institution primarily for ‘sub-normal’ or ‘severely sub-normal’ patients (see also Chapter 24). A delegation from the Ministry of Health had inspected Ely two years earlier, when members had found appalling conditions yet done nothing to address them.21 Worried about public criticism, the minister instructed the RHB to establish an inquiry into Ely, to be led by Geoffrey Howe QC.22 The way in which the inquiry was set up – the structure of the committee and the terms of reference – was similar to the methods used to investigate the Sans Everything allegations. Howe did, however, extend the investigations beyond the events themselves to expose broader structural failings, including weak medical leadership, poor standards of nursing and an inadequate Hospital Management Committee.23

On 18 July 1967, just after the publication of Sans Everything, a meeting was convened at Whittingham Hospital by the Student Nurses Association to make a number of serious allegations of inhumane, cruel and even violent treatment of vulnerable older people on certain long-stay wards. The complaints were consistently suppressed and ignored by the hospital. Two years later, a psychiatrist and a psychologist wrote directly to the Secretary of State alleging ‘ill-treatment of patients, fraud and maladministration, including suppression of complaints from student nurses’. An inquiry was launched; the allegations of ill treatment were concentrated on four long-stay wards, one of which had been run by the same Sister for forty-seven years.24

At Farleigh Hospital in Bristol, concerns about serious mistreatment of and violence towards patients emerged at the end of 1968, when the police were called to investigate brutal treatment by male nurses of men with severe mental handicaps. Following judicial proceedings, in which three of the nine nurses charged received prison sentences, an inquiry was launched into the administrative systems and conditions of the hospital in order to understand the cultural and systemic mechanisms that had facilitated abuses of care over such a long period.

Claire Hilton has rightly argued that it was the publication of Sans Everything and the tireless work of Barbara Robb, among others, that triggered widespread revelations of abuse, leading to the Ely inquiry, the first inquiry into NHS care to be published in full.25 The Whittingham Hospital Inquiry (1972) was also significant because, according to the sociologist John Martin, it ‘dissected the organization which had allowed maltreatment and pilfering to occur, and which had stubbornly resisted all attempts to change old styles of care’. A socially and professionally isolated environment had been a major factor in allowing such practices and customs to persist.26 At least ten inquiries of national significance, and many smaller ones, were conducted from the late 1960s and throughout the 1970s.27

Given that ‘care’ on long-term geriatric wards had often been so wretched for so long, and that so many knew about it, why did no one act sooner, and how were such practices allowed to continue?28 The sociologists Ian Butler and Mark Drakeford have argued that the ‘everyday tragedies’ revealed by the welfare scandals of the 1960s and 1970s are ‘where meanings and historical significance become attached to acts and events that at other times might have passed almost unobserved’.29 Widespread social change was in process, driven in part by human and civil rights campaigns, counterculture movements that challenged the prevailing establishment, including psychiatry (see Chapters 13 and 14), and service-user pressure groups such as Mind, which demanded greater rights for patients.30 The isolated and dilapidated former asylums, with their ossified professional and social cultures, were anachronisms. They had to go.

The media played a crucial part in raising public awareness, which, in turn, put pressure on the government to act. In addition to the News of the World, which had supported AEGIS and Sans Everything,31 and broken the Ely story, the Lancashire Evening Post set up a ‘press desk’ in a local pub to gather information about malpractice at Whittingham Hospital before publishing ‘A Big Probe into Allegations of Cruelty’ in February 1970.32 Television took the horrors of overcrowded wards and inhumane treatment into people’s living rooms. In 1968, World in Action broadcast Ward F13, showing the harrowing conditions on a female geriatric ward in Powick Hospital in Worcestershire, while younger patients with acute conditions were treated in well-resourced facilities at the same hospital. In this case, it was the medical superintendent himself who had invited the cameras into the hospital so that viewers could see for themselves the inequities of the two-tier system.33

Change

As one inquiry after another was held during the 1970s, a pattern of the underlying causes of such catastrophic failures of care began to emerge. These included geographical, social and professional isolation; patients with no one to advocate for them; the absence of formal complaints procedures; the privileging of task-centred nursing or other professional interests over the needs of patients; failures of leadership; poor administration and lay management; union intervention; inadequate training; and personal failings.34 Threaded through these factors were deeply ingrained values and belief systems which had been passed on for years, including ageism and the commonly held assumption that older patients were suffering from inevitable and untreatable cognitive decline, an assumption that was in many cases erroneous (see also Chapter 22).

In 1969, the Labour MP Richard Crossman, who had replaced Kenneth Robinson and become secretary of state at the newly formed Department of Health and Social Security (DHSS), took over responsibility for hospitals providing long-term care. Crossman had a strong personal and professional interest in bringing about reform and quickly established an independent inspectorate, the Hospital Advisory Service, to evaluate and regulate long-stay hospitals.35 An NHS ombudsman, with a remit to provide an effective complaints process, followed in 1972.36 Even as patient numbers fell, the government allocated more resources to long-stay hospitals. During the 1970s, medical and nursing staff levels rose in both psychiatric and mental handicap hospitals. Conditions improved and there was some diversification into sub-specialisms, including psychogeriatric medicine.37 Three important White Papers were produced by the DHSS: Better Services for the Mentally Handicapped (1971), Services for Mental Illness Related to Old Age (1972) and Better Services for the Mentally Ill (1975). The last of these framed mental illness as a social as well as a medical issue and set out plans to improve the provision of community care by expanding local authority social services and moving away from institutional care.38 This process took a decisive turn when a new Conservative government led by Margaret Thatcher came to power in 1979 and opened the provision of care to the private, voluntary and independent sectors – often in large houses that were not fit for purpose. The old asylums began to close in the 1980s and their valuable sites and buildings were sold, reducing the number of available hospital beds. When local authorities failed to provide suitable accommodation in the community, people who had left hospital faced the prospect of having nowhere to go.

Conclusion

This chapter has examined the ways in which ‘exclusion’ as a social mechanism allowed appalling conditions and practices to persist in mental hospitals over many years, leading to scandals and to inquiries into the failures of long-term NHS care. Geographical remoteness led to professional exclusion as hospitals became increasingly inward-looking and medical and nursing staff lost touch with developments in their respective fields. The inquiries were an important catalyst that helped bring about the closure of the old asylums, many of which had played a central role in segregating from society people believed to be mentally unwell or socially ‘undesirable’. The two-tier system had been active since the late nineteenth century. Embedding it into policy through deinstitutionalisation, from the early 1960s, imposed an additional layer of exclusion upon those who were left behind on the ‘back’ wards. Those who did leave the institutions could find themselves facing new forms of isolation when facilities in the community failed to materialise or to provide adequate care.

While long-term care is now provided in smaller, usually privately owned, facilities, abuse, undignified treatment and neglect continue to be seen but unseen, known but unknown. Inquiries may be expensive and repetitive. They may need to be reimagined and reframed to engage with different questions and perspectives. Whatever their shortcomings, they remain vital mechanisms for uncovering harmful practices visited on vulnerable people. They can reassure the public and, crucially, highlight where changes need to be made by individuals and professional groups, as well as by management and policymakers.

Key Summary Points
  • Between the late 1960s and early 1980s, at least ten major and many smaller inquiries were held into neglectful, abusive and violent practices in psychiatric and ‘mental handicap’ hospitals.

  • Many institutions, or certain wards inside them, had become professionally isolated and severely under-resourced. Deeply ingrained cultures of harm and neglect had evolved over years.

  • Growing interest in the effects of institutional environments on patients contributed to the post-war impetus to move care for acute conditions into the community, leaving long-stay wards more isolated than ever.

  • The exposure of harmful practices by the press and campaigners compelled politicians to order inquiries which contributed to changes in the provision of long-term care and the widespread closure of the old Victorian asylums from the 1980s.

  • While long-term care is now provided in smaller facilities, abuse, undignified treatment and neglect continue. Inquiries may need to be reimagined and reframed to engage with different questions and perspectives.
