In his recent election as the University of Oxford’s chancellor, William Hague, the former British Foreign Secretary, promised, among other things, to keep education at the heart of public policy, to focus on science and technology at the university and to keep it competitive with wealthier American rivals. All three challenges would be familiar to Robert Bud, for they are very much like the ones he addresses in his long-anticipated new book.
Bud has delivered an ambitious, erudite and now acutely timely biography of a concept – applied science. It is an important subject, one which, given Bud’s prominence in the history of science, will no doubt receive a great deal of scholarly attention.Footnote 1 Bud divides his detailed and complex story into three stages: the first covers the concept’s birth and its steady growth throughout the nineteenth century; the second embraces its most popular period, from the First World War through the Second World War; and the last follows the decline of applied science during the Cold War. The stages are roughly scaffolded with reference to British politics, or, more precisely, to changes in industrial and educational policies. To this end, Bud skilfully navigates a seeming flood of parliamentary papers documenting select committees, royal commissions and councils. (In one sentence, for instance, he covers eighteen subcommittees of the Committee of Civil Research.) Bud’s history, however, encompasses more than politics; he wants to explore the meaning of a category of scientific knowledge and its role in shaping modern Britain. Indeed, modernity itself, Bud imagines, might be characterized by the rise and fall, and possible re-emergence, of applied science.
The term first appeared in 1817, and its author was the Romantic poet, Kantian philosopher and fancier of neologisms (such as ‘psychoanalytical’, ‘factual’ and ‘mountaineering’) Samuel Taylor Coleridge. Coleridge introduced applied science in a proposal for a new encyclopaedia, the Encyclopaedia Metropolitana, to be based on a rational structure of knowledge rather than on an epistemologically haphazard alphabetical organization like that of the Encyclopaedia Britannica. Coleridge classified knowledge into distinct categories – pure and applied, with ‘mixed’ in between; applied sciences referred to knowledge that depended on empirical facts or experimental evidence, in contrast to pure sciences such as theology, logic and mathematics, which were abstract and based on first principles. By this scheme, pure and applied were autonomous, though not equal, kinds of knowledge.
Applied science began to gain popularity in the late 1840s. For Bud, the Great Exhibition of 1851 marks an important inflection point, the moment when Britain could display its manufacturing dominance, reflect on how it had achieved such superiority and prepare to compete against emerging rivals like Germany and the United States. For some contemporary observers, like the chemist Lyon Playfair, British industry was incapable of meeting the risks of free trade and free markets occasioned by the repeal of the Corn Laws in 1846 and the end of protectionist tariffs on agriculture. According to Bud, the perceived threat of British industrial decline was translated into a problem of education. Applied science was the prescribed remedy, at once an aspirational discourse linking a triumphant past to a future prosperity and an educational policy to be institutionalized in the great civic liberal science colleges. It was at the opening of one of these, Josiah Mason’s Science College in Birmingham in October 1880, that the biologist Thomas Henry Huxley announced his now famous regret that the phrase ‘applied science’ had ever been invented. In Huxley’s mind, applied science was nothing but the application of pure science, which was the only sort of science. Moreover, pure science was the necessary prerequisite to any future applications. For Bud, debates among historians over Huxley’s meanings, and their explicit hierarchy and dependence, miss the fact that, at the time, the terms were ‘rarely explicitly either decoded or disputed’ (p. 58). With such contextualizing, Bud sets aside any further discussion, but for some readers this historiographical swerve will be a disappointment, especially given the nuanced attention, albeit brief, that Bud gives to current scholarship in his introduction.
Readers may also be disappointed by Bud’s sidestep around those innovative industries associated with the applications of pure science. The economic and cultural prominence of telegraphs, telephones, electric lights, dyestuffs, pharmaceuticals and other such conveniences had prompted purists like Huxley, and thirteen years later the American physicist Henry Rowland, to denounce applied science. Some historians have described these chemical and electrical industries as ‘science-based’, a term Bud avoids and one that he would surely point out was not in use at the time, although ‘scientific industry’ apparently was, as Bud shows in an image of an attractive medal from 1870s Manchester (p. 9). As an analytical category, science-based industry, or, as some have argued, ‘industry-based science’, highlights just how applied science became evident, relevant and highly profitable, and thus Huxley’s and Rowland’s top-priority target. Some historians think that this wave of inventions and industries warrants the designation of a ‘second industrial revolution’. Interestingly, Bud, much later in the book, does note the usage of this term to describe the period from 1870 to 1914, but apparently he agrees with the economic historian D.C. Coleman, who in 1956 dismissed the ‘terminological impertinence of historically uninformed commentators’ (p. 227).Footnote 2 In a similar vein, readers may also wonder about Bud’s omission of patents, the many contemporary controversies and numerous high-profile court cases surrounding them, and the much-debated propriety of commercializing scientific knowledge.Footnote 3 Interestingly, again, Bud does address patents at the end of the book when he mentions that, in the 1980s, the changing nature of what could be patented rendered obsolete the distinction between private/proprietary knowledge and public/scientific knowledge. However, as Bud knows, the changing definitions of patents and intellectual property had been challenged long before Margaret Thatcher became prime minister (p. 248).
Bud does give careful attention to the profession of engineering in the nineteenth century, and in particular to the education of middle-class engineers in the liberal science colleges, where applied science provided the epistemic basis for the engineering curricula. Curiously, Bud does not consider engineering to have been a concept competing with applied science, but technology was. In this regard Britain differed from America, where the term ‘technology’ did not gain widespread use until the twentieth century, according to Eric Schatzberg, whose critical history of technology complements Bud’s book.Footnote 4 Nonetheless, nineteenth-century American educational initiatives like the Massachusetts Institute of Technology (MIT), along with the Technische Hochschulen in Germany, influenced the vocabulary of British bureaucracy, where technology became central to the training of lower-status artisans.
A change in the meaning of applied science, from education to research, characterizes Bud’s second stage. That change began well before the First World War, and it reflected the emergence of industrial research and the establishment of dedicated scientific laboratories within large corporations, like the German chemical firms BASF and Bayer, and American electrical companies such as GE and AT&T. Bud relates Britain’s response to renewed international competition through the establishment, in 1900, of the National Physical Laboratory (NPL), a prestigious institutional innovation that fitted within a laissez-faire industrial policy and an ongoing narrative about the necessity of pure science before any ‘research-based disruptive innovation’ (p. 117). Bud’s invocation of ‘disruption’ here and elsewhere in the book draws on the American business scholar Clayton Christensen, whose theory of disruptive innovation envisioned small, low-level technology start-ups in market niches with little competition, a contested idea that, in any case, does not quite fit Bud’s descriptions of government-funded scientific laboratories.Footnote 5
The First World War and the industrialization of warfare were forcing houses of change, but Bud has left it to others to explain their impact on British science and technology. Instead, he focuses on ‘classification’, the categories of science deployed in the war’s aftermath as Britain prepared to face ‘a harsher world of commercial competition’ (p. 118). Interwar Britain was committed to building an applied-science infrastructure. In 1916, the government had created the Department of Scientific and Industrial Research (DSIR), and this institutional innovation funded applied scientific research, as industrial research, both at in-house government laboratories such as the NPL and through the establishment of research boards. Bud thinks historians have unduly neglected these boards, and he relates the research programmes of two cases, coal and radio. Applied science promised in the first case to rescue an industry ‘on the way down’, and in the second to promote the growth of an industry ‘on the way up’ (p. 160). In Bud’s view, the reality of their success meant that applied science became a research category. For the British government, applied science became an important part of policy, promising solutions to national problems and the pathway to prosperity.
The interwar years, according to Bud, were characterized as the age of applied science. Beyond industrial and educational policies, applied science helped to shape public attitudes about the so-called ‘jingle jangle’ of modern civilization. Bud uses, in exemplary fashion, the online British Newspaper Archive to survey the views of contemporary commentators like H.G. Wells and Julian and Aldous Huxley on the broader culture of applied science. In the public sphere, badly or dangerously applied science was often represented by frightening weapons like poison gas, submarines and bombers; more positive evaluations featured new ‘gadgets’ (Bud eschews the term ‘technology’ in this chapter) like automobiles, plastics and radio, a wide-reaching medium whose science programming on the BBC ‘confirmed the reality of applied science’ (p. 189).
The Second World War demonstrated the awesome power of science – atomic bombs, radar, jet engines, penicillin (a subject Bud has discussed elsewhereFootnote 6) – but Bud once again passes over the war years; instead, his third stage addresses ‘questions of how post-war policymakers reshaped models of applied science and technology’ (p. 196). The now dominant United States was the major source of those models, in particular the so-called linear model propounded by Vannevar Bush in Science: The Endless Frontier (1945). In American science policy, the model was supposed to work like a conveyor belt: government funding went in at the start to pure science, now relabelled basic science, which moved through applied science and development and finished with new technologies. Bud discusses some of the historical debate about the usefulness of the linear model, and with regard to Britain he emphasizes that government drew an opposite lesson: economic competitiveness was to be achieved by increased funding not to basic science, but rather to applied science. During the Cold War, Britain built an extensive network of defence and atomic energy laboratories, and here Bud chooses the case of the Harwell laboratory (1945) and the UK Atomic Energy Authority (1954) to highlight the glamour of applied research in peaceful atomic power.Footnote 7 Accordingly, Harwell became ‘the icon of British ambitions in applied science’ (p. 221); the atomic power plants literally kept the lights on in post-war Britain.
By the 1960s, the Labour Party had begun to question whether increased funding to applied science could ever lead to economic competitiveness. Science of any sort had seemingly failed to produce prosperity. When Labour came to power in 1964, it employed a new vocabulary of technology to diagnose the country’s industrial and social problems, both of which centred on an imminent ‘second industrial revolution’, meaning automation. In founding the powerful Ministry of Technology in 1964, Labour took a much more direct, interventionist approach to industry in order to manage automation and other emerging technologies like computers and electronics. In political and popular parlance, technology replaced applied science, and Bud attributes this sudden demise to an analytical shift among economists, including Prime Minister Harold Wilson, toward a new category called innovation. Innovation policy meant more funding to the technological end of the conveyor belt and less to its scientific origin. Bud draws explicitly on the political scientist and sociologist Benoît Godin in explaining the shift from science-push to demand-pull.Footnote 8 In this reorientation, applied science ceased to be a key component of public policy; it was diminished to an instrumental input. By the 1990s, parliamentary references to applied science had dropped by over 85 per cent; its use in the public sphere had likewise ‘changed radically’. In a rather soulless turn of phrase, Bud concludes that ‘interest in supporting the brand broke down’ (p. 226). But all is not lost. In an interesting epilogue, Bud describes the expanding prominence of biomedicine in the national research enterprise of the twenty-first century and the increasing demand for translational research, from bench to bedside, which, Bud suggests, may be ‘a new category akin to applied science’ (p. 249).