Chapter 5 described quantum mechanics in the context of particles moving in a potential. This application of quantum mechanics led to great advances in the 1920s and 1930s in our understanding of atoms, molecules, and much else. But, starting around 1930 and increasingly since then, theoretical physicists have become aware of a deeper description of matter, in terms of fields. Just as Einstein and others had much earlier recognized that the energy and momentum of the electromagnetic field are packaged in bundles, the particles later called photons, so also there is an electron field whose energy and momentum are packaged in particles, observed as electrons, and likewise for every other sort of elementary particle. Indeed, in practice this is what we now mean by an elementary particle: it is the quantum of some field that appears as an ingredient in whatever seem to be the fundamental equations of physics at any stage in our progress.
The successful uses of atomic theory described in Chapter 1 did not settle the existence of atoms in all scientists’ minds. This was in part because of the appearance in the first half of the nineteenth century of an attractive competitor, the physical theory of thermodynamics. With thermodynamics one may derive powerful results of great generality, without ever committing oneself to the existence of atoms or molecules. But thermodynamics could not do everything. This chapter describes the advent of kinetic theory, which is based on the assumption that matter consists of very large numbers of particles, and its generalization to statistical mechanics. From these, thermodynamics could be derived and, together with the atomic hypothesis, it yielded results far more powerful than could be obtained from thermodynamics alone. Even so, it was not until the appearance of direct evidence for the graininess of matter that the existence of atoms became almost universally accepted.
The serious scientific application of the atomic theory began in the eighteenth century, with calculations of the properties of gases, which had been studied experimentally since the century before. This is the topic with which we begin this chapter. Applications to chemistry and electrolysis followed in the nineteenth century, and are considered in subsequent sections. The final section of this chapter describes how the nature of atoms began to be clarified with the discovery of the electron.
This chapter covers the early years of quantum theory, a time of guesswork, inspired by problems presented by the properties of atoms and radiation and their interaction. Later, in the 1920s, this struggle led to the systematic theory known as quantum mechanics, the subject of Chapter 5. The quantum theory started with the problem of understanding radiation in thermal equilibrium at a non-zero temperature. It was not possible to make progress in applying quantum ideas to atoms without some understanding of what atoms are. The growth of this understanding began with the discovery of radioactivity.
Atoms were at the center of physicists’ interests in the 1920s. It was largely from the effort to understand atomic properties that modern quantum mechanics emerged in this decade. In the 1930s physicists’ concerns expanded to include the nature of atomic nuclei. The constituents of the nucleus were identified, and a start was made in learning what held them together. And as everyone knows, world history was changed in subsequent decades by the military application of nuclear physics.
Our modern understanding of atoms, molecules, solids, atomic nuclei, and elementary particles is largely based on quantum mechanics. Quantum mechanics grew in the mid-1920s out of two independent developments: the matrix mechanics of Werner Heisenberg and the wave mechanics of Erwin Schrödinger. For the most part this chapter follows the path of wave mechanics, which is more convenient for all but the simplest calculations. The general principles of the wave mechanical formulation of quantum mechanics are laid out and provide a basis for the discussion of spin, identical particles, and scattering processes. The general principles are supplemented with the canonical formalism to work out the Schrödinger equation for charged particles in a general electromagnetic field. The chapter ends with the unification of the approaches of wave and matrix mechanics by Paul Dirac; the modern approach based on Hilbert space is briefly described.
This chapter covers the Special Theory of Relativity, introduced by Einstein in a pair of papers in 1905, the same year in which he postulated the quantization of radiation energy and showed how to use observations of diffusion to measure constants of microscopic physics. Special relativity revolutionized our ideas of space, time, and mass, and it gave the physicists of the twentieth century a paradigm for the incorporation of conditions of invariance into the fundamental principles of physics.
Here we discuss how the use of artificial intelligence will change the way science is done. Deep learning algorithms can now surpass the performance of human experts, a fact that has major implications for the future of our discipline. Successful applications of AI share the two ingredients deep learning needs: copious training data and a clear way to classify it. When these two conditions are met, researchers working in tandem with AI technologies can organize information and solve scientific problems with impressive efficiency. The future of science will increasingly rely on human–machine partnerships, where people and computers work together, revolutionizing the scientific process. We provide an example of what this may look like. Hoping to remedy a present-day challenge in science known as the “reproducibility crisis,” researchers used deep learning to uncover patterns in papers that signal strong and weak scientific findings. By combining the insights of machines and humans, the new AI model achieves the highest predictive accuracy.
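As a toy stand-in for the setup just described (labeled examples plus a classifier), here is a minimal sketch using a simple bag-of-words model in place of a deep network; the snippets and labels below are invented for illustration, not drawn from the study:

```python
# Toy stand-in for "training data + labels -> classifier": a tiny
# TF-IDF + logistic regression pipeline instead of a deep model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented abstract snippets; 1 = signals a strong finding, 0 = weak.
texts = ["large preregistered sample, effect replicated",
         "small sample, p = 0.049, no replication attempt",
         "independent replication with preregistered protocol",
         "post hoc subgroup analysis, marginal significance"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["preregistered replication, large sample"]))
```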
We begin by discussing the challenges of quantifying scientific impact. We introduce the h-index and explore its implications for scientists. We also detail the h-index’s strengths when compared with other metrics, and show that it bypasses many of the disadvantages posed by alternative ranking systems. We then explore the h-index’s predictive power, finding that it provides an easy but relatively accurate estimate of a person’s achievements. Despite its relative accuracy, we are aware of the h-index’s limitations, which we detail here with suggestions for possible remedies.
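For concreteness, a minimal sketch of the h-index computation, assuming one has a scientist’s citation counts in hand (the counts below are invented):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Sort counts in descending order, then keep the last rank at
    # which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation record: 7 papers with these counts.
print(h_index([42, 18, 9, 6, 5, 2, 1]))  # -> 5
```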
To describe coauthorship networks, we begin with the Erdös number, which links mathematicians to their famously prolific colleague through the papers they have collaborated on. Coauthorship networks help us capture collaborative patterns and identify important features that characterize them. We can also use them to predict how many collaborators a scientist will have in the future based on her coauthorship history. We find that collaboration networks are scale-free, following a power-law distribution. As a consequence of the Matthew effect, scientists with many collaborators are more likely to attract new ones, becoming hubs in their networks. We then explore the small-world phenomenon evidenced in coauthorship networks, sometimes referred to as “six degrees of separation.” To understand how a network’s small-worldliness impacts creativity and success, we look to teams of artists collaborating on Broadway musicals, finding that teams perform best when the network they inhabit is neither too big nor too small. We end by discussing how connected components within networks provide evidence for the “invisible college.”
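To make the scale-free claim concrete, here is a small sketch estimating a power-law exponent from a list of collaborator counts, using the standard continuous maximum-likelihood approximation; the degree list and the choice of k_min are illustrative, not from the chapter’s data:

```python
import math

def powerlaw_alpha(degrees, k_min=1):
    """Maximum-likelihood estimate of alpha in p(k) ~ k^(-alpha),
    via the continuous approximation
    alpha = 1 + n / sum(ln(k_i / k_min)) over all k_i >= k_min."""
    tail = [k for k in degrees if k >= k_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(k / k_min) for k in tail)

# Hypothetical collaborator counts: most scientists have few
# collaborators, a handful (the hubs) have very many.
degrees = [1, 1, 2, 2, 3, 5, 8, 13, 40, 120]
print(round(powerlaw_alpha(degrees, k_min=1), 2))
```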
We introduce the role that productivity plays in scientific success by describing Paul Erdös’ exceptional productivity. How does Erdös’ productivity measure up to that of other scientists? Is the exponential increase in the number of papers published due to rising productivity rates or to the growing number of scientists working in the discipline? We find that there is an increase in the productivity of individual scientists, but that this increase is due to the growth of collaborative work in science. We also quantify the significant productivity differences between disciplines and individual scientists. Why do these differences exist? To answer this question, we explore Shockley’s work on the subject, beginning with his discovery that productivity follows a lognormal distribution. We outline his hurdle model of productivity, which not only explains why the productivity distribution is fat-tailed, but also provides a helpful framework for improving individual scientific output. Finally, we outline how productivity is multiplicative but salaries are additive, a mismatch that has implications for science policy.
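The multiplicative logic of Shockley’s hurdle model can be illustrated with a short simulation; the number of hurdles and the factor distribution below are invented. Because the log of a product is a sum of independent terms, the simulated output rates come out roughly lognormal:

```python
import math
import random
import statistics

random.seed(1)

def career_output(n_hurdles=8):
    """Output rate as a product of independent hurdle factors
    (spotting a problem, solving it, writing it up, ...)."""
    rate = 1.0
    for _ in range(n_hurdles):
        rate *= random.uniform(0.5, 1.5)  # ability on each hurdle
    return rate

outputs = [career_output() for _ in range(10_000)]
log_outputs = [math.log(x) for x in outputs]

# By the central limit theorem the log-outputs look normal,
# i.e., the outputs themselves are approximately lognormal.
print(f"mean(log) = {statistics.fmean(log_outputs):.3f}, "
      f"stdev(log) = {statistics.stdev(log_outputs):.3f}")
```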
Here we address bias and causality, beginning with the bias against failure in the existing science of science research. Because the data available to us is mostly on published papers, we necessarily disregard the role that failure plays in a scientific career. This could be framed as a survivorship bias, where the “surviving” papers are those that make it to publication. This same issue can be seen as a flaw in our current definition of impact, since our use of citation counts keeps a focus on success in the discipline. We explore the drawbacks and upsides of variants on citation counts, including altmetrics like page views. We also look at possible ways to expand the science of science to include unobservable factors, as we saw in the case of the credibility revolution in economics. Using randomized controlled trials and natural experiments, the science of science could explore causality more deeply. Given the tension between certainty and generalizability, both experimental and observational insights are important to our understanding of how science works.
While there is plenty of information available about the luminaries of science, here we discuss the relative lack of information about ordinary researchers. Luckily, because of recent advances in name disambiguation, the career histories of everyday scientists can now be analyzed, changing the way we think about scientific creativity entirely. We describe how the process of shuffling a career – moving the works a scientist publishes around randomly in time – helped us discover what we call the “random impact rule,” which dictates that, when we adjust for productivity, the highest impact work in a career can occur at any time. We also see that the probability of landmark work follows a cumulative distribution, meaning that the random impact rule holds true not just for the highest impact work in a career but for other important works, too. While there is precedent for this rule in the literature – Simonton proposed the “constant probability of success” model in the 1970s – until recently we didn’t have the data on hand to test it. The random impact rule allows us to decouple age and creativity, instead linking periods of high productivity to creative breakthroughs.
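A minimal sketch of the shuffling test, on synthetic careers (all numbers invented). Because these toy careers have no real age–impact structure, the real and shuffled averages both land near 0.5; on actual careers, that same agreement between the two is the signature of the random impact rule:

```python
import random

random.seed(7)

def peak_position(citations):
    """Fractional position (0 = first paper, 1 = last) of the
    highest-impact paper in a time-ordered career."""
    best = citations.index(max(citations))
    return best / (len(citations) - 1)

# Synthetic careers: each is a time-ordered list of citation counts.
careers = [[random.expovariate(0.02) for _ in range(random.randint(10, 50))]
           for _ in range(1000)]

real = [peak_position(c) for c in careers]
shuffled = []
for c in careers:
    s = c[:]
    random.shuffle(s)  # destroy any timing information
    shuffled.append(peak_position(s))

# If timing carries no signal, the two averages agree (~0.5).
print(sum(real) / len(real), sum(shuffled) / len(shuffled))
```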
We begin by asking how far back in the literature we should go when choosing discoveries to build on. In other words, how myopic is science in the age of Google Scholar? By looking at the age distribution of citations and identifying knowledge “hot spots,” we pinpoint the unique combinations of old and relatively new knowledge that are most likely to produce new breakthroughs. In doing so, we see that the way we build on past knowledge follows clear patterns, and we explore how these patterns shape future scientific discourse. We also look at the impact that a citation’s jump–decay pattern has on the relevance of research over time, finding that all papers have an expiration date and that we can predict that date based on the jump–decay pattern.
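For illustration, a small sketch of how a citation age distribution can be tallied from (citing year, cited year) pairs; the pairs below are invented:

```python
from collections import Counter

def citation_age_distribution(citations):
    """Given (citing_year, cited_year) pairs, tally how far back
    in the literature each reference reaches."""
    ages = Counter(citing - cited for citing, cited in citations)
    total = sum(ages.values())
    return {age: n / total for age, n in sorted(ages.items())}

# Hypothetical reference list: (year of citing paper, year cited).
pairs = [(2020, 2018), (2020, 2019), (2020, 2012), (2021, 2019),
         (2021, 2020), (2021, 1998), (2022, 2021), (2022, 2019)]
for age, frac in citation_age_distribution(pairs).items():
    print(f"{age:>3} yr back: {frac:.2f}")
```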
We begin by acknowledging the sheer number of papers in the citation index to date, and then discuss the disparity in the citations those papers receive. These differences in impact among papers can be captured by a citation distribution, which can be approximated by a power-law function. We compare power-law distributions to Gaussian distributions, illustrating the distinctions between the two and what they tell us about citation patterns. We then explore the differences in average number of citations between fields, which can make cross-disciplinary comparisons complicated. Luckily, we find that citation patterns are surprisingly universal once rescaled by the average for the field a paper is published in, which allows us to identify common trends in citation and impact regardless of discipline. We end with a discussion of what citations don’t capture, given that they are frequently used as a proxy for impact. We pinpoint some potential flaws in this metric, but see citation patterns as a valuable way to gauge the collective wisdom of the scientific community.
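A minimal sketch of the rescaling idea behind that universality, with invented counts and field labels: dividing each paper’s citations by its field’s average puts disciplines on a common footing:

```python
from collections import defaultdict

def relative_citations(papers):
    """Rescale each paper's citations by its field's average,
    c -> c / <c>_field; the rescaled distributions from different
    fields then collapse onto a similar curve."""
    by_field = defaultdict(list)
    for field, c in papers:
        by_field[field].append(c)
    means = {f: sum(cs) / len(cs) for f, cs in by_field.items()}
    return [(field, c / means[field]) for field, c in papers]

# Hypothetical counts: mathematics papers are cited far less on
# average than biology papers, so raw counts are not comparable.
papers = [("math", 4), ("math", 12), ("math", 2),
          ("bio", 40), ("bio", 120), ("bio", 20)]
for field, rel in relative_citations(papers):
    print(field, round(rel, 2))
```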
Given the jump–decay citation patterns discussed in the previous chapter, are we forced to conclude that the papers we publish will be relevant for only a few years? We find that while aggregate citations follow a clear pattern, the trajectories of individual citations are remarkably variable. Yet, by analyzing individual citation histories, we are able to isolate three parameters – immediacy, longevity, and fitness – that dictate a paper’s future impact. In fact, all citation histories are governed by a single formula, a fact which speaks to the universality of dynamics that at first seemed quite variable. We end by discussing how a paper’s ultimate impact can be predicted using one factor alone: its relative fitness. We show how papers with the same fitness will acquire the same number of citations in the long run, regardless of which journals they are published in.
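A sketch of a citation-history curve of this shape, patterned on the formula popularized by Wang, Song, and Barabási (the parameter values and the constant m below are illustrative): cumulative citations follow c(t) = m(e^(λΦ(z)) - 1) with z = (ln t - μ)/σ, where λ is fitness, μ immediacy, σ longevity, and Φ the standard normal CDF. As t grows, Φ approaches 1, so papers with equal fitness approach the same total m(e^λ - 1):

```python
import math

def cumulative_citations(t, fitness, immediacy, longevity, m=30.0):
    """Cumulative citations at time t (years) under a lognormal
    aging curve: c(t) = m * (exp(fitness * Phi(z)) - 1), with
    z = (ln t - immediacy) / longevity and Phi the normal CDF."""
    z = (math.log(t) - immediacy) / longevity
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return m * (math.exp(fitness * phi) - 1.0)

# Two hypothetical papers with equal fitness but different immediacy:
# one takes off sooner, yet their long-run totals converge.
for t in (1, 5, 20, 100):
    print(t, round(cumulative_citations(t, 2.0, 1.0, 0.8), 1),
             round(cumulative_citations(t, 2.0, 2.0, 0.8), 1))
```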