1. Introduction
A common view in developmental and clinical psychology is that children have evolved in environments that were mainly safe and characterized by high levels of social, cognitive, and nutritional investment. For instance, models of toxic stress (Shonkoff et al., Reference Shonkoff, Garner, Siegel, Dobbins, Earls, Garner, McGuinn, Pascoe and Wood2012) and allostatic load (Lupien et al., Reference Lupien, Ouellet-Morin, Hupbach, Tu, Buss, Walker, Pruessner, McEwen, Cicchetti and Cohen2006; McEwen & Stellar, Reference McEwen and Stellar1993) assume that the physiological mechanisms supporting responses to stress in humans become “dysregulated” by chronic adversity, because these systems have evolved to deal with fleeting dangers, not with chronic threat (Ellis & Del Giudice, Reference Ellis and Del Giudice2014, Reference Ellis and Del Giudice2019). In contrast, the threat-deprivation model of adversity does acknowledge that chronic threat was a recurrent feature in some societies across human evolution (Humphreys & Zeanah, Reference Humphreys and Zeanah2015; McLaughlin & Sheridan, Reference McLaughlin and Sheridan2016; Sheridan & McLaughlin, Reference Sheridan and McLaughlin2014). However, this model assumes that chronic deprivation was rare, such that children do not have psychological mechanisms for effectively dealing with low levels of social, cognitive, and nutritional support.
Both models make assumptions about which experiences were, and were not, part of the expected childhood environment. The term “expected environment” (or expectable environment) has been widely used but not explicitly defined in past research. Here, we define it as the range of conditions that shaped our species’ evolved developmental mechanisms. Although sometimes discussed in discrete terms in psychology – “Is this experience part of the expected environment or not?” – we instead characterize the expected environment as a distribution of environmental conditions that a species has experienced over evolutionary time, as typically done in biology (Frankenhuis et al., Reference Frankenhuis, Nettle and McNamara2018; Frankenhuis & Walasek, Reference Frankenhuis and Walasek2020). The issue at stake is which types of fitness-relevant adversities have occurred with sufficient frequency across human evolution to have shaped the psychological mechanisms that influence development and behavior today.
Here, we challenge the assumption that the expected childhood was typically safe and supportive and argue that the prevailing views are skewed by an outsized focus on a thin slice of societies. Much of the research that informs developmental and clinical psychology is drawn from “WEIRD” populations – or those that are Western, Educated, Industrialized, Rich, and Democratic (Henrich et al., Reference Henrich, Heine and Norenzayan2010) – which benefit from high levels of safety and material resources on average (Amir & McAuliffe, Reference Amir and McAuliffe2020; Arnett, Reference Arnett2008; Barrett, Reference Barrett2020; Henrich et al., Reference Henrich, Heine and Norenzayan2010; Humphreys & Salo, Reference Humphreys and Salo2020; Nielsen et al., Reference Nielsen, Haun, Kärtner and Legare2017; Nisbett, Reference Nisbett2003; Qu et al., Reference Qu, Jorgensen and Telzer2021; Thalmayer et al., Reference Thalmayer, Toscanelli and Arnett2020). These populations also tend to be culturally similar (Muthukrishna et al., Reference Muthukrishna, Bell, Henrich, Curtin, Gedranovich, McInerney and Thue2020), though clearly there are vast differences in the resources and lived experiences of different groups within WEIRD populations as well (e.g., marginalized vs. privileged groups) (Clancy & Davis, Reference Clancy and Davis2019). In addition to WEIRD populations being a poor representation of the global population, accounting for only 12% of contemporary humans (Henrich et al., Reference Henrich, Heine and Norenzayan2010), they are also a poor representation of Homo sapiens generally, as WEIRD populations are biased toward subsistence modes and social structures that did not exist for the majority of human history. This biased sampling can lead to an inaccurate and narrow view of the expected human childhood with culturally-tethered assumptions, such as parents’ unconditional willingness and ability to provision heavily for their children.
Perspectives on the expected human childhood shape research agendas by informing hypotheses. Consider a scientist who assumes that a given negative experience (e.g., insensitive caregiving, exposure to violence) falls outside the species-typical range, the range of inputs that humans have evolved adaptations to deal with. This scientist might expect to see responses that follow from experience-dependent plasticity – that is, specific experiences resulting in gradual neurobiological changes that tend to be reversible based on later experience. Alternatively, they may anticipate dysregulation, or an inability to mount a biologically adaptive response. However, they may be less likely to expect responses that follow from experience-expectant plasticity – that is, experiences at a specific developmental stage triggering major and rapid neurobiological changes that are difficult to reverse – as those responses are thought to occur only when dealing with species-typical conditions (Frankenhuis & Nettle, Reference Frankenhuis and Nettle2020a; Gabard-Durnam & McLaughlin, Reference Gabard-Durnam and McLaughlin2019; Greenough et al., Reference Greenough, Black and Wallace1987; Nelson & Gabard-Durnam, Reference Nelson and Gabard-Durnam2020). In other words, if a scientist assumes an adverse experience falls outside the species-typical range, they may anticipate either reversible change or dysregulation. If, however, they assume it falls within this range, they may expect either experience-expectant or experience-dependent plasticityFootnote 1 (McLaughlin & Gabard-Durnam, Reference McLaughlin and Gabard-Durnam2021). In short, judgments about whether an adverse experience falls within the species-typical range have consequences for our scientific understanding of adaptation and impairment, as well as for specific research agendas. Therefore, the field needs an accurate portrait of the species-typical range to better inform our view of the expected human childhood.
In the sections that follow, we bring together evidence from history, anthropology, and primatology to argue that over evolutionary time, human infants and children have on average been exposed to higher levels of threat and (some forms of) deprivation than is typical in industrialized societies; and that because these levels were highly variable across time and space (Roser et al., Reference Roser, Ritchie and Dadonaite2019a; Stearns, Reference Stearns2006; Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013), natural selection has likely favored phenotypic plasticity, the ability to tailor development to different conditions.
Child-centeredness across societies
Before discussing adversity exposures, we note that societies vary in their degree of child-centeredness – or, the extent to which adults curate their environment to conform to the preferences of children. While caregivers in hunter-gatherer societies tend to respond quickly to children’s needs, like nursing (Konner, Reference Konner, Meehan and Crittenden2016), they tend to be lower on child-centeredness, expecting children to accommodate and adapt to a more adult-centered lifeway (Rogoff, Reference Rogoff2011). The degree of child-centeredness is generally high in many contemporary WEIRD societies, and this feature is likely to be an outlier in the distribution of societies in human history (Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015); so much so, it might be a violation of the expected human childhood. This is clearly not to say that parents don’t care for their children in non-WEIRD societies; that is patently false. In all societies, children are cared for. Also, as small-scale societies are more susceptible to harsh and unpredictable environments, and as risks (e.g., pathogens) particularly affect children because of their immature immune systems, caregiving must be attuned to children’s needs for them to have a decent chance at survival (Martin et al., Reference Martin, Ringen, Duda and Jaeggi2020; Tronick et al., Reference Tronick, Morelli and Winn1987). However, in small-scale societies, children’s preferences may play a smaller role in shaping adult behavior than they do in WEIRD societies. For instance, children may be expected to adapt to the daily schedules of adult caregivers, as opposed to the other way around, and older children may be expected to take on more responsibilities, such as contributing to the household economy by participating in food production, household chores, and childcare (Blurton Jones et al., Reference Blurton Jones, Hawkes, O’Connell, Standen and Foley1989; Crittenden et al., Reference Crittenden, Conklin-Brittain, Zes, Schoeninger and Marlowe2013; Lee & Kramer, Reference Lee and Kramer2002).
We are not arguing that behaviors such as extra attention to children’s preferences are unnatural and therefore undesirable. That would be committing the naturalistic fallacy, or inferring “ought” from “is.” Rather, we argue that discourse in developmental and clinical psychology can benefit from a greater incorporation of evidence from diverse fields when considering which types of experiences fall within the species-typical range.
We are not the first to recognize that the assumptions of many psychological theories do not generalize as widely as commonly assumed. This point has been repeatedly made in psychology journals (e.g., Keller et al., Reference Keller, Bard, Morelli, Chaudhary, Vicedo, Rosabal-Coto, Scheidecker, Murray and Gottlieb2018; Rogoff et al., Reference Rogoff, Coppens, Alcalá, Aceves-Azuara, Ruvalcaba, López and Dayton2017; Sternberg, Reference Sternberg2014), and has been the focus of work by evolutionary developmental psychologists (e.g., Barrett, Reference Barrett2020; Bjorklund & Ellis, Reference Bjorklund and Ellis2014; Geary & Berch, Reference Geary and Berch2016; House et al., Reference House, Silk, Henrich, Barrett, Scelza, Boyette, Hewlett, McElreath and Laurence2013; Legare, Reference Legare2019). Reaching out from the other side of the bridge, clinical psychologists have connected their work with that of biological anthropologists and evolutionary psychologists (e.g., Callaghan & Tottenham, Reference Callaghan and Tottenham2016; Ganz, Reference Ganz2018; Richardson et al., Reference Richardson, Blount and Hanson-Cook2019; Rifkin-Graboi et al., Reference Rifkin-Graboi, Goh, Chong, Tsotsi, Sim, Tan, Chong and Meaney2021; Tooley et al., Reference Tooley, Bassett and Mackey2021). Particularly relevant is a recent paper by Humphreys and Salo (Reference Humphreys and Salo2020), which argues that developmental and clinical psychologists need to empirically update their notions of the expected human childhood in a way that better aligns with the high and variable levels of adversity documented in the historical and cross-cultural record.
Outline
Here, we synthesize the main findings of systematic reviews, meta-analyses, and cross-cultural investigations, each of which has analyzed dimensions of adversity (e.g., infant mortality due to exposures to pathogens or violence), in humans or nonhuman primates, during a particular historical or contemporary time period. We focus on two broad dimensions of the early environment known to impact key developmental outcomes, threat and deprivation, as these dimensions are the central focus of the threat-deprivation model of adversity (Humphreys & Zeanah, Reference Humphreys and Zeanah2015; McLaughlin & Sheridan, Reference McLaughlin and Sheridan2016; Sheridan & McLaughlin, Reference Sheridan and McLaughlin2014). We define threat as experiences involving the potential for harm imposed by other agents, and deprivation as low levels of social, cognitive, and nutritional inputs, all of which should be contextualized within the larger cultural expectations and norms in which they take place (see section 6). In the harshness-unpredictability framework (Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009), threat and deprivation are the primary causes of harshness, defined as age-specific rates of morbidity and mortality. This framework defines unpredictability as stochastic variation in harshness over space and time (Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009).
Our analysis covers unpredictability in three main ways. First, we discuss the idea that high levels of climate variability in human evolution lowered the correlation between nutritional conditions early and later in life, reducing the adaptive value of using the former to developmentally adapt to the latter (Nettle et al., Reference Nettle, Frankenhuis and Rickard2013; Wells, Reference Wells2007). Second, we discuss the fact that higher infant and child survival in recent history has reduced variance in the age at death, thus increasing predictability in mortality, though not necessarily the correlation between environmental conditions early and later in life (Young et al., Reference Young, Frankenhuis and Ellis2020). Third, we discuss evidence suggesting that parent-child interactions may be less predictable, for instance due to less consistent parenting (Eltanamly et al., Reference Eltanamly, Leijten, Jak and Overbeek2021; Mesman et al., Reference Mesman, van IJzendoorn, Behrens, Carbonell, Cárcamo, Cohen-Paraira, de la Harpe, Ekmekçi, Emmen, Heidar, Kondo-Ikemura, Mels, Mooya, Murtisari, Nóblega, Ortiz, Sagi-Schwartz, Sichimba, Soares and Zreik2016), when families live in extremely harsh conditions (e.g., high pathogen loads, famine, warfare) (Quinlan, Reference Quinlan2007).
In section 2, we begin with a broad discussion of infant and child mortality across human history. In section 3, we examine the dimension of threat – acts of commission that inflict direct harm or violence – followed by a discussion of deprivation – acts of omission, such as restricting investment – in section 4.Footnote 2 In section 5, we briefly address the ways in which threat and deprivation have been associated with each other during human evolution; that is, were children who were exposed to threat also more likely to be deprived and vice versa? Finally, in section 6, we discuss major developmental and clinical implications of our two main claims: (1) that the mean level of adversity for our species was higher than developmental and clinical psychologists often assume; and (2) that variation in adversity across societies and individuals, not uniformity, was common across human history (Figure 1). We argue that in response to such variation, natural selection has likely favored phenotypic plasticity, the ability to tailor development to different conditions, including harsh and unpredictable environments. This means that a given person can be highly plastic in response to the environment.
2. Infant and child mortality across human evolution
People often think of art or music as the greatest of human achievements, but this honor really belongs to the global reduction of infant and child mortality, and associated psychosocial adversities (e.g., bereavement), in recent history (Roser et al., Reference Roser, Ritchie and Dadonaite2019a; Stearns, Reference Stearns2006; Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013). In this section, we strive to make two points: (1) that mean infant and child mortality was higher in the past; and (2) that infant and child mortality were, and continue to be, variable across societies. Though there is substantial variation between geographical regions, and in some places infant and child mortality continue to be high, children’s welfare, on average, has improved greatly in recent history.
A survey of small-scale and mainly recent historical societies suggests that prior to the advent of agriculture, more than a quarter of infants did not survive their first year of life, and nearly half did not survive to puberty (Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013; for surveys focusing on small-scale societies, see Gurven & Kaplan, Reference Gurven and Kaplan2007; Hewlett, Reference Hewlett1991; Walker et al., Reference Walker, Gurven, Hill, Migliano, Chagnon, De Souza, Djurovic, Hames, Hurtado, Kaplan, Kramer, Oliver, Valeggia and Yamauchi2006). Many others suffered morbidity – that is, disability and damage – caused by environmental hazards. To compare: infant and child mortality rates are less than 1% in WEIRD societies (Human Mortality Database, 2008; Roser et al., Reference Roser, Ritchie and Dadonaite2019a). In 2017, global infant and child mortality rates were 2.9% and 4.6%, respectively, with the highest contemporary child mortality rates in Sub-Saharan Africa – where in some countries 10% of children never reach their 5th birthday – and the lowest in Iceland, below 0.3% (Roser et al., Reference Roser, Ritchie and Dadonaite2019a). In societies where gains have been made in recent history, these have often been attributed to agriculture and economic growth that resulted in improved nutrition, housing, infrastructure, hygiene, the advent of public health, and technological and medical advances. However, while global mean rates of infant and child mortality have declined over time, there is and long has been substantial variation in mortality rates across societies (Human Mortality Database, 2008; Roser et al., Reference Roser, Ritchie and Dadonaite2019a). Thus, not only is our conception of mortality rates skewed by the affluent West, but these patterns also differ from the majority of human experience until very recently.
It would be one-sided to sketch a portrait of human history that only emphasizes adversity. Human societies are better characterized as diverse (Barrett, Reference Barrett2021; Singh & Glowacki, Reference Singh and Glowacki2021), and throughout history, many societies were highly cooperative, egalitarian, and practiced extensive alloparenting, with children learning valuable skills and knowledge in mixed-age peer groups (Kelly, Reference Kelly2013; Meehan & Crittenden, Reference Meehan and Crittenden2016; Lew-Levy et al., Reference Lew-Levy, Reckin, Lavi, Cristóbal-Azkarate and Ellis-Davies2017). Our goal is therefore to synthesize a nuanced portrait of childhood throughout human history, particularly one that can accommodate variation across cultures and ecologies (Barnard, Reference Barnard2004; Humphreys & Salo, Reference Humphreys and Salo2020; Page & French, Reference Page and French2020).
We begin this section by describing how infant and child mortality rates have historically varied, and continue to vary, by subsistence mode. Then, we discuss two facts that have promoted the evolution of childhood adaptations to stress. First, the force of natural selection declines with age. That is, the extent to which traits affect lifetime reproductive success is stronger earlier relative to later in life. Second, infants and children have been able to exert some degree of influence over their survival via their own behavior and by influencing their caregivers (e.g., evocative effects of temperament). This background sets the stage for discussing threat, deprivation, and their associations, in sections 3–5.
The demographic transition
Prior to the advent of crop and animal domestication in some regions of the world during the Neolithic Revolution – as early as 13,000 years ago – the predominant mode of subsistence for Homo sapiens was centered on hunting and gathering (Weisdorf, Reference Weisdorf2005). As Homo sapiens has existed for about 200,000 years, this means that for roughly 95% of our species’ history, children were typically born into hunter-gatherer societies (van Schaik, Reference Van Schaik2016). More generally, the genus Homo, which stretches back 2 million years to Homo habilis, also relied on foraging as the primary (but not the only) subsistence strategy, suggesting that this lifeway has deep evolutionary roots.
In hunter-gatherer societies, life expectancy tends to be lower than in contemporary, industrialized societies. This difference in life expectancy, however, is not driven much by adult mortality. For instance, in contemporary hunter–gatherer and forager-horticulturalist populations, the human mortality hazard curve is typically U-shaped, with high mortality hazards early and late in life (Gurven & Kaplan, Reference Gurven and Kaplan2007; Hill et al., Reference Hill, Hurtado and Walker2007; Walker et al., Reference Walker, Gurven, Hill, Migliano, Chagnon, De Souza, Djurovic, Hames, Hurtado, Kaplan, Kramer, Oliver, Valeggia and Yamauchi2006). The per-year survival odds for adults are high: once an individual has reached the age of 15, the modal adult age at death is approximately 72 years, with a range of 68–78 years of age (Gurven & Kaplan, Reference Gurven and Kaplan2007). Note that life expectancy at age 15 will be lower than the mode, because the distribution around the mode is not symmetrical; there are more deaths to the left of the mode than to the right (Walker et al., Reference Walker, Gurven, Hill, Migliano, Chagnon, De Souza, Djurovic, Hames, Hurtado, Kaplan, Kramer, Oliver, Valeggia and Yamauchi2006). Though estimates for prehistoric humans are more uncertain, the predicted longevity of H. habilis is 52–56 years and that of H. erectus 60–63 years (Charnov & Berrigan, Reference Charnov and Berrigan1993; Hammer & Foley, Reference Hammer and Foley1996; see also Page & French, Reference Page and French2020). Thus, the difference in life expectancy between past and present societies seems to be mainly driven by mortality in early life.
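To make the asymmetry concrete, the following minimal sketch (in Python, using made-up distribution parameters rather than empirical data) shows how a modal adult age at death near 72 years can coexist with a lower life expectancy at age 15 when more deaths fall to the left of the mode than to the right.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical adult ages at death (15-85 years), drawn from a left-skewed
# distribution; the shape parameters are illustrative assumptions, not estimates.
ages_at_death = 15 + 70 * rng.beta(a=5.0, b=1.9, size=100_000)

counts, edges = np.histogram(ages_at_death, bins=70)
approx_mode = edges[counts.argmax()] + (edges[1] - edges[0]) / 2
life_expectancy_at_15 = ages_at_death.mean()

print(f"approximate modal age at death: {approx_mode:.0f}")                       # roughly 72
print(f"mean age at death (life expectancy at 15): {life_expectancy_at_15:.0f}")  # below the mode
```

Nothing hinges on the particular distribution; the point is simply that an asymmetric spread of adult deaths pulls the mean (life expectancy) below the mode.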
Contemporary hunter-gatherer societies are characterized by high birth and high death rates. These features overlap with the central features of what is sometimes described as Stage 1 of the Demographic Transition, typically observed among hunter-gatherer or nonindustrial societies (Figure 2). The Demographic Transition Model (Thompson, Reference Thompson1929) is a descriptive model of the demographic shift from high birth and mortality rates to low birth and mortality rates in response to industrialization and accompanying changes, such as advances in technology, education, and economic development. In Stage 1, populations exhibit both high birth and death rates, leading to roughly stable or slowly increasing population sizes. In Stage 2, death rates begin to fall rapidly but birth rates remain high, leading to rapid increases in population size. In Stage 3, birth rates also begin to fall, leading to a slower increase in population size which culminates in a falling, then more stable population size in Stage 4, where both birth and death rates remain low. In Stage 5, there may be a slight increase in birth rates, leading to small increases in population size. The main point illustrated in Figure 2 is that the demographic shifts in Stages 2–5 have occurred in the last 5% of our species’ history. Thus, we can assume that the other 95% of that time was spent in environments that more closely resembled the features of Stage 1; environments which, as noted earlier, were highly variable.
We can gain additional insight by viewing demographic data through an evolutionary lens (Kaplan & Lancaster, Reference Kaplan, Lancaster, Cronk, Chagnon and Irons2000; Mace, Reference Mace2000; Sear, Reference Sear and Wright2015, Reference Sear2021). A primary engine of evolution – defined as change in the genetic composition of a population over time – is natural selection, defined as the differential reproductive success of inherited variations (Buss, Reference Buss1999). The currency of natural selection is inclusive fitness, or the number of offspring an individual produces throughout their life (lifetime reproductive success), plus the effect they have on the reproduction of relatives (indirect fitness), who are more likely to share their genes. Under pressure from selection, traits or adaptations that help an organism improve their reproductive success are favored and thus propagate in a population. If an organism dies before reproducing, their genes are less represented in the next generation and are at a disadvantage. Over time, this process results in physiological and behavioral adaptations with the capacity to effectively respond to the species-typical range of environmental inputs.
The force of selection declines with age
In line with demographic research, we distinguish between infant mortality rate, the likelihood of dying prior to age 1, and child mortality rate, the cumulative probability of dying prior to approximate sexual maturity at age 15 (Volk & Atkinson, Reference Volk and Atkinson2013). As the latter mortality rate subsumes the former, these two rates are not exclusive. Nonetheless, this distinction is useful because the causes of mortality might differ for infants and children. In the contemporary United States, infants are more likely to die from abuse and neglect than older children are. For instance, in 2019, infants younger than 1 year old died from abuse and neglect at more than 3 times the rate (22.94 per 100,000 children) of children who were 1 year old (6.87 per 100,000 children), and this difference only increases for older age groups (U.S. Department of Health & Human Services, 2021, see p. 55).
Knowledge about causes of death informs our expectations about which adaptations may have been favored by natural selection at different developmental stages. The age at maturity provides a logical cutoff, because natural selection acts differently before and after this age. More specifically, the force of selection is uniform before the age at maturity and declines exponentially after this age: steeply in early adulthood, and less steeply in old age (Caswell, Reference Caswell2007; Charlesworth, Reference Charlesworth2000; Hamilton, Reference Hamilton1966; Jones, Reference Jones2009), even if it rarely reaches zero (Pavard & Coste, Reference Pavard and Coste2021). In other words, traits affect lifetime reproductive success substantially more before the age at maturity, and less so at later ages, when organisms have used some of their reproductive potential and have less of it to spare. The significance of this fact for human evolution, which is characterized by high infant and child mortality, cannot be overstated: we should expect strong selection for childhood adaptations to potentially stressful conditions, that is, mechanisms that enable infants and children to deal with harsh and unpredictable environments as well as possible, under the constraints posed by such environments.
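One simplified way to formalize this point, following the logic of Hamilton (Reference Hamilton1966) and ignoring population growth, is to note that the strength of selection on survival at age $a$ is proportional to the reproduction still expected after that age:

$$
s(a) \propto \sum_{x > a} \ell(x)\, m(x),
$$

where $\ell(x)$ is the probability of surviving to age $x$ and $m(x)$ is expected reproduction at age $x$. Because $m(x) = 0$ before maturity, $s(a)$ takes the same maximal value at all pre-reproductive ages – a juvenile death forfeits all future reproduction – and then declines after maturity, as already-completed reproduction drops out of the sum.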
At this point, we wish to prevent four potential misunderstandings. First, the fact that responses to chronic stress may entail costs to survival and reproduction later in life (e.g., allostatic load), does not mean they are not adaptive. What matters for natural selection is whether these responses increase (or decrease) lifetime reproductive success. In many cases they will, because the force of selection is much stronger earlier than later in life. Second, the fact that adaptations for dealing with adversity exist does not mean that people living in harsh and unpredictable conditions attain the same levels of survival and reproductive success as people living in safe and supportive conditions; people are merely making the best of a difficult situation. Third, if people have evolved adaptations for dealing with adversity, this by no means implies that infants’ and children’s survival and well-being does not increase with higher levels of caregiver investment; in fact, it often does (see section 4). Children in all but the most dire circumstances depend on receiving high levels of care, even if caregiving looks very different across different societies. Fourth, as already noted, if people have evolved to “expect” certain forms and variation in levels of adversity and are able to developmentally adjust to them (within the species-typical range), this by no means implies that we should reduce efforts to eradicate adversity. Our bodies have adaptations for responding to cancer (e.g., the immune system eliminates cancer cells on a regular basis), but cancer is harmful to survival and well-being, and therefore, we should reduce carcinogens. In the same way that biologists and medical doctors acknowledge the existence of adaptations for responding to cancer, psychologists should acknowledge the existence of adaptations for responding to adversity. Such adversity has always been with us; it is no stranger.
Children’s influence on their own survival
Natural selection could only favor childhood adaptations to stress if responses to early adversity affected survival or reproduction. In this subsection, therefore, we describe some (but not all) of the adaptive responses to early adversity, including ways in which infants and children have been able to influence their own chances of survival.
The main cause of infant and child mortality during human evolution is thought to be gastrointestinal or respiratory disease (70%–80% of deaths) (Volk, Reference Volk2011; see also Lancy, Reference Lancy2015; Volk & Atkinson, Reference Volk and Atkinson2013). Disease remains the primary modern cause of infant death, especially in countries with high mortality rates (Bryce et al., Reference Bryce, Boschi-Pinto, Shibuya and Black2005; Volk & Atkinson, Reference Volk and Atkinson2013), and is more likely to co-occur with low protein and/or caloric intake (McDade, Reference McDade2003; Urlacher et al., Reference Urlacher, Ellison, Sugiyama, Pontzer, Eick, Liebert, Cepon-Robins, Gildner and Snodgrass2018). Before the demographic transition, there was much more variability in mortality rates due to the periodic effects of infectious disease (e.g., cholera, smallpox, measles), potentially favoring the evolution of phenotypic plasticity. Improved nutrition, better living conditions, and public health interventions smoothed mortality variability (Gonzaga et al., Reference Gonzaga, Queiroz and De Lima2018; Omran, Reference Omran1983; Wilmoth & Horiuchi, Reference Wilmoth and Horiuchi1999). In human history, the probability of death has decreased at younger ages and become concentrated (or compressed) at old ages. This “mortality compression” (Stallard, Reference Stallard2016) implies that a narrower range of outcomes (reaching old age) has become more likely, thus increasing predictability in mortality. However, mortality compression does not imply that the correlation between environmental conditions early and later in life has also changed. At least in principle, early adversity might predict lower life expectancy as much, more, or less in populations before compared with after the demographic transition. So, mortality compression implies that the age at death has become more predictable in one component (variability) but not necessarily in another (cue reliability) (for a discussion of components of unpredictability, see Young et al., Reference Young, Frankenhuis and Ellis2020).
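The logical independence of these two components can be illustrated with a toy simulation (a sketch using made-up numbers, not an empirical claim): compressing the spread of ages at death reduces variability while leaving the correlation between early conditions and age at death untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical early-adversity scores, weakly correlated (|r| ~ 0.3) with age at death.
early_adversity = rng.normal(size=n)
noise = rng.normal(size=n)
age_at_death = 60 - 10 * (0.3 * early_adversity + np.sqrt(1 - 0.3**2) * noise)

# "Mortality compression": deaths concentrate at older ages, so the mean rises
# and the spread shrinks (a stylized stand-in for the post-transition pattern).
compressed = 78 + 0.3 * (age_at_death - age_at_death.mean())

print(np.std(age_at_death), np.std(compressed))           # variability drops sharply
print(np.corrcoef(early_adversity, age_at_death)[0, 1],
      np.corrcoef(early_adversity, compressed)[0, 1])      # cue reliability is unchanged
```

Because correlation is unaffected by shifting and rescaling, the compressed ages at death remain exactly as predictable from early adversity as before; whether early conditions in fact predicted later conditions more or less strongly before the demographic transition is a separate, empirical question.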
Infants and children are not helpless in the face of disease threat; they have some ability to influence their exposures and responses to pathogens. They might influence their exposures, for instance, by modifying their behavior in ways that reduce risk of ingesting pathogens (e.g., reducing exploration when near likely sources of pathogens, such as rotten meat, as adults do; Curtis et al., Reference Curtis, De Barra and Aunger2011; Tybur & Lieberman, Reference Tybur and Lieberman2016; Tybur et al., Reference Tybur, Lieberman, Kurzban and DeScioli2013; though see Rottman, Reference Rottman2014). Infants and children might also influence their responses to pathogens by changing their allocation of internal energetic resources. For instance, they can allocate more energy to immune function, if exposed to high levels of pathogens, thus increasing their chances of survival in pathogen-rich environments (Blackwell et al., Reference Blackwell, Snodgrass, Madimenos and Sugiyama2010; Garcia et al., Reference Garcia, Blackwell, Trumble, Stieglitz, Kaplan and Gurven2020; McDade, Reference McDade2003; McDade et al., Reference McDade, Georgiev and Kuzawa2016; McDade et al., Reference McDade, Reyes-García, Tanner, Huanca and Leonard2008; Urlacher et al., Reference Urlacher, Ellison, Sugiyama, Pontzer, Eick, Liebert, Cepon-Robins, Gildner and Snodgrass2018).
The amount and quality of caregiving infants and children receive have a major impact on their survival (Lancy, Reference Lancy2015; Quinlan, Reference Quinlan2007; Volk & Atkinson, Reference Volk and Atkinson2013). Caregivers influence children’s risk of morbidity (i.e., age-specific rates of damage) and mortality (i.e., age-specific rates of death) in many ways. Two primary ways of influencing children’s well-being are providing nutrition and protection. Nutrition is a basic resource for life, one that young children cannot produce for themselves, and it affects physical growth as well as children’s capacity to mount successful immune responses to pathogens. Protection takes a variety of forms, including carrying, which is widespread in contemporary human societies and likely has been throughout our evolutionary history, and which in some ecologies serves to reduce exposures to pathogens and predation (Lozoff & Brittenham, Reference Lozoff and Brittenham1979; Tracer, Reference Tracer2002a, Reference Tracer2002b). For instance, among the Au and Gnau forager-horticulturalists in Papua New Guinea, mothers carry young infants a large portion of the time, in part to protect them from the pathogenic environment. As they get older, mothers carry their children less and less, gradually exposing them to antigens and pathogens in the environment, and enabling the incremental development of immunocompetence (Tracer & Wyckoff, Reference Tracer and Wyckoff2020).
Caregiver investment tends to be lower in harsh environments with high extrinsic risk, meaning morbidity and mortality caused by factors that individuals, be they infants, children, or caregivers, cannot control (Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009; Quinlan, Reference Quinlan2007). From an evolutionary perspective, extrinsic risk creates diminishing returns to parental effort (Quinlan, Reference Quinlan2007). A landmark cross-cultural study of several dozen mostly nonindustrial societies with various subsistence modes suggests that when infant and child mortality results from famine or warfare, mothers tend to invest less in their offspring (Quinlan, Reference Quinlan2007). However, the relation between pathogen risk and maternal investment is shaped like an inverted-U: maternal investment increases in environments with low to moderate levels of pathogens, and then decreases from moderate to high levels (Quinlan, Reference Quinlan2007). Quinlan (Reference Quinlan2007) speculates this might be because in environments where pathogen stress is low, infants and children need little protection; where it is high, they cannot be protected; and where it is moderate, caregiver investment pays off the most. Consistent with this pattern of higher parental investment at moderate levels of adversity, a recent meta-synthesis of qualitative studies found that during times of war, parenting practices were harsher, more hostile, less consistent, and less warm in extremely dangerous settings, and warmer and more protective when families were living under threat but not extreme danger (Eltanamly et al., Reference Eltanamly, Leijten, Jak and Overbeek2021). A meta-analysis of quantitative studies, however, found a linear pattern with small effect sizes: parents who had more exposure to war were harsher (r = .12) and less warm (r = –.02) toward their children (note: this meta-analysis coded hostility under harshness and did not measure inconsistency; Eltanamly et al., Reference Eltanamly, Leijten, Jak and Overbeek2021).
Infants’ and children’s ability to influence their mortality risk depends largely (if not primarily) on their ability to influence investment by caregivers. Empirical research shows that parental investment generally improves infant and child survival (Volk & Atkinson, Reference Volk and Atkinson2013). There are many specific ways, through appearance and behavior, in which infants and children might influence the quality and amount of investment they receive; for instance, by having neotenous (cute) features, following gaze, attending to facial expressions, and responding contingently (Hrdy, Reference Hrdy, Meehan and Crittenden2016). Functionally, such behaviors may convey information about the child’s health status.Footnote 3 Additionally, children can influence their own survival through independent foraging and caloric provisioning. Indeed, research suggests that hunter-gatherer children participate in foraging and hunting from an early age, and are able to furnish a significant number of calories by middle childhood (Blurton Jones et al., Reference Blurton Jones, Hawkes, O’Connell, Standen and Foley1989; Crittenden et al., Reference Crittenden, Conklin-Brittain, Zes, Schoeninger and Marlowe2013).
Throughout human history and in a variety of cultures, caregivers applied a form of triage, investing more in offspring judged to be more likely to survive and become productive members of the family, who would be able to pay back the investment made in them (Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015; Volk & Atkinson, Reference Volk and Atkinson2013). This follows the logic of the Banker’s Paradox: only lend money to people who need it the least, because they are the most likely to repay (Tooby & Cosmides, Reference Tooby, Cosmides, Runciman, Maynard Smith and Dunbar1996). However, whether a given caregiver follows this logic may depend on parental condition. For instance, studies in the United States suggest that whereas parents with low resources invest more in low-risk than high-risk children, parents with higher resources invest more in high-risk than low-risk children (Beaulieu & Bugental, Reference Beaulieu and Bugental2008; Bugental et al., Reference Bugental, Beaulieu and Silbert-Geiger2010). In general, across human history caregivers had multiple social roles and faced competing demands, with demands increasing in times of resource stress (e.g., famine) and chronic danger (e.g., war); parents could not always prioritize each child equally. In some cases, children were able to extract more resources from their caregivers via behaviors deemed “undesirable” in developmental and clinical psychology (e.g., “acting out”). For instance, de Vries (Reference De Vries1984, Reference De Vries and Scheper-Hughes1987b) found among the Masai that calmer babies received fewer resources than more temperamental babies, resulting in higher survival rates for more temperamental babies during a famine. In this case, it seems the squeaky wheel gets the grease.
3. Threat
In this section, we explore how threat – experiences involving the potential for harm imposed by other agents – may have shaped human development. We focus on three primary threats to children throughout human evolution: infanticide, violent conflict with noncaregivers, and predation. Infanticide is widely studied in primatology and anthropology, but receives less attention in developmental and clinical psychology, which focus on living children. We discuss infanticide for three reasons. First, infanticide appears to account for a nontrivial percentage of infant deaths in human history, so it should be included in a characterization of the expected human childhood. Second, the psychological mechanisms that infants and children use to survive and thrive in contemporary societies – for instance, by soliciting investment from caregivers who have little to spare – may have been shaped by past selection pressures created by infanticide. Third, constraints may force caregivers to limit their investment (e.g., nutrition) for some period to see whether a child is strong enough to survive. Some children who are alive today, especially in harsh and unpredictable environments, have passed this triage, but may still be experiencing the mental and physical consequences of this form of early adversity.
Infanticide
Following disease, another leading cause of infant and child mortality during human evolution may have been infanticide, the killing of infants (Budnik & Liczbińska, Reference Budnik and Liczbińska2006; Cunningham, Reference Cunningham2005; Gurven & Kaplan, Reference Gurven and Kaplan2007; Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015; Rawson, Reference Rawson2003; Volk & Atkinson, Reference Volk and Atkinson2013). Infanticide appears to account for a nontrivial percentage of infant deaths among societies in the past thousands of years (Volk, Reference Volk2011; Volk & Atkinson, Reference Volk and Atkinson2013) and among contemporary hunter-gatherers (Gurven & Kaplan, Reference Gurven and Kaplan2007), but estimates are relatively uncertain because infanticide is often a hidden behavior. Infanticide may have been carried out for a variety of reasons, such as poor maternal or infant health, unsupportive social and ecological conditions, or being born out of wedlock (Daly & Wilson, Reference Daly and Wilson1988; Hrdy, Reference Hrdy1999, Reference Hrdy2009; Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015; Volk, Reference Volk2011; Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013).
Infanticide is widespread among mammals. Phylogenetic analyses have shown that infanticide occurs in 182 (63%) of the 289 species that have been studied (Lukas & Huchard, Reference Lukas and Huchard2019). Broken down by sex of the perpetrator, infanticide by females has been documented in 89 (31%) of 289 species (Lukas & Huchard, Reference Lukas and Huchard2019), and infanticide by males in 119 (46%) of 260 species, including nearly all of the great apes (Lukas & Huchard, Reference Lukas and Huchard2014). Infanticide rates are highly variable across mammals, mainly as a function of social organization and life history (Lukas & Huchard, Reference Lukas and Huchard2019). Both of these analyses focused only on instances where individuals kill offspring that are most likely not their own, excluding instances where mothers kill their own offspring. This pattern appears to differ in humans, where the main perpetrators of infanticide may include parents or other family members, as discussed below.
The infanticide statistics provided in this section should not be misunderstood as implying that “maltreatment was common” in humans or other mammals. In fact, there is only limited evidence for chronic physical abuse by caregivers in prehistoric human skeletal material (Walker, Reference Walker2001). We may speculate that if infants and children were killed by their caregivers, this likely occurred through a punctuated violent event or prolonged deprivation, rather than through the cumulative effects of repeated physical abuse over the course of months or years. That infanticide may have accounted for a substantial percentage of infant deaths in human history, and that rates of infanticide varied across societies, should inform estimates of both the mean level of harshness (age-specific rates of morbidity and mortality) in the expected human childhood and the stochastic variation in harshness over space and time.
The anthropological record suggests that if infanticide occurs, it is mainly carried out by primary caregivers, not strangers or familiar nonrelatives; though there are exceptions, such as when infants and children whose father had died or had abandoned them were killed by their mother’s new partner (Hill & Hurtado, Reference Hill and Hurtado1996; Hill & Kaplan, Reference Hill, Kaplan, Betzig, Turke and Borgerhoff-Mulder1988). Infanticide is typically described as an emotionally painful event for caregivers, who consider it either necessary or the best choice among a set of terrible options (Chagnon, Reference Chagnon2012; Hrdy, Reference Hrdy1999; Lancy, Reference Lancy2015; Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013). Other work claims that attitudes toward infants and children shifted in the 19th century; before then, people felt less of a need to cherish infants, to offer them safety and security, and to help them develop (Mitterauer & Sieder, Reference Mitterauer and Sieder1997; Zelizer, Reference Zelizer1985). Regardless, culturally sensitive understandings recognize competing demands on mothers, which vary by setting. For instance, in conditions of nutritional scarcity, mothers may not have access to sufficient resources for growing a baby or for lactation, which entails even greater energetic costs than pregnancy (Beehner & Lu, Reference Beehner and Lu2013; Worthington-Roberts et al., Reference Worthington-Roberts, Vermeersch and Williams1985). To give an impression of these painful experiences, we provide ethnographic excerpts in this footnote.Footnote 4
Infanticide might appear at odds with evolutionary theory, but it is not (Hrdy, Reference Hrdy1999, Reference Hrdy2009). Natural selection favors strategies (e.g., genes, developmental systems) that optimize fitness; that is, which increase their own representation in future generations, relative to other strategies in the population. In evolutionary biology, individuals are viewed as instantiating strategies. Mathematical theory shows that the lifetime reproductive success of individuals is under many conditions a good measure of the fitness of a strategy (Grafen, Reference Grafen2007). Therefore, although fitness should, strictly speaking, be assigned to strategies rather than individuals, for practical purposes, individual survival and reproduction are taken as measures of fitness. For caregivers, the fitness benefits of infanticide might outweigh the costs. These benefits include diverting resources to current offspring that have greater chances of surviving, and saving resources for future offspring that are healthier, or which are born into more favorable circumstances (Daly & Wilson, Reference Daly and Wilson1988; Dickeman, Reference Dickeman1975).
Infanticide typically happened, and still happens, in the context of cultural beliefs that justify or legitimize the difficult act. For instance, in Japan, infanticide used to be rationalized by the view that the newborn’s death was not the extinction of a life but a return to the other world, potentially allowing rebirth at a more favorable future time (Kojima, Reference Kojima, Koops and Zucherman2003). We do not cover such beliefs, or variation in them across time and space. For information on these topics, we refer readers to work by David Lancy (Reference Lancy, Otto and Keller2014, Reference Lancy2015). Here, we only mention one common cultural response to high infant and child mortality rates, which is that in many societies, infants do not acquire “personhood” (i.e., humanity) until weeks, months, or even years after being born, often once their chances of survival have increased (Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015). Before then, they are often considered to be in a liminal state, between two worlds, the living and the dead (e.g., “little angels,” “little demons”), and essentially not fully human. This may be one reason why children are underrepresented in the historical record: they are not yet “socially born” and recognized (Fabian, Reference Fabian1992), or viewed as “persons” worthy of being incorporated into historical recordings, literature, burials, and censuses (Perry, Reference Perry2006; Woods, Reference Woods2007). Anthropologists have argued that delaying personhood can be functional, helping to limit caregiver attachment and making it somewhat easier to deal with the loss of the child (de Vries, Reference De Vries and Super1987a; Eibl-Eibesfeldt, Reference Eibl-Eibesfeldt, Oliverio and Zappella1983; Hagen, Reference Hagen1999; Konner, Reference Konner2010; Laes, Reference Laes2011; Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015). We are not aware of quantitative tests of this hypothesis, though interestingly, Canadian adults rate babies as increasingly “cute” from birth to 6 months of age, which is also when babies become better at surviving illnesses (Franklin et al., Reference Franklin, Volk and Wong2018). There is, however, a body of work examining how cultural frames used to interpret adverse experience shape subsequent trauma, which we discuss in section 6.
Violent conflict with noncaregivers
A second source of threat for infants and children is violent conflict with noncaregivers, inflicted by members of their own group (e.g., bullying, physical sanctions imposed by peers in response to a norm violation) and by members of other groups (e.g., cattle raids, warfare). In a survey of eight hunter–gatherer and forager-horticulturalist societies, 17% of infant and child deaths could be attributed to violence, either by caregivers (e.g., infanticide) or by noncaregivers. This percentage drops to 5% if two groups, the Ache and Hiwi, are excluded (Gurven & Kaplan, Reference Gurven and Kaplan2007).Footnote 5 Across the surveyed societies, this percentage ranged from 1.4% to 63.5%, which illustrates the diversity of the human childhood experience. Overall, violent conflict accounts for a much lower percentage of deaths than disease – as noted earlier, disease is estimated to account for 70%–80% of deaths (Volk, Reference Volk2011) – but for a higher percentage of deaths than predation (Volk & Atkinson, Reference Volk and Atkinson2013), discussed below.
Comparisons with other primates and bioarcheological evidence suggest that violent conflict has long been a part of primate life and human evolutionary history (Bribiescas, Reference Bribiescas2021; Martin & Harrod, Reference Martin and Harrod2015; Wrangham & Peterson, Reference Wrangham and Peterson1996). Though it is challenging to estimate base rates of lethal violence in past societies – in part, because not all forms of lethal violence leave a trace in the fossil record – bioarcheologists do agree there has been substantial variability in the use and types of violence across time and space (Martin & Harrod, Reference Martin and Harrod2015; Roser, Reference Roser2013). Ethnographic studies show large variation in the share of violent deaths (out of all deaths), ranging from a few percent up to 60% (for a compilation of resources, see Roser, Reference Roser2013). A cross-cultural study of 21 foraging bands suggests that 50.0% of incidents of lethal violence result from interpersonal events (i.e., homicides) and 33.8% from intergroup events (e.g., war) (Fry & Söderberg, Reference Fry and Söderberg2013). These percentages change to 63.3% and 15.2%, respectively, if one group (the Tiwi) is excluded.
Historical trends suggest that violence has declined over the course of human history, including the percentage of people who died at the hands of individuals other than their primary caregivers (Roser, Reference Roser2013). This decline has not been smooth, however, and violence rates are certainly not down to zero. In 2000, the World Health Organization estimated the median national homicide rate among countries to be 6 per 100,000 per year, and the age-adjusted homicide rate (i.e., weighted sums of age-specific rates) to be 8.8 per 100,000 per year (Krug et al., Reference Krug, Mercy, Dahlberg and Zwi2002). The differences between countries were large, and in all countries, many more people suffered from nonlethal violence. Nonetheless, both estimates are markedly lower than the triple-digit rates (per 100,000 per year) documented in some nonindustrial societies (Roser, Reference Roser2013).
The historical decline in violence encompasses many different forms, including domestic abuse (e.g., spousal beating), physical punishment (e.g., social sanctions imposed by peers in response to a norm violation; Mathew & Boyd, Reference Mathew and Boyd2014), interpersonal violence (e.g., competition over resources or mates), and intergroup conflict (e.g., raiding, ambushing, or warfare) (Fry & Söderberg, Reference Fry and Söderberg2013; Keeley, Reference Keeley1996). In all forms of violence, infants and children could be passive victims, and in some cases, children were actively encouraged to participate. For instance, historically among the hunter-horticulturalist Shuar of southeastern Ecuador, boys as young as seven were encouraged to actively participate in raids to gain war experience (Stirling, Reference Stirling1938), as children still do in some contemporary societies (Krug et al., Reference Krug, Mercy, Dahlberg and Zwi2002). Thus, across human evolution, children have likely been exposed to higher rates of violence by noncaregivers than they are in contemporary societies.
Violent conflict tends to be more common in places where resources are scarce and unpredictable (Daly & Wilson, Reference Daly and Wilson1988; Gat, Reference Gat2008; Homer-Dixon, Reference Homer-Dixon1994; Krug et al., Reference Krug, Mercy, Dahlberg and Zwi2002; Lancy, Reference Lancy2015; Nettle, Reference Nettle2015). However, this correlation is far from perfect. Some societies solve challenges posed by resource scarcity and unpredictability through peaceful systems of mutual interdependence, sharing resources within and between communities (Winterhalder, Reference Winterhalder, Dunbar and Barrett2007); others through within-group competition or raids of neighboring villages. Nonetheless, in the aggregate, scarcity and unpredictability tend to increase competition-related violence. When individuals are close to a “desperation threshold,” a level of resources below which it is highly undesirable or even fatal to fall (De Courson & Nettle, Reference De Courson and Nettle2021; Mishra et al., Reference Mishra, Barclay and Sparks2017; Stephens, Reference Stephens1981), individuals might resort to aggression to obtain vital resources (Ellis et al., Reference Ellis, Del Giudice, Dishion, Figueredo, Gray, Griskevicius, Hawley, Jacobs, James, Volk and Wilson2012; Hawley, Reference Hawley2015; Hawley et al., Reference Hawley, Little and Rodkin2007; Turnbull, Reference Turnbull1972; Volk et al., Reference Volk, Camilleri, Dane and Marini2012). When there are enough vital resources, cooperative strategies may re-emerge (Townsend et al., Reference Townsend, Aktipis, Balliet and Cronk2020). Still, in more favorable conditions, men may compete for resources that increase their chances of having multiple partners, even in hunter-gatherer societies, which tend to be more egalitarian than industrialized societies (Daly, Reference Daly2016; for a recent analysis of the emergence of institutionalized inequality, see Smith & Codding, Reference Smith and Codding2021). In sum, violent encounters have long been a part of human history, and thus human infants and children could not necessarily “expect” safe and supportive conditions.
Predation
Dangerous animals have historically posed a threat to infants and children, and continue to do so in certain contemporary societies. Although this threat is considered to be a relatively minor cause of mortality in modern humans (Volk & Atkinson, Reference Volk and Atkinson2013), there are well-documented cases of people being killed while hunting big game with simple tools (Walker, Reference Walker2001). For earlier hominids, predation was likely a more significant selection pressure (Hart & Sussman, Reference Hart and Sussman2009), just as it is for many contemporary primates (Anderson, Reference Anderson1986; Cheney et al., Reference Cheney, Seyfarth, Fischer, Beehner, Bergman, Johnson, Kitchen, Palombit, Rendall and Silk2004). Predator-caused mortality rates have been observed to be as high as 65% in chimpanzees (Boesch & Boesch-Achermann, Reference Boesch and Boesch-Achermann2000) and 40% in baboons (Bulger & Hamilton, Reference Bulger and Hamilton1987). Steep declines in group size due to predation have been recorded for most nonhuman primates that have been studied for a sufficiently long period of time (Hill & Dunbar, Reference Hill and Dunbar1998; Hart & Sussman, Reference Hart and Sussman2009). Thus, these nonhuman primate species suffer high predation rates alongside other stressors, such as infanticide (Anderson, Reference Anderson1986; Cheney et al., Reference Cheney, Seyfarth, Fischer, Beehner, Bergman, Johnson, Kitchen, Palombit, Rendall and Silk2004; Hrdy, Reference Hrdy1979, Reference Hrdy1999, Reference Hrdy2009; Hrdy et al., Reference Hrdy, Janson and Van Schaik1994). It is reasonable to conclude, then, that predation has always been part of primate life, and that the impact of predation on human survival has likely decreased over time (Volk & Atkinson, Reference Volk and Atkinson2013).
This section has focused on three significant threats: infanticide, violent conflict with noncaregivers, and predation. The first two threats are thought to be the most likely sources of morbidity and mortality, after disease, for ancestral human infants and children. The third, predation, is thought to be a relatively minor cause of morbidity and mortality among Homo sapiens, but a major cause of morbidity and mortality in the earlier stages of our lineage.Footnote 6
4. Deprivation
We have argued that, over evolutionary time, human infants and children have been exposed to higher and more variable levels of threat than those in contemporary, industrialized societies. Therefore, it is reasonable to assume that natural selection has favored phenotypic plasticity, the ability to tailor development to different conditions, over mechanisms that more narrowly “expect” safe and supportive conditions. In this section, we make a similar argument for deprivation: infants and children have been exposed to a wide range of social, cognitive, and nutritional input, including – though certainly not always – lower levels than those typical in contemporary, industrialized societies, and therefore likely have the capacity to adjust to this variation to a large extent. The difference with our analysis of threat is that while variation in all three forms of deprivation – social, cognitive, and nutritional – can be found across human societies, we only see substantive evidence for a reduction in the mean levels of nutritional deprivation across time. It is much harder to use objective benchmarks to compare levels of social and cognitive input than it is to compare nutritional input, because what counts as adequate social and cognitive inputs varies by culture. Thus we argue that the mean level of nutritional input was typically lower in the past, and that children are generally adapted to a wide range of other forms of social and cognitive input.
Social deprivation
Social deprivation refers to low levels in the quantity and quality of human interactions. We focus on contact with primary caregivers, such as the mother and father. We also include alloparents – such as siblings and grandparents – due to evidence suggesting that alloparents provide nearly half of caretaking in nonindustrialized societies (Hrdy, Reference Hrdy1999, Reference Hrdy2009; Kramer, Reference Kramer2005; Meehan & Hawks, Reference Meehan, Hawks, Otto and Keller2014).Footnote 7 Social deprivation is likely to occur when caregivers die (e.g., complications of childbirth, violent conflict), or when caregivers are alive but provide limited investment in a child (e.g., due to their own poor health or scarce resources). Perhaps in part for this reason, alloparenting in humans is more common in harsh and unpredictable environments (Lancy, Reference Lancy, Otto and Keller2014; Martin et al., Reference Martin, Ringen, Duda and Jaeggi2020; Simpson & Belsky, Reference Simpson, Belsky, Cassidy and Shaver2016). Moreover, in such environments alloparenting may have more impact on child outcomes than in safe and supportive environments (Nenko et al., Reference Nenko, Chapman, Lahdenperä, Pettay and Lummaa2021).
Alloparenting can act as a buffer against social, cognitive, and nutritional deprivation. Alloparents may provide not only material resources that improve survival (Sear & Mace, Reference Sear and Mace2008), but also cognitive and social inputs that promote the attainment of motor and social milestones (Singletary, Reference Singletary2020), often through play (Meehan & Crittenden, Reference Meehan and Crittenden2016). Despite these benefits, a cross-cultural survey found that in all of 28 populations (examined in 45 studies), the death of the mother – who is typically the primary caregiver – was associated with much higher child mortality (Sear & Mace, Reference Sear and Mace2008; see also Konner, Reference Konner2010; Strassmann, Reference Strassmann2011). Only a tiny proportion of children survived if their mothers died giving birth to them; for instance, 1.6% of Swedish children in the 19th century and 5% of Bangladeshi children in the late 1960s (this percentage had increased to 26% in the same Bangladeshi population by the 1980s). However, the catastrophic effects of the mother’s death on child outcomes depend strongly on the age of the child: they weaken or even disappear entirely after children are weaned. Nonetheless, the effects of the mother’s death on child morbidity and mortality are sometimes found even among weaned children (Ronsmans et al., Reference Ronsmans, Chowdhury, Dasgupta, Ahmed and Koblinsky2010), suggesting that at least in some cases, the children of deceased mothers experience more precarious circumstances and may suffer from reduced care more generally (Konner, Reference Konner2010; Perry, Reference Perry2021; Strassmann, Reference Strassmann2011). Thus, because maternal mortality was more common in historical than contemporary societies, and because maternal care was not always (fully) replaced by other caregivers, we may speculate that children would have been more at risk of social and other forms of deprivation in the past. Moreover, because the degree of alloparental care and investment varies substantially across cultures (Gibson & Mace, Reference Gibson and Mace2005; Hrdy, Reference Hrdy2009; Konner, Reference Konner2010), children may have evolved the capacity to adjust to a wide range of variation in the quantity and quality of human interactions.
Parental investment generally improves infant and child survival (Volk & Atkinson, Reference Volk and Atkinson2008, Reference Volk and Atkinson2013; section 2). In mammals, offspring especially depend on their mothers for nutrition, protection, transportation, and learning (Clutton-Brock, Reference Clutton-Brock1991). Thus, the amount that a mother can or does invest in their offspring is an important determinant of whether the offspring will experience social deprivation. If a mother dies prior to weaning, the infant is also more likely to die, not only among humans but also among other mammals (Balshine, Reference Balshine, Royle, Smiseth and Kolliker2012; Hasegawa & Hiraiwa, Reference Hasegawa and Hiraiwa1980; Lahdenperä et al., Reference Lahdenperä, Mar and Lummaa2016; van Noordwijk, Reference Van Noordwijk, Mitani, Call, Kappeler, Palombit and Silk2012). When mothers have poor health during gestation and lactation, their offspring tend to have lower fitness outcomes (Altmann & Alberts, Reference Altmann and Alberts2005; Bales et al., Reference Bales, French and Dietz2002; Cameron et al., Reference Cameron, Smith, Fancy, Gerhart and White1993; Clutton-Brock et al., Reference Clutton-Brock, Major, Albon and Guinness1987; Fairbanks & McGuire, Reference Fairbanks and McGuire1995; Keech et al., Reference Keech, Bowyer, Ver Hoef, Boertje, Dale and Stephenson2000; Théoret-Gosselin et al., Reference Théoret-Gosselin, Hamel and Côté2015; Zipple et al., Reference Zipple, Altmann, Campos, Cords, Fedigan, Lawler, Lonsdorf, Perry, Pusey, Stoinski, Strier and Alberts2021). In several nonhuman primates, maternal condition affects offspring survival and reproductive success post weaning. For instance, in both chimpanzees and bonobos, the presence of mothers enhances the reproductive success of their adult sons, likely by helping them in status competition with other males for social rank (Crockford et al., Reference Crockford, Samuni, Vigilant and Wittig2020; Surbeck et al., Reference Surbeck, Mundry and Hohmann2011). Further, a recent comparative study showed that in five of seven primate species studied, offspring born in the last 4 years before a female’s death are more likely to die at a young age, possibly because her general condition tends to be worse in the last years of her life (Zipple et al., Reference Zipple, Altmann, Campos, Cords, Fedigan, Lawler, Lonsdorf, Perry, Pusey, Stoinski, Strier and Alberts2021).
We have already argued that parental investment tends to be lower in environments characterized by famine or warfare, and shaped like an inverted-U in relation to pathogen risk: maternal investment first increases from low to moderate levels of pathogens, then decreases from moderate to high levels (Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009; Quinlan, Reference Quinlan2007; section 2). In most nonhuman primates, mothers of lower social rank tend to invest less in their offspring than mothers of higher rank (Suomi, Reference Suomi, Cassidy and Shaver2016). However, these relations vary across primates. For instance, in olive baboons, mothers who experienced higher levels of adversity early in their lives tend to invest more in their offspring (spent more time nursing and carrying) than mothers who experienced lower levels of adversity (Patterson et al., Reference Patterson, Hinde, Bond, Trumble, Strum and Silk2021). So, across several taxa and throughout primate evolution, infants and children have experienced different degrees of social deprivation, both due to variation in exposure to maternal loss and the ability of living parents to invest. Our claim that adverse events occurring in past and present societies often fall within the species-typical range does not, of course, imply that all forms of adversity do. For instance, a commonly discussed example of species-atypical caregiving environments, institutionalized child rearing, is indeed likely to be an evolutionary novelty (Humphreys & Salo, Reference Humphreys and Salo2020; Tottenham, Reference Tottenham2012).
Father absence is often construed as a form of social deprivation. This view is motivated by findings showing that, at least in WEIRD societies, father absence is negatively associated with children’s socio-emotional development (e.g., increased externalizing behavior) and with lower adult mental health and educational attainment (McLanahan et al., Reference McLanahan, Tach and Schneider2013). In WEIRD societies, father absence is also associated with accelerated reproductive development and early childbearing in women (Belsky et al., Reference Belsky, Steinberg and Draper1991; Ellis, Reference Ellis2004; Mishra et al., Reference Mishra, Cooper, Tom and Kuh2009; Webster et al., Reference Webster, Graber, Gesselman, Crosier and Schember2014). There is a tendency in this literature to assume that high levels of investment from both parents are normative. This may be true for the majority of children in some societies, such as the United States, but it is not the case across cultures. For instance, father absence may be associated with limited paternal investment in societies where absence is due to death, abandonment, or divorce; however, in societies where absence is due to labor migration, it may actually be associated with high paternal investment (Draper & Harpending, Reference Draper and Harpending1982; Shenk et al., Reference Shenk, Starkweather, Kress and Alam2013).
Cultural differences explain in large part why cross-cultural research does not provide universal support for the acceleration of puberty in father-absent households (Sear et al., Reference Sear, Sheppard and Coall2019). Specifically, in societies where father absence is associated with energetic deprivation, the rate of maturation is not accelerated but delayed (Ellis, Reference Ellis2004). Puberty can only be accelerated when there are adequate energetic resources to support growth and development (Coall & Chisholm, Reference Coall and Chisholm2003; for an exception, see Painter et al., Reference Painter, Westendorp, de Rooij, Osmond, Barker and Roseboom2008). If father absence is not associated with energetic deprivation, but instead with reduced social capital and limited future prospects (due to social stigma, higher morbidity and mortality, and other factors), a preference to have children at a young age may be a “reasonable response”; that is, a response to the costs and benefits associated with living in disadvantaged conditions. This response may result from ancestral cues that were correlated with extrinsic mortality across human evolution, cultural expectations, deliberation, or a combination of these factors (Frankenhuis & Nettle, Reference Frankenhuis and Nettle2020a; Geronimus, Reference Geronimus1996; Pepper & Nettle, Reference Pepper and Nettle2017). Thus, the extent and direction of the influence of father absence on child development illustrate our larger point: we cannot assume that patterns from WEIRD societies generalize to other cultural contexts, nor can we base our assumptions about the expected environment on a small slice of humanity.
It is challenging to quantify how much variation in caregiver investment infants and children have been exposed to, in part, because caregiver investment may take different forms both across and within societies. Some researchers have argued that cultures converge in their beliefs about the ideal mother, and that these beliefs overlap with attachment theory’s notion of the sensitive mother (Mesman et al., Reference Mesman, van IJzendoorn, Behrens, Carbonell, Cárcamo, Cohen-Paraira, de la Harpe, Ekmekçi, Emmen, Heidar, Kondo-Ikemura, Mels, Mooya, Murtisari, Nóblega, Ortiz, Sagi-Schwartz, Sichimba, Soares and Zreik2016, Reference Mesman, Minter, Angnged, Cisse, Salali and Migliano2017). Others have argued that sensitive responsiveness reflects a cultural ideal of good parenting specific to WEIRD societies, where infants are viewed as emotionally expressive, entitled, and independent agents (Keller, Reference Keller2008; Keller et al., Reference Keller, Bard, Morelli, Chaudhary, Vicedo, Rosabal-Coto, Scheidecker, Murray and Gottlieb2018). In many societies, parents face severe constraints on their time and resources, which are reflected in cultural expectations, norms, and ideals about parenting (Chisholm, Reference Chisholm1996; Del Giudice, Reference Del Giudice2009; Keller, Reference Keller2008; Kramer, Reference Kramer2005; Simpson & Belsky, Reference Simpson, Belsky, Cassidy and Shaver2016), and in actual parenting practices (Bornstein et al., Reference Bornstein, Putnick, Lansford, Deater-Deckard and Bradley2015, Reference Bornstein, Putnick, Park, Suwalsky and Haynes2017). Even within WEIRD societies, child-centered parenting may not be representative of the majority (Brown et al., Reference Brown, Hawkins-Rodgers and Kapadia2008; Ganz, Reference Ganz2018). For instance, mothers with low family income or many children are less likely to describe the ideal mother as highly sensitive (Mesman et al., Reference Mesman, van IJzendoorn, Behrens, Carbonell, Cárcamo, Cohen-Paraira, de la Harpe, Ekmekçi, Emmen, Heidar, Kondo-Ikemura, Mels, Mooya, Murtisari, Nóblega, Ortiz, Sagi-Schwartz, Sichimba, Soares and Zreik2016), and behavioral studies have shown large variation in social and cognitive input within communities (Kuchirko & Tamis-LeMonda, Reference Kuchirko, Tamis-LeMonda, Henry, Votruba-Drzal and Miller2019), and between and even within families (von Stumm & Latham, Reference Von Stumm and Latham2018). In short, convergent evidence suggests that, rather than “expecting” high levels of caregiver investment in a specific form (e.g., sensitive responsiveness), infants and children may have evolved adaptations for dealing with a wide range of quantity and quality of caregiving experiences.
Cognitive deprivation
Cognitive deprivation refers to low levels in the quantity and quality of inputs that afford learning; that is, the acquisition of knowledge, abilities, or responses as a result of experience (Frankenhuis et al., Reference Frankenhuis, Panchanathan and Barto2019). In this section, we focus only on cognitive inputs (e.g., child-directed speech, active instruction) provided by caregivers. The inputs we focus on are highly valued in WEIRD societies, and are often used as indices of cognitive deprivation in such societies. As noted earlier, however, what counts as adequate social and cognitive input varies by culture. For this reason, we do not make claims about differences in the mean levels of social and cognitive inputs across history and cultures. Rather, we emphasize variation in these inputs, and how such variation may have shaped developmental adaptations. The main point of this section is thus that certain patterns of input that qualify as “deprivation” in WEIRD societies are actually normative in non-WEIRD societies (and vice versa). In those societies, children develop the ecological and social skills necessary to survive and thrive, showing that developmental mechanisms have the capacity to adjust to a wide range of cognitive inputs.
An oft-discussed form of cognitive input during development is the quality and quantity of infant- and child-directed speech produced by adults. There is considerable support for the notion that child-directed linguistic input from adults helps shape children’s language development (Cristia et al., Reference Cristia, Dupoux, Gurven and Stieglitz2019), leading to gains in outcomes such as vocabulary size (Hart & Risley, Reference Hart and Risley1995; Rowe, Reference Rowe2008). Consequently, common policy objectives for those seeking to promote child outcomes, such as reading comprehension or academic success at later ages (Chall et al., Reference Chall, Jacobs and Baldwin1990), focus on increasing opportunities for child-directed speech by promoting activities such as storybook reading at home and in the classroom (Christ & Wang, Reference Christ and Wang2011). Many of these interventions aim to close what is known as the “vocabulary gap”: the phenomenon, observed in many industrialized societies, that children raised in higher socioeconomic households have considerably larger vocabularies than those in lower socioeconomic households (Quigley, Reference Quigley2018). There is a tendency, in this framing, to view the linguistic performance of children raised in lower-income households as falling short of a standard or ideal set by their higher-income peers. There is also a tendency to focus specifically on verbal input from a single adult, usually the mother, to a single child. However, both of these assumptions may be cast into doubt when adopting a broader perspective on human development.
When examining verbal interactions between children and adults in non-WEIRD societies, researchers regularly document significantly less infant- and child-directed speech than in WEIRD societies (e.g., Bavin, Reference Bavin and Slobin1992; Heath, Reference Heath1983; Ochs & Schieffelin, Reference Ochs, Schieffelin, Shweder and LeVine1984; Pye, Reference Pye1986; Richman et al., Reference Richman, Miller and LeVine1992; Vogt et al., Reference Vogt, Mastin and Schots2015). In a recent study of child-directed speech among the Tsimane forager-horticulturalists of Bolivia, for example, researchers found that children under the age of four received less than 1 minute of one-on-one verbal input from adults per daylight hour (Cristia et al., Reference Cristia, Dupoux, Gurven and Stieglitz2019). Instead, a large portion of children’s verbal input appears to come from other children, most commonly older siblings (Barton & Tomasello, Reference Barton, Tomasello, Gallaway and Richards1994; Lieven, Reference Lieven, Gallaway and Richards1994). In a similar vein, children’s learning of number words displays considerable cross-cultural variation. Comparing the ability of children to acquire and use number words in the United States, Russia, Japan, and among the Tsimane in Bolivia, Piantadosi et al. (Reference Piantadosi, Jara-Ettinger and Gibson2014) find that while children from all societies acquire the ability to count in incremental stages as they age, this ability develops substantially later in the Tsimane than in the other populations, on the order of 2–6 years. These differences are likely driven by variation in the level of adult-directed input of number words (LeFevre et al., Reference LeFevre, Polyzoi, Skwarchuk, Fast and Sowinski2010), particularly as parent–child interaction about numbers is relatively important and valued in industrialized societies. In sum, these studies suggest that patterns such as limited one-on-one input from adults and a diversity of verbal input from other caretakers and peers are more likely to reflect the experiences of human children throughout history than the patterns we observe in contemporary, industrialized societies. Thus, the high levels of child-directed speech from one-on-one interactions with adults commonly found in the West appear to be a rather unusual and relatively recent pattern of development, and likely not one that is necessarily “expected” by a young mind. This does not imply that there are no benefits to child-directed speech or its promotion; rather, we simply claim that limited child-directed speech from adults was likely common in our evolutionary history.
One last domain of cognitive input we will cover here is the primacy of adult teaching and instruction in children’s development. When Western adults consider the word “teaching,” they may be imagining a formal school setting in which an adult is explicitly and verbally instructing a class of same-aged children. This scenario is actually a less common and more evolutionarily novel form of teaching which does not occur with the same regularity in non-WEIRD societies (e.g., Clegg et al., Reference Clegg, Wen, DeBaylo, Alcott, Keltner and Legare2021; Lancy, Reference Lancy2010; Little et al., Reference Little, Carver and Legare2016; Rogoff et al., Reference Rogoff, Mejía-Arauz, Correa-Chávez, Correa-Chávez, Mejía-Arauz and Rogoff2015). For instance, Marshall (Reference Marshall1958) notes that there is no formal instruction among !Kung hunter-gatherers; rather, most children learn through observing those who are more experienced. In many of these societies, children primarily learn by watching, listening, and attending, by taking initiative, and by contributing and collaborating in more informal learning settings (Paradise & Rogoff, Reference Paradise and Rogoff2009). Meta-ethnographic reviews of hunter-gatherer children’s learning support a similar conclusion, namely that children largely learn through a mixture of play, observation, and participation (Lew-Levy et al., Reference Lew-Levy, Reckin, Lavi, Cristóbal-Azkarate and Ellis-Davies2017). A broader definition of teaching, then, that incorporates both informal and formal instruction makes room for teaching through opportunity provisioning, teaching through evaluative feedback, teaching through local enhancement, in addition to the less common direct and active teaching model found in formal education (Kline, Reference Kline2015). Much attention in industrialized, contemporary societies is paid to the importance of this last type of instruction, with many interventions and public policies aimed at increasing it both in the home and in school, but there is much cross-cultural evidence to suggest that children’s learning can accommodate many different forms of teaching, including the often indirect forms prevalent in non-WEIRD societies. We are not suggesting that formal education is unnecessary or unhelpful for development, rather that our assumptions of the type of cognitive inputs that children “expect” to receive should incorporate the high degree of diversity found across human societies.
Nutritional deprivation
Nutritional deprivation refers to low levels in the quantity and quality of nutritional inputs. We have already argued that, historically, caregivers in many human societies applied triage, investing more in infants and children judged more likely to survive and become productive family members, and less in infants judged less likely to survive due to such factors as poor health or severe competition with siblings (Lancy, Reference Lancy, Otto and Keller2014, Reference Lancy2015; Volk & Atkinson, Reference Volk and Atkinson2013; section 3). We also explained that in some cases caregivers committed infanticide, for instance, by withholding nutrition to kill an offspring. In this section, we focus on nutritional deprivation that results not from infanticide, but rather from ecological constraints (e.g., famine) that lead to a low quantity and quality of nutritional input, independent of any active reduction in provisioning by caregivers.
It is well-established that both food scarcity (lack of nutrition) and food insecurity (unpredictable availability of nutrition) have generally posed major challenges for the human lineage, and also that levels of food scarcity and insecurity have varied across time and space. These two forms of adversity likely have deep evolutionary roots, as food scarcity and food insecurity have been documented in various species of nonhuman primates (Chapman et al., Reference Chapman, Rothman, Lambert, Mitani, Call, Kappeler, Palombit and Silk2012; Hanya & Chapman, Reference Hanya and Chapman2013; Harris et al., Reference Harris, Chapman and Monfort2010; Koomen & Herrmann, Reference Koomen and Herrmann2018). In more recent human history, there is solid evidence for the existence of food scarcity and food insecurity in both past and present industrialized and nonindustrialized societies (Ellison, Reference Ellison2001; Howell, Reference Howell2010; Kaplan & Lancaster, Reference Kaplan, Lancaster, Wachter and Bulatao2003; Prentice, Reference Prentice2005; Walker et al., Reference Walker, Gurven, Hill, Migliano, Chagnon, De Souza, Djurovic, Hames, Hurtado, Kaplan, Kramer, Oliver, Valeggia and Yamauchi2006). Moreover, despite substantial improvements in food access and security, nearly 7.5% of children are still classified as under-nourished (Baker & Anttila-Hughes, Reference Baker and Anttila-Hughes2020) and across all ages 821 million people were chronically undernourished in 2018 (Food and Agriculture Organization of the United Nations et al., 2019).
A challenging question is whether our ancestors experienced food scarcity and food insecurity only over short timescales, relative to the human lifespan (e.g., days, weeks), or over longer timescales as well (e.g., years, decades). If food scarcity regularly occurred over longer timescales, infants and children may have evolved mechanisms that use nutritional deprivation early in life as a cue to nutritional deprivation later in life (e.g., in adulthood) and tailor their development accordingly (e.g., by adjusting their metabolic profile). However, it is not known whether such “weather forecasting” is feasible for long-lived species, such as humans, if there are mainly short-term ecological fluctuations, for instance, due to seasonality (Kuzawa, Reference Kuzawa2005; Kuzawa & Quinn, Reference Kuzawa and Quinn2009), or due to high levels of climate variability during hominid evolution generally (reviewed by Antón et al., Reference Antón, Potts and Aiello2014). If food scarcity and food insecurity tended to occur on shorter timescales, it might not have been adaptive to use early nutrition to predict nutritional conditions in adulthood (Nettle et al., Reference Nettle, Frankenhuis and Rickard2013; Wells, Reference Wells2007). In such conditions, natural selection might instead favor organisms to use “internal cues” to somatic degradation (e.g., telomere erosion) – which were correlated with life expectancy across evolutionary time – to adaptively tailor long-term development (Rickard et al., Reference Rickard, Frankenhuis and Nettle2014). The statistical structure of past environments is thus a crucial piece of the puzzle in evaluating hypotheses about developmental adaptations (Frankenhuis et al., Reference Frankenhuis, Panchanathan and Barto2019). Whether such adaptations increase survival and reproduction in contemporary societies depends, of course, on the structure of current environments.
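To make the role of environmental statistics concrete, consider a minimal simulation sketch (written in Python; all parameter values are illustrative assumptions rather than empirical estimates). It models yearly food availability as a first-order autoregressive process and asks how well average conditions in early life predict average conditions in adulthood when fluctuations are fast (e.g., seasonal or yearly) versus slow (e.g., decade-scale).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_environment(n_years, phi):
    """AR(1) series of yearly food availability; phi sets how slowly conditions change."""
    env = np.zeros(n_years)
    for t in range(1, n_years):
        env[t] = phi * env[t - 1] + rng.normal(scale=np.sqrt(1 - phi**2))
    return env

def cue_validity(phi, n_lives=5000, childhood_years=5, adult_years=(15, 40)):
    """Correlation, across simulated lives, between mean conditions in childhood and in adulthood."""
    early, late = [], []
    for _ in range(n_lives):
        env = simulate_environment(adult_years[1], phi)
        early.append(env[:childhood_years].mean())
        late.append(env[adult_years[0]:adult_years[1]].mean())
    return np.corrcoef(early, late)[0, 1]

print(cue_validity(phi=0.2))   # fast fluctuations: early nutrition barely predicts adult nutrition
print(cue_validity(phi=0.98))  # slow fluctuations: early nutrition is a useful "forecast"
```

Under fast fluctuations, the correlation is close to zero, so using early nutrition to forecast adult conditions would not pay; under slow fluctuations, the correlation is substantial, and externally cued predictive development becomes more plausible. This is only a schematic rendering of the argument, not a model of any particular human environment.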
Though there is debate about the timescale of nutritional deprivation in human evolution, the prevailing view is that hunter-gatherer populations regularly experienced food shortages, but rarely suffered from famines that caused significant mortality (Prentice, Reference Prentice2005; Speakman, Reference Speakman2013; note that rare famines may still have shaped the human genome through effects on fertility; Speakman, Reference Speakman2013). The expected human childhood is thus likely to include periodic hunger, but unlikely to include famine; at least until the onset of agriculture, which occurred in some societies as early as 13,000 years ago. Agriculture appears to be a mixed blessing in this regard (Berbesque et al., Reference Berbesque, Marlowe, Shaw and Thompson2014; Diamond, Reference Diamond1993). On the one hand, agriculture enabled populations to produce an excess of staple foods, to trade foods, and to create buffers against future shortages. On the other, agriculture relies on predictable weather patterns, stable governance, and the absence of major conflict (Prentice, Reference Prentice2005). When these conditions break down, agriculture is vulnerable to famines, perhaps more so than hunter-gatherer lifestyles, characterized by living in small groups, high mobility, and an omnivorous and variable diet (Prentice, Reference Prentice2005). For instance, in contemporary egalitarian forager societies resource-sharing often (though not ubiquitously) helps buffer variation in caloric access, and its downstream consequences on children’s energetics (Boyette et al., Reference Boyette, Lew-Levy, Sarma, Valchy and Gettler2020; Meehan et al., Reference Meehan, Helfrecht and Quinlan2014). With agriculture, the rate of famines seems to have increased by an order of magnitude, from about once every 150 years, to about once every 10 years (Speakman, Reference Speakman2013). With such high rates, it is possible, but not certain, that famines over the past 13,000 years have favored the evolution of developmental adaptations for dealing with famine (Prentice, Reference Prentice2005; Speakman, Reference Speakman2013).
In short, over the course of human evolution, in both past and present societies, there has been large variation in the availability of nutrition (Ó Gráda, Reference Ó Gráda2009). In response to this variation (i.e., the expected nutritional environment), humans have evolved adaptations that tailor development based on the quantity and quality of nutrition in their environment.
5. Associations between dimensions of adversity
We have argued that, over evolutionary time, human infants and children have on average been exposed to higher levels of threat and nutritional deprivation than is typical in industrialized societies, and that because these levels were variable over time and space, natural selection has likely favored phenotypic plasticity. In this section, we explore the co-occurrence of different forms of adversities within lifetimes during human evolution. Were individuals who were exposed to higher levels of threat also exposed to higher levels of deprivation and vice versa?
What do we know about adversity co-occurrence?
In contemporary industrialized (WEIRD) societies, correlations between different forms of adversity are consistently small to moderate (Dong et al., Reference Dong, Anda, Felitti, Dube, Williamson, Thompson and Giles2004; Finkelhor et al., Reference Finkelhor, Ormrod and Turner2007; Green et al., Reference Green, McLaughlin, Berglund, Gruber, Sampson, Zaslavsky and Kessler2010; Matsumoto et al., Reference Matsumoto, Piersiak, Letterie and Humphreys2020; McLaughlin et al., Reference McLaughlin, Green, Gruber, Sampson, Zaslavsky and Kessler2012; McLaughlin et al., Reference McLaughlin, Sheridan, Humphreys, Belsky and Ellis2021; Smith & Pollak, Reference Smith and Pollak2021a), though which forms of adversity cluster together is inconsistent across studies (Jacobs et al., Reference Jacobs, Agho, Stevens and Raphael2012). The existence of correlations among forms of adversity is not surprising. For instance, receiving lower levels of parental investment implies being less protected, thus increasing vulnerability to threats (Callaghan & Tottenham, Reference Callaghan and Tottenham2016; Hanson & Nacewicz, 2021); and low-quality nutrition increases vulnerability to infectious disease (Katona & Katona-Apte, Reference Katona and Katona-Apte2008). Consistent with such dependencies are findings showing that children who experience energy sufficiency but receive low levels of parental care tend to show accelerated maturation toward more adult-like functioning in physiological and neurobiological processes related to fear and stress (Callaghan & Tottenham, Reference Callaghan and Tottenham2016; Gee et al., Reference Gee, Gabard-Durnam, Flannery, Goff, Humphreys, Telzer, Hare, Bookheimer and Tottenham2013; Gee, Reference Gee2020; Tooley et al., Reference Tooley, Bassett and Mackey2021; see also Belsky et al., Reference Belsky, Steinberg and Draper1991; Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009). Recent evidence suggests that such reprioritization may even be passed down to subsequent generations. For instance, babies of mothers who experienced neglect as children might become predisposed to detecting threat in their environment (Hendrix et al., Reference Hendrix, Dilks, McKenna, Dunlop, Corwin and Brennan2020). It is tempting to speculate that natural selection favored this developmental response – which takes one form of adversity (neglect) as input to adapt to another (threat) – because deprivation and threat were correlated in human evolution.
Nonetheless, we urge researchers to be cautious. First, a meta-analysis and systematic review shows that exposure to threat (e.g., violence) is associated with accelerated maturation in humans, whereas exposure to deprivation (e.g., neglect) is not (Colich et al., Reference Colich, Rosen, Williams and McLaughlin2020). Second, there is evidence suggesting that correlations between threat and deprivation do not generalize across primates. For instance, in a longitudinal study of wild baboons, the correlations between different forms of adversity were weak or even absent (Snyder-Mackler et al., Reference Snyder-Mackler, Burger, Gaydosh, Belsky, Noppert, Campos, Barolomucci, Yang, Aiello, O’Rand, Mullan-Harris, Shively, Alberts and Tung2020; Tung et al., Reference Tung, Archie, Altmann and Alberts2016). Third, the evidence base on correlations between different forms of adversity in both historical and contemporary non-WEIRD societies is too limited to afford confident conclusions. Fourth, because human social organization and provisioning systems are highly flexible, our species may have evolved sensitivity to a broader range of social cues than other primates (Kuzawa & Bragg, Reference Kuzawa and Bragg2012), and the correlations between such cues and forms of adversity likely varied by cultural context (see section 6).
Challenges to estimating adversity co-occurrence
There are a number of challenges to estimating the co-occurrence of adversity in human societies. The first challenge is that estimation requires individual-level data, rather than population-level data. It is one thing to estimate population statistics (e.g., infant and child mortality), and another to estimate whether individuals who have experienced one form of adversity were also more likely to experience others, because an aggregate statistic may come about in different ways (equifinality). For instance, data from the Standard Cross-Cultural Sample, a survey of 186 largely nonindustrial societies, suggest that the frequency of corporal punishment is related to a higher prevalence of violence at a societal level (Lansford & Dodge, Reference Lansford and Dodge2008). Such data show that different forms of violence co-occur at a societal level, but they do not show that individuals who experience one form of violence are also more likely to experience other forms of violence. The direction of an association in a population may even be reversed within the subgroups comprising that population (“Simpson’s paradox”; Kievit et al., Reference Kievit, Frankenhuis, Waldorp and Borsboom2013). A scenario in which one subgroup experiences threat and a different subgroup experiences deprivation might result in the same societal average as a scenario in which all individuals experience moderate levels of threat and deprivation. These scenarios, however, create different evolutionary selection pressures.
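As a toy illustration of this point (a hypothetical sketch with invented numbers, not data from any real society), the following Python snippet constructs two populations with the same mean levels of threat and deprivation but opposite patterns of individual-level co-occurrence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Scenario A: one subgroup faces mostly threat, another mostly deprivation.
threat_a = np.concatenate([rng.normal(2.0, 0.5, n // 2), rng.normal(0.5, 0.5, n // 2)])
depriv_a = np.concatenate([rng.normal(0.5, 0.5, n // 2), rng.normal(2.0, 0.5, n // 2)])

# Scenario B: everyone faces moderate levels of both, and the two rise and fall together.
shared = rng.normal(1.25, 0.5, n)
threat_b = shared + rng.normal(0.0, 0.2, n)
depriv_b = shared + rng.normal(0.0, 0.2, n)

for label, t, d in [("A", threat_a, depriv_a), ("B", threat_b, depriv_b)]:
    print(label,
          "mean threat:", round(t.mean(), 2),
          "mean deprivation:", round(d.mean(), 2),
          "individual-level correlation:", round(np.corrcoef(t, d)[0, 1], 2))
```

Both scenarios yield population means of roughly 1.25 on each dimension, yet the individual-level correlation is strongly negative in Scenario A and strongly positive in Scenario B. Aggregate statistics alone cannot distinguish between them, even though they imply different patterns of lifetime exposure and hence different selection pressures.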
A second challenge to studying adversity co-occurrence is that threat and deprivation are broad categories. In section 3, we discussed three forms of threat: infanticide, violent conflict with noncaregivers, and predation; and in section 4, three forms of deprivation: social, cognitive, and nutritional. So, there are really two questions: (1) to what extent did different forms of threat, and different forms of deprivation, co-occur with each other? and (2) to what extent did threat and deprivation co-occur with each other? For instance, in a cohort of young adult males from Metropolitan Cebu City, the Philippines, the correlation between sibling death (an index of local mortality; threat) and maternal absence and paternal instability (two indices of parental investment; deprivation) was low, but the correlation between the two indices of deprivation, paternal instability and maternal absence, was high (Gettler et al., Reference Gettler, McDade, Bragg, Feranil and Kuzawa2015). In other cases, certain forms of threat may be correlated with some forms of deprivation, across categories, but not with other forms of threat, within the threat category. Thus, different patterns of correlations between specific forms of threat and deprivation within a society might produce the same aggregate correlation between the broad constructs of threat and deprivation. We also note that aggregating estimates is complicated by: (a) different studies measuring different forms of adversity; (b) studies measuring the same form of adversity using different instruments (Pollak & Wolfe, Reference Pollak and Wolfe2020); and (c) the extent of measurement invariance in many longitudinal studies being unknown (DeJoseph et al., Reference DeJoseph, Sifre, Raver, Blair and Berry2021).
A third challenge is that the published record does not reflect a complete picture of the correlations between measures of adversity observed in empirical studies. This is not only because researchers are more likely to publish positive findings (e.g., by selectively reporting measures of adversity showing correlations with the dependent variables of interest), but also because researchers might validate measures of adversity by examining their correlations with other measures of adversity. For instance, if one particular measure of adversity shows a low or no correlation with other adversity measures, and those measures do correlate highly with each other, a researcher might infer that the uncorrelated measure has low validity in this particular population (e.g., participants misunderstood the items). We are not criticizing this nomological network approach; in fact, we think it can have merit. However, a byproduct of this validation method can be overestimation of adversity co-occurrence in the published record. A potential solution to this challenge is to report in full the correlations between all measures of adversity – assuming these measures have desirable univariate properties (e.g., no restriction of range) – before: (a) excluding measures that do not show the expected correlations with other adversity measures; or (b) creating composites of those measures that do show the expected patterns of correlations with other adversity measures.
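A minimal simulation (again with invented numbers, in Python) illustrates how this validation practice can inflate co-occurrence estimates: when a genuinely uncorrelated but valid adversity measure is dropped as “invalid,” the average inter-measure correlation entering the published record rises.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Three adversity measures that genuinely co-occur (they share a latent factor)...
latent = rng.normal(size=n)
measures = [latent + rng.normal(size=n) for _ in range(3)]
# ...plus one valid measure of an adversity that is unrelated to the others.
measures.append(rng.normal(size=n))

corr_all = np.corrcoef(measures)
mean_all = corr_all[np.triu_indices(4, k=1)].mean()
print("mean inter-measure correlation, all four measures kept:", round(mean_all, 2))

# Dropping the uncorrelated measure as a "validity screen" before reporting
# raises the average correlation that reaches the literature.
corr_kept = np.corrcoef(measures[:3])
mean_kept = corr_kept[np.triu_indices(3, k=1)].mean()
print("mean inter-measure correlation, after dropping it:", round(mean_kept, 2))
```

In this toy example, the average correlation roughly doubles (from about .25 to about .50) once the uncorrelated measure is excluded, even though that measure validly captured a real but independent form of adversity.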
To summarize: the evidentiary base for adversity co-occurrence across human history is too limited to afford strong conclusions. Future research should explore this question.
6. Developmental and clinical implications
In this section, we discuss three major developmental and clinical implications of our main claims that the mean level of adversity in our species was higher in the past than the present, and that variation in adversity across societies and individuals, not uniformity, was common across human history (Figure 1).
Recognizing adaptive responses to threat and deprivation
Ideas about the expected childhood have consequences for which responses are viewed as deficits or adaptations, and these views may affect research agendas, clinical practice, people’s self-views, and their reputations in the eyes of others.
Our claims imply that infants and children might be able to developmentally adjust to a wider range of adversities, as well as higher levels of adversity, than often assumed. Researchers may use this insight to reconsider which responses are adaptive and which are deficits, in addition to refining the criteria used to classify responses as adaptive or deficits. To refine their criteria, developmental and clinical psychologists can draw on discussions by evolutionary psychologists and anthropologists (Andrews et al., Reference Andrews, Gangestad and Matthews2002; Cosmides & Tooby, Reference Cosmides and Tooby1999; Ketelaar & Ellis, Reference Ketelaar and Ellis2000; Lewis et al., Reference Lewis2017; Nesse & Stein, Reference Nesse and Stein2012; Syme & Hagen, Reference Syme and Hagen2020; Wakefield, Reference Wakefield1999). For instance, as noted in section 2, it is a misunderstanding that developmental adaptations should only generate benefits. There being costs to responses does not disqualify them as adaptive, as long as the developmental response produces a positive contribution to lifetime reproductive success on average (Del Giudice, Reference Del Giudice2018; Ellis & Del Giudice, Reference Ellis and Del Giudice2014, Reference Ellis and Del Giudice2019).
We have argued that infants and children are likely equipped with phenotypic plasticity for dealing with certain forms of adversity. As noted in section 1, organisms can respond to experiences within the species-typical range either with expectant or dependent plasticity (McLaughlin & Gabard-Durnam, Reference McLaughlin and Gabard-Durnam2021). However, plasticity in response to species-typical experience can take other forms as well, especially when considering organisms across diverse taxa (Barrett, Reference Barrett2015; Frankenhuis & Nettle, Reference Frankenhuis and Nettle2020a; Frankenhuis & Walasek, Reference Frankenhuis and Walasek2020). Take multiple sex reversals in fish. This ability has some properties of expectant plasticity (e.g., a specific cue triggers a major and rapid reorganization of the phenotype) and others of dependent plasticity (e.g., reversals can occur at nearly all points in development, and even multiple times over the life course in sequentially hermaphroditic fish). Still other properties do not fit either type of plasticity (for examples, see Frankenhuis & Nettle, Reference Frankenhuis and Nettle2020a). Generally, the features of plasticity depend on the specific nature of the adaptive problem, including but not limited to: the rate of environmental change relative to generation time, the extent to which organisms can learn about environmental conditions, the fitness payoffs to different degrees of phenotype-environment match, the costs of building, maintaining, and running the systems supporting plasticity, the preexisting structures and processes in a species (e.g., genes, gene regulatory mechanisms), and other factors (e.g., population size). As Barrett (Reference Barrett2015) quipped, the first law of adaptationism is: it depends.
Further, in studying adaptive developmental plasticity, it is key to distinguish between developmental processes and outcomes. For instance, the Hidden Talents program focuses on abilities that are enhanced by adversity (Ellis et al., Reference Ellis, Abrams, Masten, Tottenham, Sternberg and Frankenhuis2020; Frankenhuis & de Weerth, Reference Frankenhuis and de Weerth2013). If Jim is exposed to adversity and John is not, Jim might perform better on a task measuring a relevant ability (e.g., memory of threats) compared with John. However, this is not always the case. It depends on how impairment and adaptation processes “jointly” affect performance (Frankenhuis et al., Reference Frankenhuis, Young and Ellis2020). For instance, John might perform better on two tasks (e.g., memory of threats and memory of abstract geometric shapes) than Jim, who has suffered impairment, but on one task Jim nearly closes the performance gap (e.g., memory of threats), because this task measures an ability that is enhanced through adversity in Jim. Thus, to understand interacting processes, we need research designs that compare not only performance across individuals, but also different abilities within the same person (enhanced vs. nonenhanced abilities). Within-between designs allow developmental adaptation (process) to manifest in performance (outcome), even if impairment has also occurred and affected performance.
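To make this concrete, here is a small numerical sketch (in Python; the effect sizes are invented purely for illustration) in which adversity impairs performance on both tasks but also enhances threat-relevant memory, so the adaptation is visible only in the within-person contrast.

```python
# Hypothetical task scores in arbitrary units; all numbers are illustrative only.
baseline = {"threat_memory": 10.0, "shape_memory": 10.0}

impairment = 3.0   # adversity lowers performance on both tasks
enhancement = 2.5  # adversity additionally sharpens memory for threat-relevant content

john = dict(baseline)  # John: no adversity exposure
jim = {                # Jim: exposed to adversity
    "threat_memory": baseline["threat_memory"] - impairment + enhancement,  # 9.5
    "shape_memory": baseline["shape_memory"] - impairment,                  # 7.0
}

# Between-person comparison: Jim scores below John on both tasks,
# so the enhancement is invisible if we only compare individuals.
print("Jim:", jim, "John:", john)

# Within-person comparison: Jim's threat memory exceeds his shape memory by 2.5 points,
# whereas John shows no gap; this asymmetry is the signature of an enhanced ability.
print("Jim's within-person gap:", jim["threat_memory"] - jim["shape_memory"])
print("John's within-person gap:", john["threat_memory"] - john["shape_memory"])
```

The sketch shows why within-between designs matter: the between-person contrast alone would be read as global impairment, whereas the within-person contrast reveals the relative enhancement of the threat-relevant ability.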
Understanding and learning from cultural variation
Human cultures and norms can vary dramatically across contexts. As such, we believe developmental science would benefit from a greater acknowledgment and integration of the cultural contexts in which development occurs (Amir & McAuliffe, Reference Amir and McAuliffe2020). We argue that future work in the field should focus on a generalizable definition of childhood adversity that can be broadly applied across different cultures, on more specialized definitions of adversity nested within specific cultural contexts and accompanied by a “constraints on generality” statement (Simons et al., Reference Simons, Shoda and Lindsay2017), or on both. Such a statement makes explicit to which human populations or cultural contexts ideas and findings apply and opposes the implicit assumption that findings are necessarily generalizable to humans as a species. This practice is important to normalize, because claims of universality (e.g., children who receive little child-directed speech are deprived) may inadvertently derogate people in cultures that have other norms (e.g., in which child-directed speech is rare). In cases where WEIRD norms are the exception in the global distribution of norms, this means (inadvertently) derogating more than half of humanity. In other words, our current conceptions of the “ideal” caregiving environment may be neither culturally nor phylogenetically sensitive (Ganz, Reference Ganz2018; Humphreys & Salo, Reference Humphreys and Salo2020). Assuming generalization from WEIRD populations to all populations may also lead to arguably incorrect conclusions, for instance, that complex language input is necessarily required for the development of executive function skills (McLaughlin, Reference McLaughlin2016).
Greater attention to cultural diversity and variation is also important when considering how adversity is experienced, processed, and culturally understood. Cognitive culture theory may be helpful in these endeavors (Dressler et al., Reference Dressler, Balieiro and Dos Santos2018). In this framework, culture is conceptualized as cognitive models of life that are constructed and shared among members of a social group. Individuals within the group may have differing degrees of cultural competence – the degree to which their own representations align with these shared models – and varying degrees of cultural consonance – the degree to which their own experiences align with these models (Dressler, Reference Dressler2012). Techniques to measure cultural consonance exist and have been used with good reliability and validity across differing cultural contexts (Dressler et al., Reference Dressler, Borges, Balieiro and Dos Santos2005). So, for instance, some societies may have a shared cultural model of parenting that expects maternal presence but does not apply the same expectations to fathers. In these communities, if a child is raised largely by their mother, these experiences may be viewed as consonant with cultural expectations, and paternal absence may not be viewed as deprivation or a form of adversity. Indeed, levels of paternal investment vary substantially across environments, with male provisioning viewed as more desirable in ecologies where it is more difficult for women to obtain resources themselves (Marlowe, Reference Marlowe2003). Further, as discussed earlier, the extent and direction of the influence of father absence on child development vary across societies, depending on its association with energetic deprivation, suggesting that cultural context is crucial for understanding how this experience can influence child development (Sear et al., Reference Sear, Sheppard and Coall2019; Shenk et al., Reference Shenk, Starkweather, Kress and Alam2013). These patterns align with the broader argument that the frequency and meaning of experiences can vary dramatically across societies and should be taken into account when determining whether an experience is adverse.
The contextualization of experiences within shared cultural models, in addition to the diverse ways in which adverse experiences are culturally processed, can have consequences for people’s own self-views and for how adverse experiences are framed, understood, and treated in clinical settings.Footnote 8 People interpret their experiences through a complex web of cultural customs, attitudes, and beliefs. Consistent with this perspective is research showing that perceived, rather than objective (i.e., actual), experience of childhood adversity is associated with well-being and psychopathology (e.g., Danese & Widom, Reference Danese and Widom2020; Smith & Pollak, Reference Smith and Pollak2021b), potentially in a causal manner (Baldwin & Degli Esposti, Reference Baldwin and Degli Esposti2021). Ignoring how experiences can vary across different cultural contexts may lead to ineffective policy and interventions. For instance, marriage education workshops based on studies of predominantly white and middle-class couples failed to improve outcomes among working class couples of color, who tended to view other concerns, such as paying the rent or keeping their children safe, as more deserving of their attention (Johnson, Reference Johnson2012; Loeterman & Kotlowitz, Reference Loeterman and Kotlowitz2002).
Conversely, sensitivity to cultural variation can provide important insights into the ways in which adversity is socially constructed and processed. In a striking example of the role cultural practices can play in shaping lived experiences, Zefferman and Mathew (Reference Zefferman and Mathew2020) explore how trauma associated with warfare can vary between U.S. combat veterans and Turkana pastoralists. Their field interviews with Turkana pastoralists suggest that cultural practices, such as rituals of healing, support, and endorsement of warriors who have killed in battle help reassure the warrior that their actions were morally justified and can potentially protect against the negative psychological effects of moral injury that combatants may experience. Though these warriors do suffer from high rates of symptoms associated with protecting against danger, such as flashbacks and startle responses, they are less likely than American service members with similar PTSD severity to experience symptoms associated with moral violations, such as low mood and depression (Zefferman & Mathew, Reference Zefferman and Mathew2021). In sum, we argue that culturally sensitive approaches to the study of adversity and development which acknowledge societal variation are integral to the future of the field.
Reconsidering the definitions of adversity and deprivation
A common approach in developmental and clinical psychology is to define “childhood adversity” in relation to the “expected” human childhood environment (Fox et al., Reference Fox, Levitt and Nelson2010; Gabard-Durnam & McLaughlin, Reference Gabard-Durnam and McLaughlin2019; Humphreys & Zeanah, Reference Humphreys and Zeanah2015; McLaughlin & Sheridan, Reference McLaughlin and Sheridan2016; McLaughlin, Reference McLaughlin2016; McLaughlin et al., Reference McLaughlin, Sheridan and Nelson2017; McLaughlin et al., Reference McLaughlin, Weissman and Bitrán2019; Nelson, Reference Nelson2007; Nelson & Gabard-Durnam, Reference Nelson and Gabard-Durnam2020; Sheridan & McLaughlin, Reference Sheridan and McLaughlin2014; Wismer Fries et al., Reference Wismer Fries, Ziegler, Kurian, Jacoris and Pollak2005). If one defines childhood adversity in terms of deviation from an “expected environment,” then what one takes the expected environment to be determines which experiences qualify as “adverse.” This holds irrespective of whether experiences are treated as binary (e.g., neglected vs. not neglected) or continuous, as univariate or multivariate (e.g., distinguishing between emotional and cognitive neglect), and so on (King et al., Reference King, Humphreys and Gotlib2019). We have argued in the sections above that the expected environment has regularly included what are typically defined as adverse experiences. For instance, infanticide is an expected experience for many species of primates, but it is also an adverse experience for the infant and its mother. Thus, experiences can be both species-expected and adverse. We think it is problematic to deny such experiences the label “adverse” just because they occurred with some regularity across human evolution.
Adopting a different definition of adversity could leave frameworks that have defined this concept in terms of the expected environment largely intact and even strengthen them. These approaches could still define the expected (or expectable) environment as “a wide range of species-typical environmental inputs that the human brain requires to develop normally” (McLaughlin, Reference McLaughlin2016, p. 363). They could also maintain that “experience-expectant mechanisms utilize environmental information that has been common to all members of a species across evolutionary history” (Galván, Reference Galván2010, p. 880), a concept referred to as the “phylogenetic norm” (Galván, Reference Galván2010). However, these frameworks would benefit from revising a number of components. First, we should reconsider the definition of childhood adversity as “negative environmental experiences that are likely to require significant adaptation by an average child and that represent a deviation from the expectable environment” (McLaughlin et al., Reference McLaughlin, Weissman and Bitrán2019, p. 279), and its implication that “environmental circumstances or stressors that do not represent deviations from the expectable environment should not be classified as childhood adversity” (McLaughlin, Reference McLaughlin2016, p. 364). Second, we should revise the associated claim that “adversity is not itself an expectable experience that the brain prepares for” (Nelson & Gabard-Durnam, Reference Nelson and Gabard-Durnam2020, p. 137). The realization that threat and deprivation are part of the species-expected range might help to accommodate and recontextualize findings in the literature. For instance, although stressful events increase the probability of negative physical and mental health outcomes, most people who experience stressful events do not develop psychopathology, with the caveat that specific estimates of “rates of resilience” vary substantially depending on statistical model specifications (Infurna & Luthar, Reference Infurna and Luthar2016). This is true both for normative stressful events that happen to most people, such as losing a valued relationship, and for less common traumatic events, such as experiencing physical abuse (Bonanno et al., Reference Bonanno, Westphal and Mancini2011; Cohen et al., Reference Cohen, Murphy and Prather2019).
As noted earlier, our claim that adverse events occurring in past and present societies often fall within the species-typical range does not, of course, imply that all forms of adversity do. We provided institutionalized child rearing as a likely example of an evolutionary novelty (Humphreys & Salo, Reference Humphreys and Salo2020; Tottenham, Reference Tottenham2012). WEIRD societies also include standard parenting practices that likely fall outside the species-typical range, which may not be considered adverse by most people in WEIRD countries, but which are evaluated more negatively by people in non-WEIRD countries, such as caregivers sleeping apart from their babies and sleep training their babies by leaving them on their own to “cry it out” (Mileva-Seitz et al., Reference Mileva-Seitz, Bakermans-Kranenburg, Battaini and Luijk2017). However, the fact that certain forms of adversity likely fall within the species-typical range invites us to reconsider definitions of deprivation as “the absence of species- or age-expectant environmental inputs, specifically a lack of expected cognitive and social inputs” (Sheridan & McLaughlin, Reference Sheridan and McLaughlin2014, p. 581). We have deliberately used a definition that is similar to this definition – namely deprivation as low levels of social, cognitive, and nutritional inputs – but we have omitted the word “expected.” By omitting this word, our definition is in need of a different benchmark against which to compare “lack of inputs.” Future work should endeavor to create a definition that takes these concerns into account.
7. Limitations and future directions
We now turn to five limitations of our analysis. The first two concern limitations of the available data, the third and fourth concern limitations of our scope, and the fifth concerns our approach to synthesizing data.
Limitations
First, there are challenges to drawing inferences about historical populations from archeological data, and these challenges are often exacerbated for infants and children, who are underrepresented in burial remains, death records, and written life histories (Konigsberg & Frankenberg, Reference Konigsberg and Frankenberg1994; Lewis & Gowland, Reference Lewis and Gowland2007; Perry, Reference Perry2006; Rawson, Reference Rawson2003; Trinkaus, Reference Trinkaus1995; Volk & Atkinson, Reference Volk and Atkinson2013; Walker et al., Reference Walker, Johnson and Lambert1988; Woods, Reference Woods2007). The task of archeologists is like that of detectives, who piece together puzzles of the past based on limited evidence. In many cases, not all uncertainty will be resolved. It would also be a mistake to infer from some degree of uncertainty that different hypotheses are equally likely. Archeologists triangulate across different types of evidence and different data sets to draw nuanced conclusions, and make predictions that are subsequently tested on new data. Through this iterative process, some hypotheses receive more support and others less. We believe the literature supports our claims, but would certainly welcome any evidence we have overlooked or different interpretations of the same evidence. Our overarching recommendation is to engage with evidence from history, archeology, and primatology, rather than assume features of the expected human childhood.
Second, we have used data from contemporary hunter-gatherer societies to inform estimates about historical populations, because for roughly 95% of our species’ evolutionary history, children were likely born into a hunter-gatherer society. However, such inferences should be qualified by the fact that there are important differences between historical and contemporary hunter-gatherer societies (Kelly, Reference Kelly2013; Page & French, Reference Page and French2020). First, some contemporary hunter-gatherer societies have experienced devastating consequences from coming into contact with Western populations, such as catastrophic disease and resource deprivation (Diamond, Reference Diamond2013; Hill & Hurtado, Reference Hill and Hurtado1996). Second, there is debate over whether the lives of contemporary hunter-gatherers are indeed harsher (i.e., marked by higher mortality rates) than those of historical populations, as some contemporary hunter-gatherers have been pushed into marginal environments by the agriculturalists who displaced them (for different viewpoints, see Barrett, Reference Barrett2021; Bigelow, Reference Bigelow, Nettleship, Givens and Nettleship1975; Cunningham et al., Reference Cunningham, Worthington, Venkataraman and Wrangham2019; Lee & DeVore, Reference Lee and DeVore1968; Marlowe, Reference Marlowe2005; Page & French, Reference Page and French2020; Porter & Marlowe, Reference Porter and Marlowe2007; Silberbauer, Reference Silberbauer1981); though similar mortality rates have been documented in at least one historical hunter-gatherer society that lived in a resource-rich environment (Johnston & Snow, Reference Johnston and Snow1961; Volk & Atkinson, Reference Volk and Atkinson2013). Thus, we should not simply assume that statistics (e.g., mortality rates) from contemporary hunter-gatherer societies automatically generalize to hunter-gatherer societies of the past. In addition, there is substantial variation across contemporary hunter-gatherer societies and groups, depending on factors such as climate, technology, and social structure (Kelly, Reference Kelly2013). Yet, because the statistics we have reported reflect a larger number of these societies, they are likely more representative than estimates about historical populations.
Third, we have restricted our scope to discussing findings, not methods. Specifically, we have not discussed which sources of evidence (e.g., skeletal remains) are used to infer features of past and present populations and their environments (e.g., infant and child mortality rates), or how these inferences are made. For such information, we refer readers to the following resources (Frei et al., Reference Frei, Mannering, Kristiansen, Allentoft, Wilson, Skals, Tridico, Nosch, Willerslev, Clarke and Frei2015; Halcrow et al., Reference Halcrow, Warren, Kushnick and Nowell2020; Lewis, Reference Lewis2017; Muthukrishna et al., Reference Muthukrishna, Henrich and Slingerland2021; Page & French, Reference Page and French2020; Walker, Reference Walker2001).
Fourth, we have also restricted our scope to discussing the species-typical range of adversity for humans, rather than the adaptations that evolved in such environments. This topic merits its own analyses (for overviews, see Del Giudice et al., Reference Del Giudice, Gangestad, Kaplan and Buss2015; Ellis et al., Reference Ellis, Figueredo, Brumbach and Schlomer2009; Kaplan & Lancaster, Reference Kaplan, Lancaster, Wachter and Bulatao2003; Sear, Reference Sear2020). However, both what theory predicts and how we interpret empirical observations depend on an accurate picture of the expected childhood.
Finally, our paper does not present a systematic review or meta-analysis based on predefined search terms, inclusion criteria, and statistical plans. It is therefore possible that we have (inadvertently) reported a nonrepresentative selection of evidence that matches our preexisting beliefs about the expected human childhood. That said, our analysis is far from arbitrary. It draws on systematic reviews, meta-analyses, and large-scale cross-cultural datasets (e.g., the Standard Cross-Cultural Sample) that are regarded as authoritative in the field.
Future directions
Over the past decade, the notion of the expected childhood environment has received more attention in developmental and clinical psychology. We support this progress, but are concerned that this notion has been untethered from, rather than anchored in, evidence from other disciplines, including history, anthropology, and primatology. This special issue represents an opportunity for psychologists to take a productive turn by connecting with this work, and contributing to an interdisciplinary science that advances understanding of human childhood, both past and present, in all its richness and diversity. This turn could start by removing the term “expected” from the definitions of “adversity,” and by taking stock of the information that allied disciplines have collected and integrating it into a picture of the expected human childhood.
Acknowledgments
We thank Hend Eltanamly, Irene Godoy, Michael Gurven, Kathryn Humphreys, Sheina Lew-Levy, Dieter Lukas, Katie McLaughlin, Abigail Page, Max Roser, Rebecca Sear, Margaret Sheridan, Tony Volk, Ethan Young, and an anonymous reviewer, for providing outstanding feedback that improved this manuscript. We also thank Esther Weijman for help with formatting.
Funding statement
WEF’s research has been supported by the Dutch Research Council (V1.Vidi.195.130) and the James S. McDonnell Foundation (https://doi.org/10.37717/220020502). DA’s research has been supported by the John Templeton Foundation (61138).
Conflicts of interest
The first author (WEF) is a close collaborator of one of the editors of the special issue (BJE) to which this paper has been submitted.