Introduction
The lifetime of a communicative civilization, L, plays a critical role in the Drake Equation (Drake 1965; Ćirković 2004; Maccone 2010; Glade et al. 2012). Little is known about the possible range that this value can take (Burchell 2006). Our own limited temporal existence provides a basis to estimate that L most likely takes a value at least as great as modern civilization's age thus far. Pessimists might suggest that the history of past human civilizations indicates that L will be brief, no greater than a few hundred years (Shermer 2002). Optimists could equally argue that we will soon pass a critical juncture, after which comparable civilizations could ultimately enjoy long lifetimes, perhaps even billions of years (Grinspoon 2004).
Although Drake cast L as the communicative lifetime, modern SETI has evolved to include both deliberate and unintentional signatures of technology – ‘technosignatures’ (Wright 2017). We go further by relaxing the assumption that the technosignature need originate from what we would recognize as a ‘civilization’ – the source is an intelligence of some kind (e.g. an artificial intelligence) which is capable of producing detectable technological signatures. In what follows, we consider L as representing the lifetime over which technosignatures from this intelligence manifest.
One basic question concerning this hypothetical intelligence is – what would first contact look like? This has been the playground of science fiction writers for generations, and clearly this question has existential consequences for our way of life. Although we have no information about other intelligences yet, it is not unreasonable to assume that the nature of this contact will depend considerably upon the relative technological capabilities of this newfound entity. Humanity would surely treat communication with a comparably developed civilization in quite a different manner from one with far greater technological capabilities. The longer an intelligence lives (i.e. the greater L), the greater its opportunity for technological development. Accordingly, the probability distribution of the lifetime of detected intelligences will be of central importance to our decisions regarding contact.
We note that it is of course possible for artefacts from an intelligence (or indeed a civilization) to persist far longer than the entity itself. This is the domain of so-called artefact SETI, and there is particular interest in applying it to objects within this Solar System (Freitas 1983; Wright 2018; Lacki 2019), for example. However, the detection of an artefact from a now-extinct intelligence presents no opportunity for direct communication or interaction (even if this is unclear from the initial detection).
In this work, we therefore ask – what is the likely age of a detected and extant intelligence? Certainly, speculation on this topic exists elsewhere. Carl Sagan famously wrote that civilizations were unlikely to be in technological lockstep with us (Sagan 1994) and thus would be either far less advanced or far more advanced. Since the less advanced ones would go undetected, this simple argument suggests contact would be with an older intelligence. Similarly, Stephen Hawking warned that contact would likely be with a more advanced, and thus potentially dangerous, entity. In what follows, we attempt to formalize the logic behind this problem and establish some statistical results for L using a simple but plausible analytic model.
A model for technosignature lifetime
Exponential distribution for L
At its core, we are asking a statistical question – what is the likely age of a detected intelligence? The first requirement for making progress is to assign a probability distribution for L. The simplest lifetime model we can posit is an exponential distribution (Lawless 2011). We do not claim that this is necessarily the true distribution, and encourage the reader to treat it as an approximate-yet-instructive model for making analytic progress. Further discussion of the suitability of this model is offered in the Discussion.
With such a model, amongst the ensemble of all intelligences that will ever arise, there would be a large number of short-lived intelligences (potentially such as ourselves) and a much smaller number of long-lived counterparts. On this basis, one might naively posit that communication with another intelligence would surely be with one of the more abundant short-lived intelligences. We proceed by first writing down the probability density function of L given our exponential distribution assumption:

$${\rm Pr}(L \,\vert\, \tau) = \tau^{-1}\,{\rm e}^{-L/\tau}, \quad (1)$$
where τ is the mean lifetime of the distribution. The exponential distribution assumes that the so-called hazard function is constant over time – much like a decaying atomic nucleus. Certainly, more sophisticated lifetime formulae have been suggested for species survival. For example, the Weibull distribution is a commonly used generalization of the exponential that enables a time-dependent (specifically a power-law) hazard function (Lawless 2011), but comes at the expense of an extra unknown parameter.
We note that a power-law distribution has also been adopted in ecology studies (Pigolotti et al. 2005), but we found it to exhibit several disadvantages relative to the exponential. First, it cannot be normalized over semi-infinite support and thus requires truncation, parameterized by some additional bounding parameter – either a minimum lifetime or a maximum. Since no clear minimum exists, bounding at the maximum leads to a function which is only monotonically decreasing for indices between 0 and 1. This makes for an overly restrictive distribution compared to the exponential, and for these reasons it is not used in what follows.
An exponential distribution, with its constant hazard function of 1/τ, could be criticized as being unrealistic, since a longer-lived species has presumably developed successful traits that improve its odds of future survival (Shimada et al. 2003). On the other hand, as technology advances, so too does an intelligence's capacity for self-destruction (Cooper 2013). If we consider the observed distribution of the life spans of biological families obtained from Benton (1993) on the basis of fossil evidence (see Fig. 1), the exponential distribution appears quite capable of describing the overall pattern out to 400 million years. Of course, intelligences producing technosignatures cannot be assumed to necessarily follow the same distribution as these fossils, although this gives confidence that the model is at least plausible. Although not a representative nor unbiased sample, we note that the approximate lifetimes of past human civilizations are also well described by an exponential distribution, with τ = 336 years. In the absence of any other information, we invoke Ockham's razor in that the simplest viable model is the presently favoured one. Accordingly, we will adopt the exponential distribution in what follows.
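For readers who wish to experiment with this model numerically, the following Python sketch (the lifetimes listed are hypothetical, purely illustrative numbers – not the Benton 1993 data) evaluates the density, survival and hazard functions of equation (1) and illustrates that, for a complete sample, the maximum-likelihood estimate of τ is simply the sample mean.

import numpy as np

def exp_pdf(L, tau):
    """Exponential lifetime density, Pr(L | tau) = exp(-L / tau) / tau (equation 1)."""
    return np.exp(-L / tau) / tau

def exp_survival(L, tau):
    """Survival function, Pr(lifetime > L | tau)."""
    return np.exp(-L / tau)

def exp_hazard(L, tau):
    """Hazard function (pdf / survival) -- constant, 1 / tau, for the exponential."""
    return exp_pdf(L, tau) / exp_survival(L, tau)

# Hypothetical civilization lifetimes in years (illustrative numbers only).
lifetimes = np.array([150.0, 400.0, 90.0, 1200.0, 300.0, 520.0])

# For a complete (uncensored) sample, the maximum-likelihood estimate of tau
# is simply the sample mean.
tau_mle = lifetimes.mean()
print("MLE of tau:", tau_mle)
print("hazard at L = 100 and L = 1000:",
      exp_hazard(100.0, tau_mle), exp_hazard(1000.0, tau_mle))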
Inferring the a-posteriori distribution of τ
Although we have a functional form for the probability distribution of L, it is governed by a shape parameter, τ (the mean lifetime), which also needs to be assigned. This would typically be handled through statistical inference. For example, if we had N known examples of intelligences with lifetimes $\mathbf{L} = \{L_1, L_2, \ldots, L_N\}^{\rm T}$ (analogous to the data presented in Fig. 1), then we could write that the likelihood of measuring these values for a mean lifetime equal to τ would be

$${\rm Pr}(\mathbf{L} \,\vert\, \tau) = \prod_{i=1}^{N} \tau^{-1}\,{\rm e}^{-L_i/\tau}, \quad (2)$$
where one can see that the above is a straightforward extension of equation (1). Conventionally, one would then apply Bayes’ theorem to constrain/measure τ using ${\rm Pr}(\tau \,\vert\, \mathbf{L}) \propto {\rm Pr}(\mathbf{L} \,\vert\, \tau)\,{\rm Pr}(\tau)$.
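As an illustrative sketch of how such an inference would proceed if a complete sample of lifetimes were in hand (it is not – the values below are hypothetical), one may simply evaluate the posterior on a grid of τ:

import numpy as np

def log_likelihood(tau, L):
    """log Pr(L | tau) for N independent exponential lifetimes (equation 2)."""
    L = np.asarray(L)
    return -L.size * np.log(tau) - L.sum() / tau

# Hypothetical complete sample of lifetimes (arbitrary units, purely illustrative).
L_sample = [0.4, 1.3, 0.7, 2.9, 0.2]

# Grid-based application of Bayes' theorem, Pr(tau | L) propto Pr(L | tau) Pr(tau).
tau_grid = np.linspace(0.01, 20.0, 4000)
dtau = tau_grid[1] - tau_grid[0]
prior = tau_grid**-0.5                                  # the scale-insensitive prior adopted below
post = np.exp(log_likelihood(tau_grid, L_sample)) * prior
post /= post.sum() * dtau                               # numerical normalization

print("posterior mean of tau:", np.sum(tau_grid * post) * dtau)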
Unfortunately, we do not have a sample of $L_i$ values, and thus our likelihood function will certainly not be as constraining as this. Rather, we know of only N = 1 intelligence – ourselves. However, the problem is even worse than this, because we do not even know $L_1$ for this one datum. Human civilization has been producing a technosignature for an age of $A_{\oplus}$ years, and the lifetime of this intelligence must at least exceed this value (i.e. $L_{\oplus} \geq A_{\oplus}$). We emphasize that it is somewhat unclear what numerical value to assign to $A_{\oplus}$ at this point. Although we have been transmitting radio signals for $\sim 10^2$ years, one might argue that an advanced civilization could remotely detect our settlements (Kuhn and Berdyugina 2015) and polluted atmosphere (Schneider et al. 2010; Lin et al. 2014) as unintentional technosignatures, which could increase $A_{\oplus}$. Regardless, we will proceed symbolically for the moment.
The likelihood of observing one civilization with $L_1 > A_{\oplus}$, given that the mean lifetime is τ, is given by

$${\rm Pr}(L_1 > A_{\oplus} \,\vert\, \tau) = {\rm e}^{-A_{\oplus}/\tau}. \quad (3)$$
In order to derive an a-posteriori distribution for τ, conditioned upon the constraint that $L_1 > A_{\oplus}$, we first need to write down an a-priori distribution for τ. One is always free to choose any prior one wishes, but a strongly informative prior, such as a tight Gaussian, would naturally return a result which closely resembles the prior. In other words, one has not really learned anything and no real inference has occurred. Ideally, we wish to select a prior which is as uninformative as possible (Jaynes 1968). This is not simply a flat prior, since such priors can place insufficient weight on small values, especially when the parameter has a high dynamic range. Instead, we can define an objective Jeffreys prior, which provides a means of expressing a scale-invariant distribution via the Fisher information matrix, ${\cal I}$ (Jeffreys 1946):

$${\rm Pr}(\tau) \propto \sqrt{\det {\cal I}(\tau)}. \quad (4)$$
Evaluating the above, we obtain ${\rm Pr}(\tau) \propto \tau^{-1/2}$. Combining the likelihood and prior together, we obtain

$${\rm Pr}(\tau \,\vert\, L_1 > A_{\oplus}) \propto \tau^{-1/2}\,{\rm e}^{-A_{\oplus}/\tau}. \quad (5)$$
To normalize the above, one must define an upper limit on τ, for which we use the symbol τmax. At this point, it is also convenient to work in temporal units of $A_{\oplus}$ in what follows, such that any timescales used will always be in that unit. Accordingly, the posterior is

$${\rm Pr}(\tau \,\vert\, L_1 > 1) = \frac{\tau^{-1/2}\,{\rm e}^{-1/\tau}}{\int_0^{\tau_{\rm max}} \tau'^{-1/2}\,{\rm e}^{-1/\tau'}\,{\rm d}\tau'}. \quad (6)$$
We plot the posterior, with comparison to the prior and likelihood, in Fig. 2.
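The posterior is also straightforward to reproduce numerically. The short sketch below (with an arbitrary illustrative choice of τmax = 20, in units of $A_{\oplus}$) normalizes the product of prior and likelihood on a grid and recovers the modal value of τ = 2 discussed in the next subsection.

import numpy as np

TAU_MAX = 20.0                            # arbitrary illustrative upper bound (units of A_earth)
tau = np.linspace(1e-3, TAU_MAX, 200000)
dtau = tau[1] - tau[0]

unnorm = tau**-0.5 * np.exp(-1.0 / tau)   # Jeffreys-style prior times censored likelihood
post = unnorm / (unnorm.sum() * dtau)     # numerical normalization over (0, TAU_MAX]

print("mode of Pr(tau | L1 > 1):", tau[np.argmax(post)])   # lands at tau ~ 2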
Properties of the posterior
There are several useful properties of the posterior above that we highlight. First, equation (6) has a maximum at $\hat{\tau} = 2$ (the mode), irrespective of τmax, which can be demonstrated by differentiating the expression and setting the result to zero. If we set $\tau = \hat{\tau}$, then the mean lifetime of an intelligence would be twice that of our own. But it is important to remember that this is the entire lifetime of this intelligence, not its age at the time of detection, A. Assuming that the technosignature is no more or less likely to be detected at any point during its manifested lifetime, then $A \sim {\cal U}[0, L]$ (where ${\cal U}$ denotes a uniform distribution). Accordingly, if $\tau = \hat{\tau}$, then the mean age at the time of detection would be $\hat{\tau}/2 = 1$, i.e. our current age. Of course, fixing $\tau = \hat{\tau}$ does not correctly account for the broad posterior distribution of τ, but this exercise provides some intuition as to why the modal value of τ occurs at 2.
Although the mode can be solved for independently of τmax, it is somewhat limited as an interpretable summary statistic. The expectation value of a distribution provides better intuition as to the ‘typical’ value of the distribution. This can be seen by simple consideration of the exponential distribution: its mode is zero, but the average draw will be around the mean of the distribution, not zero. We may calculate the a-posteriori expectation value for τ using

$$E[\tau \,\vert\, L_1 > 1] = \int_0^{\tau_{\rm max}} \tau\,{\rm Pr}(\tau \,\vert\, L_1 > 1)\,{\rm d}\tau \equiv \mu,$$
where we define the symbol μ for this expectation value, which has a closed-form expression involving erfc[x], the complementary error function. One may show that μ ≃ τmax/3 for τmax ≫ 1. The dependency upon τmax can be understood from the fact that, although the mode of the distribution does not depend on τmax, pushing the upper limit ever higher naturally drags the tail out and thus pulls the expectation value up with it.
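As a numerical check on the μ ≃ τmax/3 behaviour, the sketch below evaluates the a-posteriori mean of τ by direct quadrature for several illustrative choices of τmax.

import numpy as np
from scipy.integrate import quad

EPS = 1e-8   # the integrand vanishes as tau -> 0, so a tiny lower limit is harmless

def mu(tau_max):
    """A-posteriori mean E[tau | L1 > 1] by direct numerical integration."""
    norm, _ = quad(lambda t: t**-0.5 * np.exp(-1.0 / t), EPS, tau_max)
    mean, _ = quad(lambda t: t**0.5 * np.exp(-1.0 / t), EPS, tau_max)
    return mean / norm

for tau_max in (1.0, 10.0, 100.0, 1000.0):
    print(tau_max, mu(tau_max), tau_max / 3.0)   # mu approaches tau_max / 3 for tau_max >> 1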
Marginalized distribution for L
Given that we have now obtained an a-posteriori distribution for τ, we need to propagate it into a posterior distribution for L. As discussed earlier, simply fixing τ to an a-posteriori summary statistic, like $\hat{\tau}$ or $E[\tau\,\vert\,L_1 > 1]$, is inadequate, as it does not propagate the (considerable) uncertainty on τ into the resulting distribution. This propagation can be conducted by marginalizing out τ (i.e. integrating over τ):

$${\rm Pr}(L \,\vert\, L_1 > 1) = \int_0^{\tau_{\rm max}} {\rm Pr}(L \,\vert\, \tau)\,{\rm Pr}(\tau \,\vert\, L_1 > 1)\,{\rm d}\tau.$$
The above represents the probability distribution for the lifetime of technosignature-producing intelligences, given the singular constraint imposed by humanity's existence. It has a maximum at L → 0, which is a property shared by the original exponential distribution used for Pr(L). We also note that the expectation value satisfies $E[L\,\vert\,L_1 > 1] = \mu$.
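This marginalization is easily verified numerically. The sketch below (τmax = 20 is again an arbitrary illustrative choice) checks that the resulting density integrates to unity and that its expectation value matches μ.

import numpy as np
from scipy.integrate import quad

TAU_MAX = 20.0                                  # illustrative choice
EPS = 1e-8                                      # integrand vanishes as tau -> 0

post = lambda t: t**-0.5 * np.exp(-1.0 / t)     # unnormalized Pr(tau | L1 > 1)
Z, _ = quad(post, EPS, TAU_MAX)

def pr_L(L):
    """Marginalized lifetime distribution Pr(L | L1 > 1)."""
    val, _ = quad(lambda t: np.exp(-L / t) / t * post(t) / Z, EPS, TAU_MAX)
    return val

# Sanity checks: the density integrates to ~1 and E[L | L1 > 1] matches mu.
norm, _ = quad(pr_L, 0.0, 20.0 * TAU_MAX)
E_L, _ = quad(lambda L: L * pr_L(L), 0.0, 20.0 * TAU_MAX)
mu, _ = quad(lambda t: t * post(t) / Z, EPS, TAU_MAX)
print("normalization:", norm, " E[L]:", E_L, " mu:", mu)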
Observationally weighting the model
Lifetime weighting
The distribution ${\rm Pr}(L\,\vert\,L_1 > 1)$ describes the probability distribution for the lifetime of intelligences producing detectable technosignatures. This is the underlying true population – but it does not represent the intelligences that we are most likely to detect. It is worth pausing to clearly distinguish between detection and contact. If and when an intelligence is detected, that detection may either be in the form of a directed attempt at communication on their behalf, or it may simply be passive detection of their technology on our behalf. Regardless, humanity's decision as to whether to send a message back – to initiate contact – will likely be somewhat dependent on the technological development and, by proxy, age (A) of said intelligence. If the technosignature itself provides little information regarding the age, we would be left with the a-priori distribution – which is the focus of this paper. Yet this distribution will not simply equal ${\rm Pr}(A\,\vert\,L_1 > 1)$, since a critical selection effect sculpts our observations, which we account for here.
The start time of these other intelligences is presumably arbitrary (except when one pushes into timescales of $\gg$ Gyr, over which time variability is expected in the rates of star formation and high-energy astrophysical phenomena, e.g. SNe, AGNs, GRBs). A start time 10 million years ago is just as a-priori likely as one 100 years ago. Thus, a longer-lived intelligence is more likely to be detected than one which is very short lived, since the requirement for contemporaneity (modulo the light cone) is clearly sensitive to how long the technosignature persists. An equivalent statement is that, at any single snapshot in time (e.g. representing our current epoch), the fraction of worlds that go on to produce long-lived intelligences may be relatively rare, but their persistence through time means that one must account for their overrepresentation amongst the extant intelligences. This is simply a product of their longevity and is independent of their activities or behaviour. The situation is analogous to the ages of trees in an old-growth forest – if we assigned a unique identity to each tree that will ever live, 1000+ year-old trees would be rare amongst the ensemble, perhaps representing just 1%, yet a visit to the forest will show them to be seemingly more common due to their longevity, comprising, for example, 10% of the extant trees.
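This longevity bias is easily demonstrated with a toy simulation (all numerical choices below are arbitrary): generate entities with exponentially distributed lifetimes and uniformly random birth epochs, then ask which are extant at a single snapshot in time. The extant subset is length-biased, with a mean lifetime roughly twice that of the full population.

import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed for reproducibility

T_WINDOW = 10000.0   # arbitrary span of history simulated
TAU = 10.0           # arbitrary mean lifetime
N = 200000

births = rng.uniform(0.0, T_WINDOW, N)
lifetimes = rng.exponential(TAU, N)
snapshot = T_WINDOW / 2.0

alive = (births <= snapshot) & (births + lifetimes > snapshot)

print("mean lifetime, full population  :", lifetimes.mean())          # ~ TAU
print("mean lifetime, extant at snapshot:", lifetimes[alive].mean())  # ~ 2 x TAU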
Accordingly, we will assume that the probability of detecting an intelligence's technosignature is proportional to its lifetime, L. The validity of this assumption is examined in the Discussion section, along with an explanation as to why distance does not affect the results presented hereafter.
This simple weighting will substantially change the picture: the long tail of rare, long-lived intelligences will experience a considerable increase in their relative probability of detection. We write that the probability distribution of L, conditioned upon both a mean lifetime, τ, and the assumption of detection, ${\cal D}$, is

$${\rm Pr}(L \,\vert\, \tau, {\cal D}) \propto L\,{\rm Pr}(L \,\vert\, \tau),$$
or, after normalization,

$${\rm Pr}(L \,\vert\, \tau, {\cal D}) = L\,\tau^{-2}\,{\rm e}^{-L/\tau}.$$
Since we have already learnt about τ from before, we can use this acquired information to express a marginalized posterior for L conditioned upon both ${\cal D}$ and the fact that $L_1 > 1$, using

$${\rm Pr}(L \,\vert\, L_1 > 1, {\cal D}) = \int_0^{\tau_{\rm max}} {\rm Pr}(L \,\vert\, \tau, {\cal D})\,{\rm Pr}(\tau \,\vert\, L_1 > 1)\,{\rm d}\tau.$$
We find that in the limit of τmax ≫ 1, this distribution peaks at L = 2. The expectation value is given by

$$E[L \,\vert\, L_1 > 1, {\cal D}] = 2\mu.$$
For comparison, without the conditional ${\cal D}$, the a-posteriori expectation value was μ, but including it doubles it. We plot the posterior ${\rm Pr}(L\,\vert\,L_1 > 1, {\cal D})$, and compare it to ${\rm Pr}(L\,\vert\,L_1 > 1)$, in Fig. 3.
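These properties can be reproduced numerically. The sketch below (using τmax = 1000 as an illustrative stand-in for τmax ≫ 1) marginalizes the detection-weighted lifetime distribution over the posterior for τ and locates its peak near L = 2.

import numpy as np

TAU_MAX = 1000.0                      # illustrative; the peak-at-2 limit emerges for tau_max >> 1
t = np.linspace(1e-3, TAU_MAX, 400000)
dt = t[1] - t[0]

post = t**-0.5 * np.exp(-1.0 / t)     # Pr(tau | L1 > 1), up to normalization
post /= post.sum() * dt

def pr_L_det(L):
    """Detection-weighted, marginalized lifetime distribution Pr(L | L1 > 1, D)."""
    return np.sum(L * np.exp(-L / t) / t**2 * post) * dt

L_grid = np.linspace(0.05, 10.0, 400)
vals = [pr_L_det(L) for L in L_grid]
print("peak of Pr(L | L1 > 1, D) near L =", L_grid[np.argmax(vals)])

# The expectation value follows by exchanging the order of integration:
# E[L | L1 > 1, D] = integral of 2 tau Pr(tau | L1 > 1) dtau = 2 mu.
print("2 mu =", 2.0 * np.sum(t * post) * dt)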
The age distribution of detected intelligences
The final step is to account for the fact that detection would not occur with an intelligence at the end of its lifetime, L, but rather at a point drawn randomly from across its lifespan. In other words, an intelligence's age at the time of detection does not equal its lifetime. If we assume that the age at the time of detection is uniformly distributed from 0 to L, then

$${\rm Pr}(A \,\vert\, L_1 > 1, {\cal D}) = \int_A^{\infty} L^{-1}\,{\rm Pr}(L \,\vert\, L_1 > 1, {\cal D})\,{\rm d}L.$$
Equipped with our final form for the a-posteriori probability distribution of the age of detected intelligences, we can deduce several basic properties. First, it is interesting to ask whether the civilization is likely to be older or younger than our own. The probability that the civilization is older is given by

$${\rm Pr}(A > 1 \,\vert\, L_1 > 1, {\cal D}) = \int_1^{\infty} {\rm Pr}(A \,\vert\, L_1 > 1, {\cal D})\,{\rm d}A.$$
A useful summary statistic for interpreting the above is the median – above which half of the cases will lie. This may be found by setting the above expression to 0.5 and numerically solving for τmax, which yields the result that if τmax > 2.6776… ≃ e, then the age of a detected intelligence will most likely exceed that of our own, i.e. ${\rm Pr}(A > 1\,\vert\,L_1 > 1, {\cal D}) > 0.5$. In other words, if this condition is true, then we are most likely to detect an intelligence older than ourselves.
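The threshold quoted above can be reproduced with a short calculation. The sketch below exploits the fact that, under the lifetime weighting and the uniform-age assumption, ${\rm Pr}(A > 1\,\vert\,\tau, {\cal D})$ reduces to ${\rm e}^{-1/\tau}$ (an intermediate step not written out above), and then root-solves for the critical τmax.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

EPS = 1e-8   # integrands vanish as tau -> 0

def prob_A_older_detected(tau_max):
    """Pr(A > 1 | L1 > 1, D): marginalize Pr(A > 1 | tau, D) = exp(-1/tau) over
    the posterior Pr(tau | L1 > 1) propto tau^(-1/2) exp(-1/tau)."""
    num, _ = quad(lambda t: t**-0.5 * np.exp(-2.0 / t), EPS, tau_max)
    den, _ = quad(lambda t: t**-0.5 * np.exp(-1.0 / t), EPS, tau_max)
    return num / den

# The critical tau_max at which a detected intelligence is equally likely to be
# older or younger than ourselves (quoted as 2.6776... ~ e in the text).
tau_crit = brentq(lambda T: prob_A_older_detected(T) - 0.5, 0.5, 50.0)
print("critical tau_max:", tau_crit)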
It is important to remember that τmax does not represent the maximum allowed lifetime of a civilization, L – rather, it is simply the maximum a-priori mean lifetime. Fundamentally, there is no obvious reason why τmax could not be many billions of years (Grinspoon 2004), and thus detection would almost always occur with an older civilization, however one defines $A_{\oplus}$.
The expectation value for the intelligence's age is simply μ, whereas we found the expectation value for their lifetime to be 2μ. Since $E[A\,\vert\,L_1 > 1] = E[L\,\vert\,L_1 > 1]/2$, we can see that the effect of including this observational bias is that the mean age of detected, and thus contacted, intelligences is twice that of the overall population – as expected.
Contact inequality
Using our results, it is instructive to compare the underlying age population, ${\rm Pr}(A\,\vert\,L_1 > 1)$, with the population which goes on to be detected, ${\rm Pr}(A\,\vert\,L_1 > 1, {\cal D})$. Recall that A is the age of the intelligence at the time of detection/contact, whereas L is the total lifetime of said intelligence. The fact that older intelligences are assumed in this work to be more likely to be detected, and thus contacted (by virtue of simply having more opportunities to do so), introduces an inequality: the rare, long-lived intelligences make a disproportionate number of contacts.
This ‘contact inequality’ can be thought of as being analogous to wealth inequality in economics. One way to quantify the degree of inequality comes from the Gini coefficient (Gini 1909), which takes the value of 1 for a maximally unequal distribution, and 0 for a fully equal one. It may be calculated for a probability density function Pr(x) using

$$G = \frac{1}{2\,E[x]} \int_0^{\infty}\!\int_0^{\infty} {\rm Pr}(x)\,{\rm Pr}(x')\,\vert x - x' \vert\,{\rm d}x\,{\rm d}x'.$$
Although we were not able to find a closed-form solution to the above using ${\rm Pr}(x) = {\rm Pr}(A\,\vert\,L_1 > 1, {\cal D})$, one may numerically integrate the expression for a specific choice of τmax.
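One convenient way to perform that numerical evaluation is via Monte Carlo, as sketched below. The sketch uses the reduction that, given τ and ${\cal D}$, the age A is exponentially distributed with mean τ (a consequence of the lifetime weighting combined with $A \sim {\cal U}[0, L]$), and adopts τmax = 9.43 in anticipation of the mediocrity-motivated choice derived below.

import numpy as np

rng = np.random.default_rng(0)
TAU_MAX = 9.43          # the mediocrity-motivated value adopted below
N = 1_000_000

# Draw tau from Pr(tau | L1 > 1) propto tau^(-1/2) exp(-1/tau) on (0, TAU_MAX].
t_grid = np.linspace(1e-4, TAU_MAX, 200000)
w = t_grid**-0.5 * np.exp(-1.0 / t_grid)
tau = rng.choice(t_grid, size=N, p=w / w.sum())

# Under the lifetime weighting followed by A ~ U[0, L], the age A given tau is
# exponentially distributed with mean tau, so we can sample it directly.
A = rng.exponential(tau)

# Standard sorted-sample estimator of the Gini coefficient.
A_sorted = np.sort(A)
i = np.arange(1, N + 1)
G = 2.0 * np.sum(i * A_sorted) / (N * A_sorted.sum()) - (N + 1.0) / N
print("Gini coefficient:", G)   # compare with G = 0.57 quoted below for this tau_max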
We argue here that a conservative choice of τmax is one which causes our current age to be the median age of the entire population of technosignature-producing intelligences. This is a form of the mediocrity principle, since we posit that humanity lives close to the centre of the age-ordered list of intelligences in the cosmos (Gott 1993; Simpson 2016). It requires us to solve for the τmax such that

$${\rm Pr}(A > 1 \,\vert\, L_1 > 1) = \tfrac{1}{2}.$$
We solved the above numerically and obtained τmax = 9.43. This also broadly accords with the astronomer's practice of adopting an order-of-magnitude increase as the upper limit on a variable. However, we suggest that this limit is somewhat conservative, since it renders million- and billion-year-old intelligences essentially non-existent, which is itself a strong assumption.
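The mediocrity condition can be solved with a standard root finder. The sketch below uses the reduction ${\rm Pr}(A > 1\,\vert\,\tau) = {\rm e}^{-1/\tau} - E_1(1/\tau)/\tau$ for the underlying (non-detection-weighted) population, where $E_1$ denotes the exponential integral – an intermediate step not written out above.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import exp1

EPS = 1e-8   # integrands vanish as tau -> 0

def prob_A_older(tau_max):
    """Pr(A > 1 | L1 > 1) for the underlying (non-detection-weighted) population.
    For a given tau, Pr(A > 1 | tau) = exp(-1/tau) - E1(1/tau) / tau."""
    g = lambda t: np.exp(-1.0 / t) - exp1(1.0 / t) / t
    w = lambda t: t**-0.5 * np.exp(-1.0 / t)          # unnormalized Pr(tau | L1 > 1)
    num, _ = quad(lambda t: g(t) * w(t), EPS, tau_max)
    den, _ = quad(w, EPS, tau_max)
    return num / den

# Mediocrity condition: our age is the median age of the underlying population.
tau_max_med = brentq(lambda T: prob_A_older(T) - 0.5, 1.0, 100.0)
print("tau_max under the mediocrity condition:", tau_max_med)   # compare with 9.43 in the text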
Nevertheless, using τmax = 9.43, we compute a Gini coefficient of 0.57. The value does not change grossly upon varying τmax. For example, setting τmax = $10^3$ increases G to 0.63, whereas decreasing it to τmax = 1 yields G = 0.52. Interestingly, we find that in the limit of τmax → 0 (which would make humanity an incredibly long-lived civilization), G → 0.5. Thus, under the assumptions of our simple model, we find that G ≥ 0.5, which is similar to the wealth inequality of many developed nations.
To visualize the inequality, we show a stacked histogram of the a-posteriori age distribution of intelligences in Fig. 4, using τmax = 9.43. Specifically, one can see the effect of the bias weighting towards longer-lived intelligences. We find that the top 1% of the oldest intelligences are over-represented in the fraction of first contacts by a factor of 4.
Discussion
In this work, we have suggested a simple model for the lifetime distribution of civilizations (or, more generally, intelligences) producing technosignatures – specifically, an exponential distribution. This choice is motivated by its monotonic, single-parameter form and by the fact that it provides a simple but effective description of the lifetimes of biological families on Earth. Amongst these hypothetical intelligences, we may plausibly detect a technosignature in the coming years, which may either take the form of direct contact or open the door for us to contact them. We have argued that, because longer-lived intelligences have simply had more time available to them, they are more likely to be detected – and thus the contacted population is weighted towards older intelligences.
Another framing of the above is that, at any given time, extant long-lived intelligences are disproportionately represented simply because they persist for longer than their short-lived counterparts.
We are able to establish that the expected age of a contacted intelligence is twice that of the ensemble, without any assumption about the maximum mean lifespan of this population. Further, we show that if the maximum mean lifespan of intelligences is any greater than ~e times our current age, then we will most likely detect an intelligence older than ourselves.
Finally, we use this simple model to show that a ‘contact inequality’ should exist, whereby the older intelligences represent a disproportionate fraction of galactic first contacts. Using this analogy, we can define a Gini coefficient to quantify the inequality, which we show must be at least 0.5 for any choice of the maximum mean intelligence lifetime.
In this discussion, we would like to highlight two points. First, in what ways might this model be invalid? And second, what are the consequences for us if this model is correct?
Validity of the employed model
First, we fully acknowledge that the exponential distribution is indeed an extremely simplistic model and may not fully describe the true distribution. Its hazard function is constant with respect to age, and it is deeply unclear whether a more advanced intelligence poses a greater risk to itself through emerging technologies (e.g. Cooper 2013) or, on the other hand, is more likely to persist owing to its track record of survival thus far. The lifespans of biological families inferred from fossil evidence show that an exponential distribution may not always be the best fit, but it does broadly capture the overall behaviour (see Shimada et al. 2003 and Fig. 1). It also satisfies the basic expectation of a monotonically decreasing, smooth function. Without any other evidence in hand, we argue that at present there is no justification for invoking a more complex model.
The assumption of lifetime-weighted contact also deserves scrutiny. In this work, we have simply assumed that the longer an intelligence lasts, the more opportunities it has to be spotted. For example, if a civilization builds a beacon which lasts for an interval L at some random point in the Universe's history, the probability that we will detect that beacon must be directly proportional to L. But of course, one could challenge this picture from the direction of either increased or decreased detectability.
For example, as an intelligence becomes more advanced, it could construct more powerful beacons, with greater range, at lower cost and in greater number (Benford et al. 2008), even sending them out between the stars to add coverage. Those are intentional contact scenarios, but even unintended technosignatures might be argued to become more detectable as intelligences advance, such as the production of Dysonian artefacts (Dyson 1960). On this basis, one might conclude that our assumption that the probability of contact is proportional to L greatly underestimates the true value. If so, then older intelligences would dominate the number of first contacts to an even more extreme degree, raising the Gini index yet higher. This does not fundamentally change our hypothesis that a contact inequality likely exists; in fact, it exacerbates the inequality.
On the other hand, one might argue that as intelligences develop, their detectability decreases. Science fiction writer Karl Schroeder captures this hypothesis in his twist on Arthur C. Clarke's famous line: ‘Any sufficiently advanced civilization is indistinguishable from nature’ (Schroeder 2003). They might also simply lose interest in communicating with far less advanced intelligences and elect to hide themselves (Smart 2012; Kipping and Teachey 2016). If their detectable presence is eliminated altogether, then they are technically no longer members of the assumed underlying population – which is specifically one that produces (potentially detectable) technosignatures. They are thus effectively extinct and do not actually affect the arguments laid out here. However, if the detectability of intelligences diminishes with age – in particular, in such a way that the time-integrated probability of detection scales as $L^{\alpha}$ with α < 0 – then this would reverse our conclusion: contact would most likely occur with the less advanced members of the population.
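To make the role of this hypothetical scaling concrete, the toy Monte Carlo below (sample sizes and α values are illustrative; α = 1 recovers the lifetime-proportional weighting assumed in this work, while α = 0 corresponds to the unweighted population) shows how the probability that a detected intelligence is older than ourselves varies with α.

import numpy as np

rng = np.random.default_rng(1)
TAU_MAX, N = 9.43, 500000

# Draw tau from Pr(tau | L1 > 1), as before.
t_grid = np.linspace(1e-4, TAU_MAX, 100000)
w = t_grid**-0.5 * np.exp(-1.0 / t_grid)
tau = rng.choice(t_grid, size=N, p=w / w.sum())

for alpha in (1.0, 0.5, 0.0, -0.5):
    # A detection probability propto L^alpha turns the exponential lifetime
    # distribution into a Gamma(alpha + 1) distribution with scale tau
    # (this requires alpha > -1).
    L = rng.gamma(alpha + 1.0, tau)
    A = rng.uniform(0.0, L)
    print("alpha =", alpha, " Pr(A > 1) ~", np.mean(A > 1.0))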
Although we certainly do not discount this possibility, extrapolation of our own behaviour does not generally favour this conclusion. Whilst radio leakage into space has been decreasing, many other aspects of humanity's detectability, projected into the future, suggest that we could still be easily found through other technosignatures. Some examples include space mining (Forgan and Elvis 2011), leakage from relativistic light sails (Guillochon and Loeb 2015), thermal heat islands (Kuhn and Berdyugina 2015), our polluted atmosphere (Schneider et al. 2010; Lin et al. 2014), geostationary satellites (Socas-Navarro 2018), geoengineering projects (Gaidos 2017), photovoltaic cells (Lingam and Loeb 2017), space weathering monitoring systems (Kipping 2019) and ever-growing energy needs (Wright et al. 2014). We thus consider that, if our own experience and future projections are in any way representative, a future decrease in our technosignature detectability would likely require a deliberate and expensive effort, which is itself unlikely to be considered a good use of resources in the absence of any evidence for other intelligences. Accordingly, we argue that such a scenario is unlikely to dominate until intelligences become much older than our own – which is essentially captured by our assumption that τmax is an order of magnitude greater than our current age.
Taken together, whilst we accept that our model is surely an oversimplification, the qualitative result that older intelligences should be overrepresented in the ensemble of detections may actually be quite robust.
A note on distance
Our detection bias model assumes that the probability of detection is proportional to an intelligence's lifetime, but the distance to that intelligence does not feature. Why not? Certainly, closer intelligences will be more likely to be detected than more distant ones, since signal strengths generally fall off as $1/d^2$. But this work concerns itself only with the lifetime distribution of detected intelligences, not their distances (which can be thought of as being marginalized over). The real question for this work is – do we expect there to be some off-diagonal covariance between the lifetime and distance of the detected population? More simply, is there any reason to suspect that the intrinsic lifetimes of detected intelligences depend upon their distance from the Earth?
As discussed in detail in the last section, one could invoke an argument that longer lived intelligences are more detectable, which would exacerbate the contact inequality result of this work. Only if detectability rapidly diminished with time would our basic conclusion change.
A separate aspect of the distance issue concerns not detectability per se, but rather the possibility that intrinsic lifetimes vary with distance. Do we expect an intelligence's lifetime to depend upon how far away from us it is? At distances of hundreds, or even thousands, of light years, the answer is no. There is nothing inherently special about where we live, and thus a civilization emerging a few hundred light years away should not have any particular reason to live longer or shorter than ourselves. Extending further afield, where effects such as galactic chemical gradients (Gonzalez et al. 2001), supernova rates (Lineweaver et al. 2004), active galactic nuclei (Balbi and Tombesi 2018; Lingam et al. 2019) and stellar encounter rates (McTier et al. 2020) may vary, would indeed require formally building a model which describes this covariance. Accordingly, the results of our work should be understood to be formally applicable only to cases where L is not expected to be intrinsically linked to location, such as our local stellar neighbourhood.
Implications
Let us proceed under the assumption that the hypothesis is correct: probabilistically, we are more likely to make first contact with an intelligence that is considerably older than ourselves. It should be noted that this age difference could be quite extreme, perhaps millions or even billions of years in principle. Although age does not necessarily ensure greater technological advancement, that is the obvious expectation in such a scenario. Of course, we may never detect any technosignatures and thus never have the opportunity for first contact, but under the premise that we will one day succeed, it is interesting to ask what the implications of our suggested contact inequality are.
Some have voiced concerns that, in humanity's historical record, encounters between societies of different technological capabilities have generally ended poorly for the less advanced party. Of course, it is unclear whether human behaviour can be extrapolated to another intelligence that is far older than ourselves. Accordingly, we prefer to avoid speculating about the impact of such a contact directly.
However, the contact inequality hypothesis does have significant bearing on our own active searches for technosignatures. Focusing the search on technology similar to our own may be unlikely to lead to success. If an intelligence is much more advanced than us, then planet-integrated, transient signatures associated with disequilibrium (such as climate change and pollution) are less likely to be the means of detection, since they are simply unsustainable for a long-lived entity.
Further, to ensure their own survival, such intelligences may have relocated or expanded their presence off-world, thus favouring technosignatures associated with such activities. Ultimately, this work concerns itself with a formalism for establishing the hypothesis, rather than with its consequences. Nevertheless, we encourage the formalism and prediction established here to be considered in future efforts to seek out technosignatures, including more detailed exploration of the assumptions and analytic forms of civilization longevity and technological age.
Acknowledgements
DK is supported by the Alfred P. Sloan Foundation. CS acknowledges support from the NASA Astrobiology Program through participation in the Nexus for Exoplanet System Science and NASA Grant NNX15AK95G. Special thanks to Tom Widdowson, Mark Sloan, Laura Sanborn, Douglas Daughaday, Andrew Jones, Jason Allen, Marc Lijoi, Elena West & Tristan Zajonc.
Conflict of interest
None.
David Kipping received a degree in physics from the University of Cambridge in 2003 and his Ph.D. in astrophysics in 2011. He is now a faculty member at Columbia University. His main research interests are the detection/characterization of exoplanets/exomoons and applied astrostatistics.
Adam Frank received his B.S. in 1984 from the University of Colorado and his Ph.D. from the University of Washington in 1992. He is now a faculty member of the Physics and Astronomy Department at the University of Rochester. His main research activities are in computational magneto-fluid dynamics and astrobiology.
Caleb Scharf received his B.Sc. in 1989 from Durham University and his Ph.D. from The University of Cambridge in 1994. He is now Director of Astrobiology at Columbia University. His main research activities are in astrobiology and exoplanetary science.