The fine-tuning argument purports to show that particular aspects of fundamental physics provide evidence for the existence of God. The argument is legitimate, yet doubts about its legitimacy abound; many misgivings about the fine-tuning argument rest on misunderstandings. In this paper we go over several major misapprehensions (from both popular and philosophical sources) and explain why they do not undermine the basic cogency of the fine-tuning argument.
1 See John Hawthorne and Yoaav Isaacs, ‘Fine-Tuning Fine-Tuning’, in Benton, Hawthorne, and Rabinowitz (eds), Knowledge, Belief, and God: New Insights in Religious Epistemology (Oxford University Press, forthcoming).
2 A dimensionless quantity is not measured in units and thus is not unit-relative. Height, by contrast, is measured in units (inches, centimeters, and so on) and thus is unit-relative. There is, therefore, nothing particularly deep about someone being exactly one unit of height tall according to some popular system of measurement. Literally everyone is exactly one unit of height tall according to some system of measurement, and there's nothing deep about the difference between popular systems of measurement and unpopular systems of measurement. But the ratio of the mass of the proton to the mass of the electron is not measured in units and thus is not unit-relative. It would be deep if that ratio were exactly one; that would mean that protons and electrons had the same mass.
3 Here ‘fundamental’ means something like ‘non-derived’. What is derived from what is obviously theory-dependent, and thus need not reflect metaphysical priority. For example, the ratio of the mass of the proton to the mass of the electron is not metaphysically deeper than the ratio of the mass of the electron to the mass of the proton. It was a matter of convention which ratio made it into the standard model.
4 For more about such physics, see Weinberg, Steven, ‘The cosmological constant problem’, Reviews of Modern Physics 61(1) (1989), 1–23.
5 As is customary, we individuate parameter-values somewhat coarsely to avoid triviality. Since parameter-values can vary continuously, nearly any maximally specific parameter-value must have prior probability 0. Of course, we don't actually know the maximally specific numerical value of any parameter, and it's easy enough to divvy possible parameter-values into equivalence classes according to their observational consequences. There's a clear sense in which someone who is a little over 7′ 9″ has a stranger height than someone who is a little under 5′ 11″, even if all maximally specific heights have probability 0. The probability of the former height plus-or-minus a nanometer and the probability of the latter height plus-or-minus a nanometer are each non-zero, making comparison unproblematic.
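To see why such interval comparisons are unproblematic (our notation, not the text's): if heights are governed by a continuous probability density f, then each maximally specific height has probability 0, but nanometer-wide intervals do not, and comparing them reduces to comparing densities:
\[
P(h - \varepsilon \le H \le h + \varepsilon) \approx 2\varepsilon f(h) > 0,
\qquad
\frac{P(H \in [h_1 - \varepsilon,\, h_1 + \varepsilon])}{P(H \in [h_2 - \varepsilon,\, h_2 + \varepsilon])} \to \frac{f(h_1)}{f(h_2)} \quad \text{as } \varepsilon \to 0.
\]
The height a little over 7′ 9″ is stranger than the one a little under 5′ 11″ just in case the density of heights there is lower.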
6 Our point is that this sort of divine artifice of the laws of physics was antecedently implausible, and not that the mere existence of God was antecedently implausible.
7 It seems plausible that the mere existence of life is a bit more probable given theism than given atheism, and thus that the mere existence of life constitutes a bit of evidence for theism. But if the fine-tuning argument is legitimate (and it is) further facts about physics constitute substantial further evidence for theism.
8 We've even said some of it. For more on the epistemological details see John Hawthorne and Yoaav Isaacs, ‘Fine-Tuning Fine-Tuning’, in Benton, Hawthorne, and Rabinowitz (eds), Knowledge, Belief, and God: New Insights in Religious Epistemology (Oxford University Press, forthcoming), and for more on the details of the underlying physics see John Hawthorne, Yoaav Isaacs, and Aron Wall, The Foundations of Fine-Tuning (manuscript in progress).
9 Philipse, Herman, God in the Age of Science?: A Critique of Religious Reason (Oxford University Press, 2012).
10 Ibid.
11 We assume that these historical arguments do not provide compensatory evidence against theism itself.
12 Readers are perhaps more familiar with pessimistic induction as an argument against scientific realism. We note that our contentions about fine-tuning do not presuppose realism about contemporary scientific theories, but only confidence about some of the standard model's empirical predictions. For more about the pessimistic induction against scientific realism see Lange, Marc, ‘Baseball, pessimistic inductions and the turnover fallacy’, Analysis 62(4) (2002), 281–285.
13 It's far from clear how one determines ‘the’ company that an argument keeps. It's very natural to think of any argument as keeping many different companies, of potentially different calibers. Consider Gödel's ontological argument––what company does it keep? One natural answer is that it keeps the company of other ontological arguments, and that is poor company indeed. But another answer is that it keeps the company of logical arguments made by Kurt Gödel, and that's some of the finest company any argument could have. For more about Gödel's ontological argument see Sobel, Jordan Howard, Logic and Theism: Arguments For and Against Beliefs in God (Cambridge University Press, 2004).
14 See Goodman, Nelson, Fact, Fiction, and Forecast (Harvard University Press, 1955).
15 See Lewis, David, ‘New work for a theory of universals’, Australasian Journal of Philosophy 61 (December 1983), 343–377.
16 It is guaranteed that there are myriad bad arguments available for any position at all, so the possibility of concocting bad arguments for some particular position has no evidential force.
17 If you protest that it doesn't make sense to have something written on the interior of an atom, we would remind you that this thought experiment involves physics working rather differently than we anticipated.
18 Richard Dawkins, an exceedingly staunch atheist, conceded that ‘[a]lthough atheism may have been logically possible before Darwin, Darwin made it possible to be an intellectually fulfilled atheist’ (Dawkins, Richard, The Blind Watchmaker (Norton & Company, 1986)). And the argument from atomic inscriptions would be rather more forceful than the argument from biological design ever was.
19 Bonhoeffer, Dietrich, Letters and Papers from Prison (Simon & Schuster, 1997).
20 We note that it is very odd to be certain that, even given the supposition that God exists, there are naturalistic explanations for everything. Why be so confident that God wouldn't do anything that's best explained by God having done it?
21 The theory that carbon-dioxide-releasing bacteria caused the holes in Swiss cheese was traditional, having first been laid out by William Clark in 1917. This theory was undermined by the discovery that, over the preceding 15 years, fewer and fewer holes had been appearing in Swiss cheese. There was thus a period of time in which we did not know what caused the holes in Swiss cheese, and moreover knew that we did not know what caused the holes in Swiss cheese. As it turns out, the real cause is microscopic particles of hay (which became less common as cheesemaking conditions became more sanitary).
22 Thanks to Aron Wall for this example.
23 See Davey, Kevin, review of Debating Design: From Darwin to DNA, edited by William A. Dembski and Michael Ruse, Philosophical Books 47(4) (2006), 383–386.
24 Hans Halvorson, ‘Fine-Tuning Does Not Imply a Fine-Tuner’, (Retrieved from: http://cosmos.nautil.us/short/119/fine-tuning-does-not-imply-a-fine-tuner, 2017).
25 Whether one is more inclined to believe in a God who was interested in creating life or a God who was interested in creating rocks will depend on one's prior probabilities in those hypotheses. For more see the sections ‘The God of Tungsten’ and ‘Back to Tungsten’ in John Hawthorne and Yoaav Isaacs, ‘Fine-Tuning Fine-Tuning’, in Benton, Hawthorne, and Rabinowitz (eds), Knowledge, Belief, and God: New Insights in Religious Epistemology (Oxford University Press, forthcoming).
26 It needs to presuppose a little, but not that much.
27 The existence of life-unfriendly laws is evidence against the existence of God if and only if the existence of life-unfriendly laws is less likely given the existence of God than it is given the non-existence of God.
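As a sketch (our notation): writing G for the existence of God and E for the existence of life-unfriendly laws, this is just the standard Bayesian criterion, assuming 0 < P(G) < 1:
\[
P(E \mid G) < P(E \mid \lnot G) \iff P(G \mid E) < P(G).
\]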
28
Margaret, are you grieving
Over Goldengrove unleaving?
Leaves, like the things of man, you
With your fresh thoughts care for, can you?
Ah! as the heart grows older
It will come to such sights colder
By and by, nor spare a sigh
Though worlds of wanwood leafmeal lie;
And yet you will weep and know why.
Now no matter, child, the name:
Sorrow's springs are the same.
Nor mouth had, no nor mind, expressed
What heart heard of, ghost guessed:
It is the blight man was born for,
It is Margaret you mourn for.
29 Note that one need not have any substantial theory of explanation in order to make this inference work. One need not claim that everything must have an explanation, nor that ‘That's just the way it is’ could not count as an explanation. There might be no need to explain why the leaves spell out Spring and Fall, and ‘That's just the way it is’ might be an entirely acceptable candidate explanation for why the leaves spell out Spring and Fall. Regardless, that pattern of leaves is massively more likely to have been written by your friend than to have come about by the random blowing of the wind. And for our purposes the probabilities are what matter.
30 If one knew a conditional such as ‘If there were a God then there wouldn't be life-unfriendly laws.’ then the existence of life-unfriendly laws would entail the non-existence of God. Such claims are obviously tendentious, however, and are (quite properly) not generally part of skeptical responses to the fine-tuning argument.
31 See Bostrom, Nick, Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge, 2002).
32 A simple case: You can learn that something exists. You could not have learned that nothing exists. Yet the fact that something exists is obviously devastating evidence against the hypothesis that nothing exists.
33 According to Smolin, a confirmable theory is one that makes definite predictions that could (given favorable experimental results) redound to the theory's credit, a falsifiable theory is one that makes definite predictions that could (given unfavorable experimental results) entail the theory's falsity, and a unique theory is one such that no other simpler or more plausible theory makes the same predictions.
34 Lee Smolin, ‘Scientific Approaches to the Fine-Tuning Problem’, (Retrieved from: http://www.pbs.org/wgbh/nova/blogs/physics/2012/12/scientific-approaches-to-the-fine-tuning-problem/, 2012).
35 If Smolin said that theories had only to be disconfirmable, then this objection would not apply. But in that case he could not thereby claim that theism is an illegitimate hypothesis, as theism is disconfirmable.
36 At least if one wants to evaluate the overall evidential impact of the fine-tuning argument.
37 Of course, we know more than merely that the actual parameter-values are life permitting. We have a decently good sense of what those values are––scientists can tell you the values with a modest margin for error. And the comparative likelihoods afforded by theism and atheism to such particular regions of parameter-space need not correspond to the comparative likelihoods afforded by theism and atheism to the entirety of life-permitting parameter-space. But there seems to be nothing particularly significant about the region of parameter-space in which we find ourselves beyond its life-permittingness (and rock-permittingness, and so on), so our additional evidence about what the parameter-values are shouldn't make much of a difference, if any difference at all.
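A schematic way to put the point (our notation): let T be theism, A atheism, L the proposition that the parameter-values are life-permitting, and E our more specific evidence about the values (so E entails L). Then
\[
\frac{P(E \mid T)}{P(E \mid A)} = \frac{P(E \mid T, L)\,P(L \mid T)}{P(E \mid A, L)\,P(L \mid A)},
\]
so if neither hypothesis makes our particular region of life-permitting parameter-space any more expected than the other does, that is, if P(E | T, L) ≈ P(E | A, L), then the likelihood ratio for our specific evidence approximately equals the likelihood ratio for life-permittingness itself.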
38 Again, rock-permitting would work just as well.
39 For example, the ratio of the mass of the proton to the mass of the electron should be a positive number (at least assuming that there are no negative masses). The cosmological constant specifies the energy density of the vacuum, and could sensibly be any real number.
40 Note that we do not endorse such indifference-driven reasoning even over finite ranges.
41 Colyvan, Garfield, and Priest flirt with the notion that probability 0 events are automatically impossible. This is emphatically not so.
42 McGrew, McGrew, and Vestrup go on to note that such a probability assignment would not be countably additive, which probabilities are standardly required to be. Countable additivity requires that the probabilities assigned to countably many non-overlapping regions of parameter-space sum to the probability assigned to the union of those regions of parameter-space. The entirety of parameter-space can be divided into countably many equally sized regions (the region between 0 and 1, the region between 1 and 2, the region between 2 and 3, and so on). But there is no way for countably infinitely many regions to each receive the same probability such that the sum of those probabilities is 1. If each region receives probability 0 the sum will be 0, and if each region receives probability greater than 0 the sum will be infinite. McGrew, McGrew, and Vestrup consider this violation of countable additivity to be fatal. We are less convinced that the violation of countable additivity is fatal; there are some reasons to prefer mere finite additivity, which would not impose unsatisfiable restrictions. But our reasons for being sympathetic to possible violations of countable additivity have nothing to do with this case––we emphatically reject the indifference-driven reasoning which posed problems for countable additivity in the first place.
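In symbols (our notation): suppose each unit interval [n, n+1) of parameter-space receives the same prior probability c. Countable additivity would require
\[
\sum_{n \in \mathbb{Z}} c = P(\text{all of parameter-space}) = 1,
\]
but the left-hand side is 0 if c = 0 and infinite if c > 0, so no value of c will do.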
43 McGrew, McGrew, and Vestrup write that ‘[t]he difficulty lies in the fact that there is no way to establish ratios of regions in a non-normalizable space. As a result, there is no meaningful way in such a space to represent the claim that one sort of universe is more probable than another. Put in non-mathematical language, treating all types of universes even-handedly does not provide a probabilistic representation of our ignorance regarding the ways that possible universes vary among themselves––whatever that means.’ McGrew, Timothy, McGrew, Lydia, and Vestrup, Eric, ‘Probabilities and the fine-tuning argument: A sceptical view’, Mind 110(440) (2001), 1027–1038. Colyvan, Garfield, and Priest write that ‘[t]he fine tuning argument, on its most plausible interpretation, hence not only shows that life-permitting universes are improbable, but, arguably, that they are impossible!’ Colyvan, Mark, Garfield, Jay L., and Priest, Graham, ‘Problems With the Argument From Fine Tuning’, Synthese 145(3) (2005), 325–338. Worry about uniform probabilities is the central focus of both papers. McGrew, McGrew, and Vestrup and Colyvan, Garfield, and Priest only briefly consider the possibility of non-uniform probabilities for the values of fundamental constants, and quickly dismiss that possibility as a non-starter. Against this, it bears emphasis that such non-uniform probabilities are needed for a great deal of physics, not just for the physics of fine-tuning. For a defense of such non-uniform probabilities see John Hawthorne, Yoaav Isaacs, and Aron Wall, The Foundations of Fine-Tuning (manuscript in progress).
44 In particular, a Wilsonian dimensional analysis of effective field theory. We kind of know what that is. This criticism of the fine-tuning argument, by contrast, is not based on any understanding of it whatsoever. The rough idea of the physics is this: the values of the constants do not exist in complete isolation. The constants make contributions to the values of the other constants; they nudge each other around, so to speak. So the cosmological constant has received numerous contributions from the other constants, and physicists can say how big those contributions are––of order 10^120 times bigger than the actual value. So we've got many numbers of magnitude 10^120, some positive and some negative; they get added together, and the sum is a small, positive value. Physicists did not expect that. They expected that the numbers of magnitude 10^120 would sum to something of magnitude 10^120. Trying to figure out why that sum worked out as conveniently as it did is a major project in physics. But the crucial point here is that this claim of fine-tuning isn't based on any sort of judgment that all parameter-values are equally likely. It is instead based on an expectation––an expectation rooted in a serious understanding of physics––that the cosmological constant would have a hugely different value than it does.
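To put rough numbers on the improbability (a schematic estimate, not the physicists' actual calculation): if the observed value Λ_obs is the sum of contributions c_i of magnitude around 10^120 times Λ_obs, and that sum could as easily have landed anywhere in a range of comparable width, then
\[
P\!\left( \Bigl| \sum_i c_i \Bigr| \lesssim \Lambda_{\text{obs}} \right) \sim \frac{\Lambda_{\text{obs}}}{10^{120}\,\Lambda_{\text{obs}}} = 10^{-120},
\]
a cancellation accurate to roughly 120 decimal places.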
A toy model is helpful. Suppose that Bill and Melinda Gates decide to start living particularly lavishly, spending billions of dollars every year, buying islands, commissioning movies, and generally living it up. At the end of the year, their accountant finds something remarkable––their expenditures were almost perfectly cancelled out by the appreciation of Microsoft stock. Over the course of the year, their net worth increased by just under a dollar. It's very improbable to have the pluses and minuses cancel out so closely. Given the magnitude of the fine-tuning of the cosmological constant, it'd be more like the Gates' expenditures being almost perfectly cancelled out by stock gains 10 years in a row.
Now if there's literally no alternative account for why that happened other than that something weird happened, then there's no alternative account for why that happened other than that something weird happened. But there are always alternatives. If someone wearing a robe had told Bill and Melinda that he was casting a spell on them to make that happen, we'd be much more inclined to believe that he was a wizard than that it was just a coincidence, and we'd be much more inclined to believe that he was running some sort of scam than that he was a wizard. But the point here is just that such near-perfect cancellations of increases and decreases of net worth are shockingly improbable unless there's something funny going on. That's exactly the kind of reasoning that the fine-tuning argument relies on, and it is beyond reproach.
45 It is worth thinking about how to reason in contexts in which probabilities behave the way these critics have laid out––not because it is relevant to fine-tuning, but just because it is interesting. Let us therefore allow violations of countable additivity. Suppose that we know that a natural number will be generated by one of two processes, and that each process is equally likely to do the generating. The first process is not uniform and does obey countable additivity, while the second process is uniform and does not obey countable additivity. The first process will generate ‘1’ with probability ½, ‘2’ with probability ¼, ‘3’ with probability ⅛, and so on. (The probability that any number ‘n’ will be generated is 1/2^n.) The second process will generate any number ‘n’ with probability 0. In effect, the second process randomly selects a natural number. Now suppose that you learn what number was generated. What should you think about whether that number was generated by the first process or the second process? There's a good argument that, no matter what number was generated, you should be certain that it was generated by the first process and not the second process. After all, no matter what number was generated, the first process had non-zero probability of generating it while the second process had 0 probability of generating it. This does seem quite odd, however; it seems wrong for the hypothesis that the first process was selected to be destined for confirmation and for the hypothesis that the second process was selected to be destined for disconfirmation. The unconditional probability falls outside the range of the conditional probabilities of each element of the outcome space. When this happens, mathematicians say that the distribution is non-conglomerable. Now it's well-known that violations of countable additivity can easily produce non-conglomerability, so it's not surprising that this happened in the case above. And it is not clear to us how one should reason in the case above. We are open to the possibility that one should simply follow the conditional probabilities where they lead and accept their non-conglomerability.
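The oddity can be displayed in two lines (our notation): for any generated number n, Bayes' theorem gives
\[
P(\text{first} \mid n) = \frac{\tfrac{1}{2} \cdot \tfrac{1}{2^n}}{\tfrac{1}{2} \cdot \tfrac{1}{2^n} + \tfrac{1}{2} \cdot 0} = 1,
\]
so the probability of the first process conditional on each possible outcome is 1, while its unconditional probability is ½. The unconditional value falls outside the range of the conditional values, which is exactly the non-conglomerability described above.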
46 A different––but related––worry about measure-0 probabilities is worth thinking about. Since there are continuum-many possible parameter-values, the actual values are likely to have probability 0 given either theism or atheism. But probabilities conditional on measure-0 events are not generally well-defined. Colyvan, Garfield, and Priest (with something else in mind) write, ‘Accepting that the universe as we find it has probability zero means that the conditional probability of any hypothesis relative to the fine-tuning data is undefined. This makes the next move in the argument from fine tuning––that the hypothesis of an intelligent designer is more likely than not, given the fine-tuning data––untenable.’ Colyvan, Mark, Garfield, Jay L., and Priest, Graham, ‘Problems With the Argument From Fine Tuning’, Synthese 145(3) (2005), 325–338. Happily, there are two good responses to this worry. First, it doesn't matter if the probabilities given the actual parameter-values are undefined so long as the probabilities given our evidence about the parameter-values are not undefined. And since our measuring instruments are only finitely sensitive, our evidence is coarse-grained enough to unproblematically receive non-zero prior probability. Second, although probabilities conditional on measure-0 events are not generally well-defined, they are sometimes well-defined––and this is one of the cases in which they are. Probabilities conditional on measure-0 events are well-defined when they can be taken as the limits of continuous random variables. If you can think of a measure-0 event as the limit of other events with non-zero measure, then everything is OK. For example, suppose you have two dart players throwing darts at a continuously dense dartboard. One player, the amateur, will hit a random point on the board. The other player, the expert, will hit a random point in the bullseye. There's a sense in which the expert is no more likely to hit any spot in the bullseye than the amateur is––they each hit each spot in the bullseye with probability 0. But if you think about shrinking regions around some spot in the bullseye, the expert is more likely to get in that region than the amateur is. Because parameter-values vary continuously, even if we did know the actual parameter-values we could use this approach to keep our conditional probabilities well-defined. For more on this approach to measure-0 conditional probabilities, see Urbach, Peter and Howson, Colin, Scientific Reasoning: The Bayesian Approach (Open Court, 1993).
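To make the dartboard case concrete (our numbers, assuming uniform distributions): let the board have radius R and the bullseye radius r < R, and let B_ε be a small disk of radius ε around a spot in the bullseye. Then for every ε,
\[
\frac{P(\text{expert hits } B_\varepsilon)}{P(\text{amateur hits } B_\varepsilon)}
= \frac{\pi \varepsilon^2 / \pi r^2}{\pi \varepsilon^2 / \pi R^2}
= \frac{R^2}{r^2} > 1,
\]
so the likelihood ratio is constant as ε shrinks, and the limiting measure-0 comparison is well-defined: hitting that spot favors the expert.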
47 Fishing cases are often given as examples about fine-tuning, so we'll follow suit (though the analogy is quite rough). Suppose that––barring divine intervention––one had a uniform probability distribution over possible fish-widths for the finite number of fish around. According to such a probability distribution the probability that any fish will be less than a mile wide is 0. (This is an outlandish probability distribution.) Suppose one had fishing equipment that can catch any fish that is less than a mile wide but no fish that is more than a mile wide. Successfully catching a fish would then be overwhelming evidence that God graciously arranged a suitable fish; that's dramatically more likely than getting measure-0 lucky. It is, of course, unreasonable to think that catching a fish with such excellent fishing gear is overwhelming proof of God's existence, but that's only because it's unreasonable to have a uniform distribution over possible fish-widths conditional on God's non-existence.
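The Bayesian arithmetic behind this verdict, taken at face value (our notation): with P(catch | ¬God) = 0 under the outlandish uniform distribution and P(catch | God) > 0,
\[
P(\text{God} \mid \text{catch}) = \frac{P(\text{catch} \mid \text{God})\,P(\text{God})}{P(\text{catch} \mid \text{God})\,P(\text{God}) + 0 \cdot P(\lnot\text{God})} = 1,
\]
which is why the conclusion is only as reasonable as the outlandish distribution that generates it.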
48 And we don't hold out much hope for non-prominent objections.
49 For helpful feedback, we're grateful to Cameron Domenico Kirk-Giannini, Neil Manson, the audience at Heythrop College, and the online audience at academia.edu. This publication was made possible by the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.