I shall argue in this chapter that the discussion of nonreductive materialism has been conducted under the shadow of an ambiguity in the sense of reductive. One sense is specific to the philosophy of mind, and here the reductive tradition is marked by the attempt to give an account of the mind in behavioral or functional terms, without remainder. The other sense derives from the philosophy of science, and it concerns the possibility of giving some kind of systematic account of “higher” sciences in terms of “lower” ones, and, ultimately, in terms of physics. I shall argue that failure to distinguish these senses in Davidson's “Mental Events” has led to serious confusions in the discussion of “nonreductive materialism” and in the attendant notion of ‘supervenience.’ Davidson has clarified the confusion in “Mental Causes”, but in a way that makes his original contribution much less interesting than it had seemed to be. In the course of the discussion, I hope to clarify the various senses in which theories, properties, and predicates can be ‘reduced’ or ‘emergent.’
Story 1. In order to vindicate a materialist theory of the mind it is necessary to show how something that is a purely physical object can satisfy psychological predicates. Those features of the mind which seem to be, prima facie, incompatible with this physicalism – such as consciousness and the intentionality of thought – must, therefore, be explained in a way that purges them of their apparently Cartesian elements, which would be incompatible with materialism.
The word physicalism, when introduced into philosophical conversation by Neurath and Carnap, seemed theirs to define, much as a century earlier the word positivism had been Comte's to define. Not everyone is so lucky as to introduce a label by which they will later become known, and such was the lot of Locke who has been tossed with Hobbes and Hume into the catchall bin of Empiricism. Whether original with Locke or presaged in Leviathan, the idea that Ideas were all the mind could contemplate seems distinctive enough to deserve its own ‘ism.’ In any event, the marriage of Locke's internal Empiricism with Comte's cold Positivism produced the uneasy union that the Vienna Circle styled ‘physicalism,’ but that the world has since come to call by turns ‘Logical Positivism’ and ‘Logical Empiricism.’ That a philosophical position could be defined by conjoining two seemingly mismatched themes would itself be of at least historical interest. But it gains a more topical interest if we could show how antiphysicalist theses more recently bandied about were born of the same unhappy union. To that end we will begin in the middle.
Consistent with their antimetaphysical approach to philosophy, Neurath and Carnap cast their original definition of physicalism in linguistic terms. Roughly, physicalism was the name they gave to the thesis that every meaningful sentence, whether true or false, could be translated into physical language. Although both thought the thesis obviously true, neither thought it knowable a priori.
It is a commonplace that much of contemporary metaphysics is deeply bound up with the metaphysical modalities: metaphysical possibility and necessity. To take one central instance, the mind-body problem, in its most familiar contemporary form, appears as a problem about property identities, and it is hard to imagine discussing any issue about property identity without calling on the idea of metaphysical possibility. If we want to ask whether the property of being conscious, or being in pain, or having this sort of pain S, is identical with some physical or functional property P – say, the property of having such-and-such neurons firing in such-and-such a way – we typically begin by asking whether I could have had these neurons firing in this particular way, without experiencing S. And the could here is the could of metaphysical possibility.
As we all know, these questions about what could be the case – metaphysically could – are far from easy to answer. There are, it seems to me, two features of the notion of metaphysical possibility that combine to make them hard to settle, either negatively or positively. What makes them hard to settle negatively is that because metaphysical possibility is supposed to be a kind of possibility distinct from physical possibility, styles of argument that work very well to show that various describable situations are not physically possible do not carry over to show that the same situations are not metaphysically possible. Most of us would agree that the standard correlations between brain and pain already give us excellent reasons for believing that it is not physically possible for there to be a perfect neurological duplicate of me who feels no pain at the dentist's.
This chapter is an attempt to understand the content of and motivation for a popular form of physicalism, which I call nonreductive physicalism. Nonreductive physicalism claims that although the mind is physical (in some sense), mental properties are nonetheless not identical to (or reducible to) physical properties. This suggests that mental properties are, in earlier terminology, emergent properties of physical entities. Yet many nonreductive physicalists have denied this. In what follows, I examine their denial, and I argue that on a plausible understanding of what emergent means, the denial is indefensible: nonreductive physicalism is committed to mental properties being emergent properties. It follows that the problems for emergentism – especially the problems of mental causation – are also problems for nonreductive physicalism, and they are problems for the same reason.
The structure of the chapter is as follows. In the first section, I outline what I take to be essential to nonreductive physicalism. In the second section I attempt to clarify what is meant by emergent, and I argue that the notion of emergence is best understood in terms of the idea of emergent properties having causal powers that are independent of the causal powers of the objects from which they emerge. This idea, ‘downward causation,’ is examined in the third section. In the final section I draw the lessons of this discussion for the contemporary debate on the mind-body problem.
There is a big difference between building a prototype system and a piece of production software. In his classic book The Mythical Man-Month, Frederick Brooks estimates that it takes nine times the effort to create a complete, reliable system as opposed to an initial program that starts to do the job.
With Meena's graduation, I needed a fresh student to turn our prototype into a production system. I got to know Roger Mailler when he took CSE 214, undergraduate Data Structures, with me in the fall of 1997. Roger was the bored-looking student in the front row – too bright and knowledgeable to get very much from the course, but too disciplined to cut class or hide in the back. Roger finished first out of the 126 students in the course (by a substantial margin) and was untainted by a programming assignment cheating scandal that claimed many of his classmates.
Roger is an interesting fellow whose career path to Stony Brook followed a very non-standard course. His first attempt at college (at the Rochester Institute of Technology) was, to be charitable, unsuccessful. In one year at RIT he amassed a grade point average (GPA) of 0.96, where 4.0 is an A and 1.0 is a D. Any mammal with a pulse ought to be able to do better. Indeed, this is the lowest GPA I've ever seen sustained over a full academic year because students capable of such performance usually manage to get themselves expelled before the year is out.
Classical logic—including first-order logic, which we studied in Chapter 2—is concerned with deductive inference. If the premises are true, the conclusions drawn using classical logic are always also true. Although this kind of reasoning is not inductive, in the sense that any conclusion we can draw from a set of premises is already “buried” in the premises themselves, it is nonetheless fundamental to many kinds of reasoning tasks. Beyond the study of formal systems such as mathematics, problems in other domains, such as planning and scheduling, can in many cases also be constrained to be mainly deductive.
Because of this pervasiveness, many logics for uncertain inference incorporate classical logic at the core. Rather than replacing classical logic, we extend it in various ways to handle reasoning with uncertainty. In this chapter, we will study a number of these formalisms, grouped under the banner nonmonotonic reasoning. Monotonicity, a key property of classical logic, is given up, so that an addition to the premises may invalidate some previous conclusions. This models our experience: the world and our knowledge of it are not static; often we need to retract some previously drawn conclusion on learning new information.
Logic and (Non)monotonicity
One of the main characteristics of classical logic is that it is monotonic, that is, adding more formulas to the set of premises does not invalidate the proofs of the formulas derivable from the original premises alone. In other words, a formula that can be derived from the original premises remains derivable in the expanded premise set.
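Stated schematically (this is the standard formulation from the logic literature, not notation particular to this book): if Γ ⊢ φ, then Γ ∪ Δ ⊢ φ for any further premises Δ. A nonmonotonic consequence relation gives up exactly this guarantee. In the stock example, from “Tweety is a bird” we tentatively conclude “Tweety flies,” but adding the premise “Tweety is a penguin” forces us to retract that conclusion.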
Jai alai is a sport of Basque origin in which opposing players or teams alternate hurling a ball against the wall and catching it until one of them finally misses and loses the point. The throwing and catching are done with an enlarged basket or cesta. The ball or pelota is made of goatskin and hard rubber, and the wall is of granite or concrete – which is a combination that leads to fast and exciting action. Jai alai is a popular spectator sport in Europe and the Americas. In the United States, it is most associated with the states of Florida, Connecticut, and Rhode Island, which permit parimutuel wagering on the sport.
In this chapter, we will delve deeper into the history and culture of jai alai. From the purely crass standpoint of winning money through gambling, much of this material is not strictly necessary, but a little history and culture never hurt anybody. Be my guest if you want to skip ahead to the more mercenary or technical parts of the book, but don't neglect to review the basic types of bets in jai alai and the Spectacular Seven scoring system. Understanding the implications of the scoring system is perhaps the single most important factor in successful jai alai wagering.
Much of this background material has been lifted from the fronton Websites described later in this chapter and earlier books on jai alai.
Economists are very concerned with the concept of market efficiency. Markets are efficient whenever prices reflect underlying values. Market efficiency implies that everyone has the same information about what is available and processes it correctly.
The question of whether the jai alai bettors' market is efficient goes straight to the heart of whether there is any hope to make money betting on it. All of the information that we use to predict the outcome of jai alai matches is available to the general public. Because we are betting against the public, we can only win if we can interpret this data more successfully than the rest of the market. We can win money if and only if the market is inefficient.
Analyzing market efficiency requires us to build a model of how the general public bets. Once we have an accurate betting model, we can compare it with the results of our Monte Carlo simulation to look for inefficiencies. Any bet that the public rates higher than our simulation is one to stay away from, whereas any bet that the simulation rates higher than the public represents a market inefficiency potentially worth exploiting.
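To make the comparison concrete, here is a minimal sketch (not the system's actual code; the takeout rate and every figure below are assumptions for illustration) of how a simulated probability might be compared with the probability implied by the public's parimutuel odds:

    # Compare a simulated win probability with the probability implied by the
    # public's parimutuel odds.  TAKEOUT and the example numbers are assumed.
    TAKEOUT = 0.18  # assumed fraction of each betting pool kept by the fronton

    def implied_probability(payoff_per_dollar):
        # The public's implicit estimate: in a parimutuel pool,
        # payoff = (1 - takeout) / (fraction of the pool bet on this outcome).
        return (1.0 - TAKEOUT) / payoff_per_dollar

    def expected_return(sim_probability, payoff_per_dollar):
        # Expected return per $1 wagered, using our simulated probability
        # (payoff_per_dollar includes the returned stake).
        return sim_probability * payoff_per_dollar - 1.0

    # Hypothetical example: the public's odds pay $5.20 per $1, implying
    # roughly a 15.8% chance, while our simulation rates the bet at 22%.
    print(implied_probability(5.20))       # ~0.158, the public's estimate
    print(expected_return(0.22, 5.20))     # ~0.144, a positive-expectation bet

Only when the simulated probability exceeds the public's implied probability by enough to overcome the takeout does a bet show a positive expected return.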
The issue of market efficiency rears its head most dramatically in the stock market. Billions of dollars are traded daily in the major markets by tens of thousands of people watching minute-by-minute stock ticker reports. Quantitative market analysts (the so-called quants) believe that there are indeed inefficiencies in the stock market that show up as statistical patterns.
This is a book about predicting the future. It describes my attempt to master a small enough corner of the universe to glimpse the events of tomorrow, today. The degree to which one can do this in my tiny toy domain tells us something about our potential to foresee larger and more interesting futures.
Considered less prosaically, this is the story of my 25-year obsession with predicting the results of jai alai matches in order to bet on them successfully. As obsessions go, it probably does not rank with yearning for the love of one you will never have or questing for the freedom of an oppressed and downtrodden people. But it is my obsession – one that has led me down paths that were unimaginable at the beginning of the journey.
This book marks the successful completion of my long quest and gives me a chance to share what I have learned and experienced. I think the attentive reader will come to understand the worlds of mathematics, computers, gambling, and sports quite differently after reading this book.
My interest in jai alai began during my parents' annual escape from the cold of a New Jersey winter to the promised land of Florida. They stuffed the kids into a Ford station wagon and drove a thousand miles in 2 days each way. Florida held many attractions for a kid: the sun and the beach, Disney World, Grampa, Aunt Fanny, and Uncle Sam. But the biggest draw came to be the one night each trip when we went to a fronton, or jai alai stadium, and watched them play.
Mom was the biggest jai alai fan in the family and the real motivation behind our excursions. We loaded up the station wagon and drove to the Dania Jai-Alai fronton located midway between Miami and Fort Lauderdale. In the interests of preserving capital for later investment, my father carefully avoided the valet parking in favor of the do-it-yourself lot. We followed a trail of palm trees past the cashiers' windows into the fronton.
Walking into the fronton was an exciting experience. The playing court sat in a vast open space, three stories tall, surrounded by several tiers of stadium seating. To my eyes, at least, this was big-league, big-time sport. Particularly “cool” was the sign saying that no minors would be admitted without a parent. This was a very big deal when I was only 12 years old.
We followed the usher who led us to our seats. The first game had already started.
This book is the outgrowth of an effort to provide a course covering the general topic of uncertain inference. Philosophy students have long lacked an acceptable treatment of inductive logic; in fact, many professional philosophers would deny that there was any such thing and would replace it with a study of probability. Yet, there seems to many to be something more traditional than the shifting sands of subjective probabilities that is worth studying. Students of computer science may encounter a wide variety of ways of treating uncertainty and uncertain inference, ranging from nonmonotonic logic to probability to belief functions to fuzzy logic. All of these approaches are discussed in their own terms, but it is rare for their relations and interconnections to be explored. Cognitive science students learn early that the processes by which people make inferences are not quite like the formal logic processes that they study in philosophy, but they often have little exposure to the variety of ideas developed in philosophy and computer science. Much of the uncertain inference of science is statistical inference, but statistics rarely enters directly into the treatment of uncertainty to which any of these three groups of students are exposed.
At what level should such a course be taught? Because a broad and interdisciplinary understanding of uncertainty seemed to be just as lacking among graduate students as among undergraduates, and because without assuming some formal background all that could be accomplished would be rather superficial, the course was developed for upper-level undergraduates and beginning graduate students in these three disciplines. The original goal was to develop a course that would serve all of these groups.
In Chapter 3, we discussed the axioms of the probability calculus and derived some of its theorems. We never said, however, what “probability” meant. From a formal or mathematical point of view, there was no need to: we could state and prove facts about the relations among probabilities without knowing what a probability is, just as we can state and prove theorems about points and lines without knowing what they are. (As Bertrand Russell said [Russell, 1901, p. 83] “Mathematics may be defined as the subject where we never know what we are talking about, nor whether what we are saying is true.”)
Nevertheless, because our goal is to make use of the notion of probability in understanding uncertain inference and induction, we must be explicit about its interpretation. There are several reasons for this. In the first place, if we are hoping to follow the injunction to believe what is probable, we have to know what is probable. There is no hope of assigning values to probabilities unless we have some idea of what probability means. What determines those values? Second, we need to know what the import of probability is for us. How is it supposed to bear on our epistemic states or our decisions? Third, what is the domain of the probability function? In the last chapter we took the domain to be a field, but that merely assigns structure to the domain: it doesn't tell us what the domain objects are.
There is no generally accepted interpretation of probability.
We have abandoned many of the goals of the early writers on induction. Probability has told us nothing about how to find interesting generalizations and theories, and, although Carnap and others had hoped otherwise, it has told us nothing about how to measure the support for generalizations other than approximate statistical hypotheses. Much of uncertain inference has yet to be characterized in the terms we have used for statistical inference. Let us take a look at where we have arrived so far.
Objectivity
Our overriding concern has been with objectivity. We have looked on logic as a standard of rational argument: Given evidence (premises), the validity (degree of entailment) of a conclusion should be determined on logical grounds alone. Given that the Hawks will win or the Tigers will win, and that the Tigers will not win, it follows that the Hawks will win. Given that 10% of a large sample of trout from Lake Seneca have shown traces of mercury, and that we have no grounds for impugning the fairness of the sample, it follows with a high degree of validity that between 8% and 12% of the trout in the lake contain traces of mercury.
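To make the arithmetic behind the trout example explicit (the sample size here is an assumption for illustration; the text specifies only “a large sample”): if the sample held n = 900 trout with observed proportion p̂ = 0.10, the usual normal-approximation interval p̂ ± 2√(p̂(1 − p̂)/n) works out to 0.10 ± 2√(0.09/900) = 0.10 ± 0.02, that is, between 8% and 12%, at roughly the 95% level.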
The parallel is stretched only at the point where we include among the premises “no grounds for impugning ….” It is this that is unpacked into a claim about our whole body of knowledge, and embodied in the constraints discussed in the last three chapters under the heading of “sharpening.”
The system described in this book retrieves and analyzes data each night and employs a substantial amount of computational sophistication to determine the most profitable bets to make. It isn't something you are going to try at home, kiddies.
However, in this section I'll provide some hints on how you can make your trip to the fronton as profitable as possible. By combining the results of our Monte Carlo simulations and expected payoff model, I've constructed tables giving the expected payoff for each bet, under the assumption that all players are equally skillful. This is very useful information to have if you are not equipped to make your own judgments as to who is the best player, although we also provide tips as to how to assess player skills. By following my advice, you will avoid criminally stupid bets like the 6–8–7 trifecta.
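As a rough sketch of how such a table might be assembled from simulation output (hypothetical code; the outcome counts and payoffs below are made up for illustration and are not the figures printed in this book):

    from collections import Counter

    def expected_payoff_table(outcome_counts, payoffs, n_games):
        # outcome_counts: how often each betting outcome (e.g. the trifecta
        # (1, 3, 5)) occurred across n_games simulated equal-skill games.
        # payoffs: assumed average dollars returned per $1 bet on each outcome.
        table = {}
        for outcome, count in outcome_counts.items():
            probability = count / n_games
            table[outcome] = probability * payoffs.get(outcome, 0.0)
        # Sort the most attractive bets first; hopeless combinations sink.
        return sorted(table.items(), key=lambda item: item[1], reverse=True)

    # Hypothetical usage with made-up counts and payoffs:
    counts = Counter({(1, 3, 5): 420, (6, 8, 7): 11})
    payoffs = {(1, 3, 5): 95.0, (6, 8, 7): 1500.0}
    print(expected_payoff_table(counts, payoffs, n_games=100000))

Under the equal-skill assumption, combinations like the 6–8–7 trifecta land near the bottom of such a table, which is exactly why they are worth avoiding.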
But first a word of caution is in order. There are three primary types of gamblers:
Those who gamble to make money – If you are in this category, you are likely a sick individual and need help. My recommendation instead would be that you take your money and invest in a good mutual fund. In particular, the Vanguard Primecap fund has done right well for me over the past few years.
One theme running through this book is how hard we had to work in order to make even a small profit. As the saying goes, “gambling is a hard way to make easy money.”