This chapter retraces the genealogical development of deduction in the Latin and Arabic medieval traditions and in the early modern period, and finally the emergence of mathematical logic in the nineteenth century. It is shown that dialogical conceptions of logic remained pervasive in the Latin medieval tradition, but that they coexisted with other, non-dialogical conceptualizations, in part because of the influence of Arabic logic. In the modern period, however, mentalistic conceptions of logic and deduction became increasingly prominent. The chapter thus explains why we (i.e. twenty-first-century philosophers) have by and large forgotten the dialogical roots of deduction.
This chapter returns to the three main features of deduction defined in Chapter 1 from a cognitive, empirically informed perspective: necessary truth-preservation, perspicuity, and belief-bracketing. It discusses experimental findings that lend support to the dialogical conceptualization of these three features presented in Chapter 4. It also discusses the notion of internalization as formulated by Lev Vygotsky, which allows for an explanation of how deductive practices can also take place in purely mono-agent situations: as an intrapersonal enactment of interpersonal dialogues. The upshot is that framing deductive practices dialogically provides cognitive scaffolding that facilitates the ontogenetic development of deductive reasoning in an individual.
In this chapter, it is argued that what is needed to make progress on the issues described in Chapter 1 is a ‘roots’ approach, i.e. going back to the roots of deduction. The distinction between phylogenetic, ontogenetic, and historical roots is introduced, and it is argued that all three perspectives must be taken into account. The chapter then briefly presents the four main senses in which deduction has dialogical roots, as treated in this book: philosophical roots, historical roots, cognitive roots, and roots in mathematical practices.
This chapter presents an overview of experimental work on deductive reasoning, which has shown that human reasoners do not seem to reason spontaneously according to the deduction canons. However, there are also experimental results suggesting that, when tackling deductive tasks in groups, performance comes much closer to the canons. These findings offer a partial vindication of the dialogical conception of deduction insofar as they show that, when given the opportunity to engage in dialogues with others, humans become better deductive reasoners.
This chapter presents a dialogical rationale based on the Prover–Skeptic model for the three main features of deduction identified in Chapter 1: necessary truth-preservation, perspicuity, and belief-bracketing. Moreover, it addresses four important ongoing debates in the philosophy of logic: the normativity of logic, logical pluralism, logical paradoxes, and logical consequence. It is shown that the Prover–Skeptic model provides a promising vantage point to address the questions raised in these debates.
This comprehensive account of the concept and practices of deduction is the first to bring together perspectives from philosophy, history, psychology and cognitive science, and mathematical practice. Catarina Dutilh Novaes draws on all of these perspectives to argue for an overarching conceptualization of deduction as a dialogical practice: deduction has dialogical roots, and these dialogical roots are still largely present both in theories and in practices of deduction. Dutilh Novaes' account also highlights the deeply human and in fact social nature of deduction, as embedded in actual human practices; as such, it presents a highly innovative account of deduction. The book will be of interest to a wide range of readers, from advanced students to senior scholars, and from philosophers to mathematicians and cognitive scientists.
This chapter is a tutorial about some of the key issues in semantics of the first-order aspects of probabilistic programming languages for statistical modelling – languages such as Church, Anglican, Venture and WebPPL. We argue that s-finite measures and s-finite kernels provide a good semantic basis.
Reasoning about probabilistic programs is hard because it compounds the difficulty of classic program analysis with sometimes subtle questions of probability theory. Having precise mathematical models, or semantics, describing their behaviour is therefore particularly important. In this chapter, we review two probabilistic semantics. First, an operational semantics which models the local, step-by-step, behaviour of programs, then a denotational semantics describing global behaviour as an operator transforming probability distributions over memory states.
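To give a flavour of the contrast between the two semantics, here is a minimal Python sketch (an illustrative toy, not taken from the chapter): a two-coin-flip program is interpreted first operationally, as a step-by-step sampling execution, and then denotationally, as an operator pushing a distribution over memory states forward through the program.

```python
import random
from collections import Counter

# Toy probabilistic program over one variable x (hypothetical example):
#   x := 0;  if flip(1/2) then x := x + 1;  if flip(1/2) then x := x + 1

def run_once(rng):
    """Operational view: one local, step-by-step execution, sampling each flip."""
    x = 0
    if rng.random() < 0.5:
        x += 1
    if rng.random() < 0.5:
        x += 1
    return x

def push_forward(dist):
    """Denotational view: the program as a global transformer of
    probability distributions over memory states {value of x: probability}."""
    out = Counter()
    for x, p in dist.items():
        for d1 in (0, 1):          # outcomes of the first flip
            for d2 in (0, 1):      # outcomes of the second flip
                out[x + d1 + d2] += p * 0.25
    return dict(out)

# Starting from the point mass at x = 0, the denotational semantics yields
# the exact output distribution, which individual operational runs sample from.
exact = push_forward({0: 1.0})
```

The operational interpreter produces one concrete value per run; the denotational transformer computes the full distribution {0: 0.25, 1: 0.5, 2: 0.25} in a single pass, which is exactly the distribution that repeated operational runs approximate.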
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning and big data analytics) can often naturally tolerate soft errors. In this chapter, we demonstrate how a programming language, Rely, enables developers to reason about and verify the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely leverages a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering.
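As a rough illustration of quantitative reliability (a simplified Python sketch, not Rely's actual analysis), the reliability of a straight-line computation can be lower-bounded by multiplying the per-operation reliabilities of the unreliable instructions it executes, assuming each operation fails independently:

```python
def straight_line_reliability(op_reliabilities):
    """Lower bound on the probability that a straight-line program is correct:
    it is correct only if every unreliable operation it executes is correct,
    so (assuming independent failures) reliability >= product of the
    per-operation reliabilities."""
    r = 1.0
    for p in op_reliabilities:
        r *= p
    return r

# Hypothetical example: 1000 unreliable arithmetic operations, each of which
# produces the correct result with probability 0.99999.
r = straight_line_reliability([0.99999] * 1000)
```

For these (assumed) numbers the bound evaluates to roughly 0.990, i.e. the computation is correct at least 99% of the time; a developer can then check such a derived bound against a stated reliability requirement.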
Church's λ-calculus has become a universally accepted model of pure functional programming, and its properties have been thoroughly scrutinised by the research community in the last 90 years. Many variations of it have been introduced for the sake of capturing programming with various forms of effects, thus going beyond pure functional programming. This chapter is meant to be a gentle introduction to a family of such calculi, namely probabilistic λ-calculi, in their two main variations: randomised λ-calculi and Bayesian λ-calculi. We focus our attention on the operational semantics, expressive power and termination properties of randomised λ-calculi, only giving some hints and references about denotational models and Bayesian λ-calculi.
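To make the randomised variation concrete, here is a minimal sampling-based interpreter in Python (an illustrative sketch with an assumed term encoding, not the chapter's formal calculus) for a call-by-value λ-calculus extended with a fair probabilistic choice operator M ⊕ N:

```python
import random

# Assumed term encoding (hypothetical, capture-avoidance not handled):
#   ('var', x) | ('lam', x, body) | ('app', f, a)
#   | ('choice', m, n)  -- the fair probabilistic choice M ⊕ N
#   | ('const', c)      -- base constants, for observable results

def subst(t, x, v):
    """Substitute value v for variable x in term t."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'const':
        return t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    # 'app' and 'choice' both substitute into their two subterms:
    return (tag, subst(t[1], x, v), subst(t[2], x, v))

def eval_cbv(t, rng):
    """Operational semantics: each 'choice' is resolved by a fair coin flip,
    so evaluation defines a distribution over normal forms."""
    tag = t[0]
    if tag in ('var', 'lam', 'const'):
        return t
    if tag == 'choice':
        return eval_cbv(t[1] if rng.random() < 0.5 else t[2], rng)
    f = eval_cbv(t[1], rng)          # call-by-value: function first,
    a = eval_cbv(t[2], rng)          # then argument,
    return eval_cbv(subst(f[2], f[1], a), rng)  # then β-reduction

# (λx. x) (0 ⊕ 1) evaluates to 0 or 1, each with probability 1/2.
ident = ('lam', 'x', ('var', 'x'))
term = ('app', ident, ('choice', ('const', 0), ('const', 1)))
```

Unlike the pure λ-calculus, a term here no longer has a unique result: repeated evaluations of the same term sample from a distribution over normal forms, which is the basic semantic shift that randomised λ-calculi formalise.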
The quantitative analysis of probabilistic programs answers queries involving the expected values of program variables and expressions involving them, as well as bounds on the probabilities of assertions. In this chapter, we will present the use of concentration of measure inequalities to reason about such bounds. First, we will briefly present and motivate standard concentration of measure inequalities. Next, we survey approaches to reason about quantitative properties using concentration of measure inequalities, illustrating these on numerous motivating examples. Finally, we discuss currently open challenges in this area for future work.
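As a small self-contained illustration of the standard inequalities involved (a Python sketch with made-up parameters, not an example from the chapter), Hoeffding's inequality bounds the probability that the empirical mean of n i.i.d. samples in [0, 1] exceeds its expectation by at least t, and the bound can be compared against a simulated tail probability:

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding's inequality: for n i.i.d. samples in [0, 1],
    P(sample mean - expected mean >= t) <= exp(-2 * n * t**2)."""
    return math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials, rng):
    """Monte Carlo estimate of P(mean of n fair coin flips >= 0.5 + t)."""
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        if mean >= 0.5 + t:
            hits += 1
    return hits / trials

# Assumed parameters for illustration: n = 100 fair flips, deviation t = 0.1.
rng = random.Random(0)
n, t = 100, 0.1
bound = hoeffding_bound(n, t)            # exp(-2) ≈ 0.135
estimate = empirical_tail(n, t, 2000, rng)
```

The simulated tail probability stays below the Hoeffding bound, as it must; in program analysis, such inequalities yield sound (if sometimes loose) bounds on assertion probabilities without enumerating the program's full output distribution.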