The 2023 Spring Meeting of the Association for Symbolic Logic was held on April 5–6, 2023 at the Westin St. Francis in San Francisco, CA in conjunction with the 2023 Pacific Division Meeting of the American Philosophical Association. The members of the Program Committee were Jeremy Avigad, Paolo Mancosu (chair), and Richard Zach.
The program comprised six invited talks and a contributed papers session. The invited talks from the session “The Philosophy of Mathematical Practice” are listed below.
- Silvia De Toffoli (Scuola Universitaria Superiore IUSS Pavia), How to do things with diagrams: the case of mathematics.
- Patrick J. Ryan (University of California, Berkeley), Explanation, impurity, and infinity: some themes from mathematical practice.
- Dirk Schlimm (McGill University), Towards a philosophy of notation.
The talks from the session “History and Philosophy of Programming Languages” are listed below.
- Tomas Petricek (Charles University), Cultures of programming.
- Brigitte Pientka (McGill University), The unusual effectiveness of modal types in writing programs and proofs.
- Giuseppe Primiero (University of Milan), Logics for fair AI.
Abstracts of the invited talks as well as of the contributed talks by members of the Association for Symbolic Logic follow.
For the Program Committee
Paolo Mancosu
Abstracts of invited plenary lectures
▸ SILVIA DE TOFFOLI, How to do things with diagrams: the case of mathematics.
Linguistics and Philosophy Center, Scuola Universitaria Superiore IUSS Pavia, Pavia, Italy.
E-mail: [email protected].
In my talk, I will focus on diagrams in contemporary mathematics. I will advance two main theses, which I will support with the aid of two case studies: (1) Some types of diagrams form mathematical notational systems and play non-redundant roles in proofs. They not only serve as illustrations but are themselves genuine reasoning tools. That is to say, we can think with diagrams. (2) Some diagrams are essential to the proofs in which they figure. Although we can find diagram-free counterparts of diagrammatic proofs, given plausible criteria of identity for proofs, any such counterpart would be a different proof from the original one. In sum: inter-transformability does not imply inter-translatability. Donald Davidson once said that a picture is “not worth a thousand words, or any other number. Words are the wrong currency to exchange for a picture” [1]. I will show that words are also the wrong currency to exchange for a mathematical diagram. I will argue that in order to appreciate the effectiveness of diagrams, it is not sufficient to consider only their informational content and how such content can be put into words. We must also consider the articulation of such content and why it matters in practice. This can be done, for instance, by evaluating how such articulation facilitates extracting information, carrying out specific inferences, and performing calculations.
[1] D. Davidson, What metaphors mean. Critical Inquiry, vol. 5 (1978), no. 1, pp. 31–47.
[2] S. De Toffoli, What are mathematical diagrams? Synthese, vol. 200 (2022), article 86, pp. 1–29.
[3] S. De Toffoli, Who’s afraid of mathematical diagrams? Philosophers’ Imprint, forthcoming.
[4] M. Giaquinto, Visual Thinking in Mathematics, Oxford University Press, Oxford, 2007.
▸ TOMAS PETRICEK, Cultures of programming.
Department of Distributed and Dependable Systems, Charles University, Prague, Czech Republic.
E-mail: [email protected].
To build software systems, programmers use a plethora of programming concepts and methodologies, but the ways in which different groups use these vary greatly. Since the birth of the discipline in the 1940s, there have been disagreements about the nature of programming, the form of programming concepts, and the applicability of different methodologies. At the same time, different groups have contributed ideas to shared concepts and methodologies, making them richer and more practically useful.
My proposal for making sense of the contentious and intertwined history of programming is to retell it as the history of interactions between five different cultures of programming that have different basic ideas about the nature of programming and about the methods that should be employed in practice. Using three case studies from the history of programming—the history of formal reasoning about programs, the rise of interactive programming tools, and the development of software engineering—I will illustrate how the different cultures of programming clash over basic principles, argue about the nature of programming concepts, but also exchange ideas to advance the state of the art.
In the early days of programming, the friction between cultures could have been just a sign of the immaturity of the field. But as the different cultures continue to maintain their strong identities 70 years later, the idea that a unified perspective on programming will emerge as the field matures is becoming ever more difficult to believe. Interesting innovations appear and revealing discussions occur when multiple cultures meet or clash, marking programming as an inherently pluralistic discipline.
▸ BRIGITTE PIENTKA, The unusual effectiveness of modal types in writing programs and proofs.
School of Computer Science, McGill University, Montréal, QC, Canada.
E-mail: [email protected].
Over the past two decades, modal logic’s notion of necessity has provided deep insights and precise characterizations for a wide range of computational phenomena: from reasoning about different stages of computation and meta-programming to guarded recursion and homotopy type theory. These applications build on the tradition of modal type systems and type theories that started in the 1990s and provided, for the first time, a Curry–Howard interpretation of modal necessity.
In this talk, I first revisit the simply-typed Kripke-style modal lambda-calculus of Pfenning and Davies, which was presented in the late 1990s and which uniformly captures the modal systems K, T, K4, and S4. Then I extend this work to a Kripke-style Modal INtuitionistic Type theory, called MINT. The key insight of this work is the notion of Kripke-style substitution, which unifies two formerly separate concepts: transitions between different Kripke worlds and substitution for individual assumptions. This enables us for the first time to study properties such as normalization uniformly for all subsystems. In addition, it allows us to understand practical applications, such as meta-programming, in the type-theoretic setting.
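To fix intuitions about the Curry–Howard reading of necessity, here is a minimal Haskell sketch of ours, not the Pfenning–Davies calculus or MINT: each modal axiom becomes a combinator on a type constructor Box, modeled trivially as a wrapper; in a meta-programming reading, Box a would be the type of quoted code of type a.

```haskell
-- Minimal sketch (illustrative only): Curry-Howard for modal necessity.
newtype Box a = Box a

-- Axiom K: necessity distributes over implication (apply a boxed function).
axiomK :: Box (a -> b) -> Box a -> Box b
axiomK (Box f) (Box x) = Box (f x)

-- Axiom T: what is necessary is true (evaluate quoted code).
axiomT :: Box a -> a
axiomT (Box x) = x

-- Axiom 4: what is necessary is necessarily necessary (requote quoted code).
axiom4 :: Box a -> Box (Box a)
axiom4 boxed = Box boxed

main :: IO ()
main = print (axiomT (axiomK (Box (+ 1)) (Box (41 :: Int))))
```

Under this reading, S4 (with K, T, and 4 all available) makes Box a comonad, with axiomT as extract and axiom4 as duplicate; the weaker systems correspond to dropping some of these combinators.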
Acknowledgments. This is joint work with Jason Z. Hu.
▸ GIUSEPPE PRIMIERO, Logics for fair AI.
Logic, Uncertainty, Computation and Information Group, Department of Philosophy, University of Milan, Via Festa del Perdono 7, 20122 Milan, Italy.
E-mail: [email protected].
Since the advent of expert systems in the mid-1960s, and the passing of their heyday between the 1980s and the 1990s, the role of logic in programming AI systems has shifted significantly. From being the discipline largely determining the algorithmic behaviour of in-depth knowledge systems on single inputs, logic has been superseded by the evaluation of correlations on huge amounts of data. The impressive efficiency of machine learning methods does not come for free, however: their opacity and risk of bias are well known and widely discussed in the literature. While a variety of tools are being developed to make ML systems more transparent, logical methods are coming back to play an increasingly important role: by their nature, they may help build and verify transparent models of computation. A growing trend aims at building verified AI systems, extending another historically important task for logic [1]. A major aim is therefore to develop formal methods that support the verification of such systems’ trustworthiness and fairness. In this lecture, I will present some recent work in this direction. The typed natural deduction calculus TPTND [2] is designed for reasoning about programs with probabilistic outputs obtained under possibly opaque distributions, whose trustworthiness can be formally verified as an admissible distance from the behavior of a fair and transparent counterpart. The variant calculus TPTND-BL [3] is apt for the formal verification of a maximum bias threshold in automated labeling methods. I will survey their rule systems, the meaning of their trust and bias checking rules, and their meta-theory, and report on current work on their relational semantics and implementation.
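As a toy illustration of the idea of trustworthiness as an admissible distance, the sketch below is written under our own assumptions, not the TPTND rules; the names Dist, tvDistance, and trustworthy are hypothetical. It compares a program’s observed output frequencies with the distribution of a fair, transparent counterpart.

```haskell
import qualified Data.Map as Map

-- Toy sketch (not TPTND): a distribution over named outcomes.
type Dist = Map.Map String Double

-- Total variation distance between two distributions.
tvDistance :: Dist -> Dist -> Double
tvDistance p q = 0.5 * sum [abs (at k p - at k q) | k <- outcomes]
  where
    outcomes = Map.keys (Map.union p q)
    at k m = Map.findWithDefault 0 k m

-- Observed behaviour counts as trustworthy (in this toy sense) when it
-- stays within distance epsilon of the fair, transparent counterpart.
trustworthy :: Double -> Dist -> Dist -> Bool
trustworthy epsilon fair observed = tvDistance fair observed <= epsilon

main :: IO ()
main = print (trustworthy 0.05 fair observed)  -- prints True
  where
    fair     = Map.fromList [("accept", 0.5), ("reject", 0.5)]
    observed = Map.fromList [("accept", 0.52), ("reject", 0.48)]
```

TPTND itself works proof-theoretically, with typing rules rather than a posteriori distance checks; the sketch only conveys the admissible-distance idea.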
[1] S. A. Seshia, D. Sadigh, and S. S. Sastry, Toward verified artificial intelligence. Communications of the Association for Computing Machinery, vol. 65 (2022), no. 7, pp. 46–55.
[2] F. A. D’Asaro and G. Primiero, Probabilistic typed natural deduction for trustworthy computations, 22nd International Workshop on Trust in Agent Societies (TRUST 2021), co-located with the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021) (London, May 3–7, 2021) (Rino Falcone, Dongxia Wang, and Jie Zhang, editors), vol. 3022, CEUR-WS.org, 2021.
[3] G. Primiero and F. A. D’Asaro, Proof-checking bias in labeling methods, 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming (BEWARE-22), co-located with AIxIA 2022 (University of Udine, Udine, Italy, 2022) (Guido Boella, Fabio Aurelio D’Asaro, Abeer Dyoub, and Giuseppe Primiero, editors), vol. 3319, CEUR-WS.org, 2022.
▸ PATRICK J. RYAN, Explanation, impurity, and infinity: some themes from mathematical practice.
Department of Philosophy, University of California, Berkeley, Berkeley, CA, USA.
E-mail: [email protected].
This talk investigates a fascinating collection of finitary statements proved using infinitary techniques and examines their epistemic significance for the philosophy of mathematical practice. These statements may be divided into two main varieties: (i) independence results from mathematical logic, viz., finitary statements whose proofs require the use of infinitary resources; (ii) results from “ordinary” mathematics whose proofs employ but do not require the infinite. I begin by briefly discussing classical examples of (i) (e.g., the Gödel sentence, the Paris–Harrington theorem, and finite Kruskal’s theorem) and note that, though these are undoubtedly of philosophical interest, they do not provide much information about the epistemic gains produced by the use of infinitary resources. Thus, I turn to examples of (ii) (e.g., Szemerédi’s theorem, the Prime Number Theorem) and consider how an analysis of such results can inform central debates in the philosophy of mathematical practice, especially discussions of purity, content, and explanation. In particular, if a finitary theorem $\tau $ has a perfectly cogent, finitary proof, why then provide an infinitary proof of $\tau $ , a proof involving principles of an ostensibly different sort? What is gained? Do such infinitary proofs play an explanatory role? How is this even possible? I conclude by indicating some promising directions for future research.
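To fix ideas about variety (i), recall the Paris–Harrington theorem (our gloss; call a finite set $H$ relatively large if $|H| \geq \min H$):

$$\forall n, k, m\ \exists N \text{ such that every } f : [\{1,\dots,N\}]^n \to \{1,\dots,k\} \text{ admits an } f\text{-homogeneous, relatively large } H \text{ with } |H| \geq m.$$

This is a finitary ($\Pi^0_2$) statement, provable from the infinite Ramsey theorem but unprovable in Peano Arithmetic, which is what makes it a paradigm case of (i).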
▸ DIRK SCHLIMM, Towards a philosophy of notation.
Department of Philosophy, McGill University, Montreal, QC, Canada.
E-mail: [email protected].
The aim of this talk is twofold: first, I will present some philosophical reflections about notations and, second, indicate some roles that notations can play in the discussion of traditional philosophical questions about the nature of mathematics and logic. In the first part, I will give a brief overview of previous work on mathematical notations and introduce some relevant conceptual distinctions and terminology. This includes a discussion of some design criteria for good notations and of how these are relative to specific tasks. In the second part, I will give some examples of how the study of notations can contribute to the discussion of traditional philosophical problems pertaining to mathematical ontology, the access to abstract entities, and mathematical progress. In conclusion, notations are not merely convenient but dispensable representations of theoretical ideas; rather, they are essential ingredients of mathematical practice.
Abstracts of contributed talks
▸ ROHAN FRENCH, The semantic content in the syntactic residue of Meyer’s relevant completeness theorem.
Department of Philosophy, University of California, Davis, Davis, CA, USA.
E-mail: [email protected].
Bob Meyer’s “Proving Semantical Completeness ‘Relevantly’ for R” is commonly cited as an example of a soundness and completeness theorem for a non-classical logic proved using only metatheoretic resources acceptable to the non-classical logician. The result that Meyer actually proves in [2] is unlike any “standard” completeness proof: what Meyer proves is that we can faithfully interpret the propositional relevant logic R into the relevantly formulated first-order theory of ternary relational structures. Meyer characterizes this as giving the “honest syntactic residue” of the completeness of R relative to the ternary relational semantics. What we will investigate here is the extent to which there is sufficient “semantic” content in this residue for us to honestly count it as a completeness result in any relevant sense. To that end we will begin by outlining Meyer’s proof, before looking at how this method fares when applied to intuitionistic logic. It will turn out that the result we end up with looks and sounds like results which are known to be unprovable for intuitionistic propositional logic over second-order Heyting arithmetic (as shown in, for example, [1]). This parallel suggests that the presence of the syntactic residue is no guide at all to whether “‘relevant semantics’ … works just as well when it is ‘expressed relevantly’ as it does when it is ‘expressed classically’”, as Meyer claimed.
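For orientation (our gloss, not part of the abstract): in the ternary relational (Routley–Meyer) semantics, the truth condition for relevant implication at a point $a$ of a structure with ternary relation $R$ is

$$a \Vdash A \rightarrow B \quad \text{iff} \quad \text{for all } b, c: \text{ if } Rabc \text{ and } b \Vdash A \text{, then } c \Vdash B,$$

so the first-order theory into which Meyer interprets R is, in effect, a relevantly axiomatized theory of this relation $R$.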
[1] D. C. McCarty, Undecidability and intuitionistic incompleteness. Journal of Philosophical Logic, vol. 25 (1996), no. 5, pp. 559–565.
[2] R. K. Meyer, Proving semantical completeness “relevantly” for R. Australian National University Research School of Social Sciences Logic Research Paper, no. 23 (1985), pp. 1–23.
▸ RONALD FULLER, Ideologies about logic.
Institute for Logic and the Public Interest, 15600 Redmond Way, Suite 101, Redmond, WA 98052, USA.
E-mail: [email protected].
Ideologies about logic have had a profound influence on civilization and are fundamental to its character. Changes in ideologies about logic have preceded every major civilizational transition since ancient times. The high points: the Hellenistic period and Pax Romana, the Golden Age of Islam, the High Middle Ages, the Enlightenment, the digital revolution; the low points: the Crisis of the Third Century, the Dark Ages, the decline of the Abbasids, the Crisis of the Late Middle Ages, and the authoritarian violence of the 20th century. These transitions cannot be accounted for without examining ideologies about logic. Such ideologies differ in at least two respects: (1) attitudes towards logic education, and (2) observance of the distinction between logic and ontology. These two factors combine to form distinct ideologies: the Hellenistic ideology, the Byzantine ideology, the late-Abbasid/Scholastic ideology, the Ramist/Enlightenment ideology, and the modern ideology. Variations are found during periods when logic was not well understood, e.g., the Sophistic and Carolingian periods.
▸ S. KAAN TABAKCI, Categoricity for LP and K3.
Philosophy Department, University of California, Davis, 1 Shields Avenue, Davis, CA, USA.
E-mail: [email protected].
Semantics determined by the provable inferences of a logic may not coincide with the intended semantics of that logic; e.g., there are non-Boolean models that are consistent with Classical Logic in $\texttt{SET-FMLA}$. This problem is called the Categoricity problem, or Carnap’s problem, in the literature (see [3]). One of the common strategies for solving the Categoricity problem, i.e., ruling out such non-standard models, is to restrict what counts as an admissible model. For instance, if we restrict admissible models to be compositional and non-trivial, or if we restrict admissible models to interpret negation or disjunction as Boolean operators, we can show that all and only the admissible models that satisfy the provable inferences of Classical Logic in $\texttt{SET-FMLA}$ are Boolean models (see [1, 2, 4]). Moreover, as argued in [2], such restrictions should be non-ad hoc and properly motivated.
In this paper, we will discuss the Categoricity problem for the three-valued logics $\mathbb{K}3$ and $\mathbb{LP}$. We will provide two different solutions to the Categoricity problem for $\mathbb{K}3$ and $\mathbb{LP}$ by imposing restrictions on the admissible models, where each of these restrictions is stricter than those for Classical Logic. The first solution restricts admissible models to be compositional and to assign propositional constants their intended values; the second restricts admissible models to interpret negation as the Strong Kleene negation. We will then argue that the first solution can be plausibly motivated by appealing to the learnability argument discussed in [2, 5, 6], and the second by appealing to the necessity of having an operator expressing denial or rejection in order to voice disagreement in a language, as discussed in [7] and [8]. The Strong Kleene truth functions shared by the two logics are sketched below.
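For reference, here is a minimal Haskell sketch of ours, not the paper’s formalism: $\mathbb{K}3$ and $\mathbb{LP}$ share the Strong Kleene truth functions and differ only in which values are designated.

```haskell
-- Minimal sketch: the Strong Kleene truth functions, with F < B < T.
data V = F | B | T deriving (Eq, Ord, Show)  -- B is the middle value

-- Strong Kleene negation swaps T and F and fixes B.
neg :: V -> V
neg T = F
neg B = B
neg F = T

-- With the order F < B < T, conjunction is min and disjunction is max.
conj, disj :: V -> V -> V
conj = min
disj = max

-- K3 designates only T; LP designates both T and B.
designatedK3, designatedLP :: V -> Bool
designatedK3 v = v == T
designatedLP v = v /= F

main :: IO ()
main = print (designatedK3 (disj B (neg B)), designatedLP (disj B (neg B)))
```

Running main prints (False,True): an instance of excluded middle takes the middle value, which is designated in $\mathbb{LP}$ but not in $\mathbb{K}3$, illustrating how the two logics diverge over the same truth functions.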
[1] N. D. Belnap and G. J. Massey, Semantic holism. Studia Logica, vol. 49 (1990), no. 1, pp. 67–82.
[2] D. Bonnay and D. Westerståhl, Compositionality solves Carnap’s problem. Erkenntnis, vol. 81 (2016), no. 4, pp. 721–739.
[3] R. Carnap, Formalization of Logic, Harvard University Press, Cambridge, MA, 1943.
[4] A. Church, Review of Carnap 1943. Philosophical Review, vol. 53 (1944), no. 5, pp. 493–498.
[5] P. Pagin and D. Westerståhl, Compositionality I: definitions and variants. Philosophy Compass, vol. 5 (2010), no. 3, pp. 250–264.
[6] ——, Compositionality II: arguments and problems. Philosophy Compass, vol. 5 (2010), no. 3, pp. 265–282.
[7] H. Price, Why ‘not’? Mind, vol. 99 (1990), no. 394, pp. 221–238.
[8] T. Smiley, Rejection. Analysis, vol. 56 (1996), no. 1, pp. 1–9.
Abstract of talk presented by title
▸ JOACHIM MUELLER-THEYS, On the structure of the modal.
Independent scholar, Heidelberg, Germany.
E-mail: [email protected].
In standard modal languages with $\mathrm{N}$ as the primitive modal operator for necessity and formulæ $\alpha$, $\beta$, $\ldots$, we may define analyticity by $\mathrm{L}\alpha := \mathrm{N}\alpha \vee \mathrm{I}\alpha$, where $\mathrm{I}\alpha := \mathrm{N}\neg\alpha$.
This procedure allows for a natural and elegant definition of contingency as non-analyticity, namely $\mathrm{C}\alpha := \neg\mathrm{L}\alpha$, which coheres with the Aristotelian notion, since $\mathrm{C}\alpha \leftrightarrow \mathrm{P}\alpha \wedge \mathrm{P}\neg\alpha$ is a theorem (under tautologies, modus ponens, and replacement), where now $\mathrm{P}\alpha := \neg\mathrm{I}\alpha = \neg\mathrm{N}\neg\alpha$.
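Indeed, unfolding the definitions makes the equivalence explicit; besides propositional reasoning, only one replacement of $\alpha$ by $\neg\neg\alpha$ under $\mathrm{N}$ is used:

$$\mathrm{C}\alpha := \neg\mathrm{L}\alpha = \neg(\mathrm{N}\alpha \vee \mathrm{I}\alpha) \leftrightarrow \neg\mathrm{I}\alpha \wedge \neg\mathrm{N}\alpha \leftrightarrow \neg\mathrm{N}\neg\alpha \wedge \neg\mathrm{N}\neg\neg\alpha = \mathrm{P}\alpha \wedge \mathrm{P}\neg\alpha.$$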
The formal theory reflects the metalogical situation. For instance, $\phi$ is PL-necessary iff $\models \phi$; $\phi$ is PL-impossible iff $\neg\phi$ is PL-necessary; $\phi$ is PL-analytical iff $\phi$ is PL-necessary or PL-impossible; $\phi$ is PL-contingent iff $\phi$ is not PL-analytical.
Since $\mathrm{L}\alpha \leftrightarrow \mathrm{L}\neg\alpha$ is derivable, so is $\mathrm{C}\alpha \leftrightarrow \mathrm{C}\neg\alpha$. Propositions are either analytical or contingent, formally $\mathrm{L}\alpha \mathbin{\dot\vee} \mathrm{C}\alpha$, which we call the modal dichotomy. Subsequently, propositions are either necessary or contingent or impossible: $\mathrm{N}\alpha \mathbin{\dot\vee} \mathrm{C}\alpha \mathbin{\dot\vee} \mathrm{I}\alpha$, the modal trichotomy.
Astonishingly, no specifically modal axioms have been required so far. The following result uses only $\mathrm{N}\alpha \rightarrow \mathrm{P}\alpha$ (D), which may be derived from the fundamental $\mathrm{I}\alpha \rightarrow \neg\mathrm{N}\alpha$, as may $\mathrm{P}\alpha \vee \mathrm{P}\neg\alpha$ and $\mathrm{N}\alpha \mid \mathrm{I}\alpha$. By the modal trichotomy, $\mathrm{P}\alpha \rightarrow \mathrm{N}\alpha \vee \mathrm{C}\alpha$; the converse derives from (D) and $\mathrm{C}\alpha \rightarrow \mathrm{P}\alpha$. Thus $\mathrm{P}\alpha \leftrightarrow \mathrm{N}\alpha \vee \mathrm{C}\alpha$. In words, propositions are possible if and only if they are either necessary or contingent, and non-necessary iff either contingent or impossible.
Unfortunately, substitutivity obstructs the complete formalisation of the modalities: most strikingly, no $\mathrm{C}\alpha$ is then a theorem. Cf. “On Uniform Substitution”, The Bulletin of Symbolic Logic, vol. 20 (2014), pp. 264–265.
Nevertheless, adequate formalisation can be achieved: in the case of the PL-modalities, by means of the evident axioms $\neg\mathrm{N}\phi$ for $\not\models \phi$. S13 (Kaplan, Cocchiarella) is based on S5, Carroll’s simplification on K, and our S on N (cf. “Metalogical Extensions I: Propositional Logic”, ibidem, p. 238). For instance, $\vdash_{\mathrm{S}} \mathrm{C}\phi$ iff $\phi$ is PL-contingent. Semantically, C (like Carnap) does the job.
Notes. The modal trichotomy and the compositions of possibility and of non-necessity were observed by the author long ago. The clue to the formal theory, the L-C approach, was set off by a talk by A. Schumann in April 2022 at the Logica Universalis Webinar and by subsequent discussions. It is due to Wilfried Buchholz; a former terminological consideration by E. Scheibe stimulated $\mathrm{N}^* = \mathrm{L}$. A detailed script is available.