
2023 MEETING OF THE AUSTRALASIAN ASSOCIATION FOR LOGIC, University of Queensland, Brisbane, Australia, 9–10 November 2023

Published online by Cambridge University Press:  02 April 2024


Type: Meeting Report

Copyright: © The Association for Symbolic Logic, 2024. Published by Cambridge University Press on behalf of The Association for Symbolic Logic.

The Annual Meeting of the Australasian Association for Logic was held in Brisbane, Australia on 9–10 November 2023, hosted by the University of Queensland. The event was organized by the current presidents of the AAL, Guillermo Badia (Queensland) and Shawn Standefer (National Taiwan University). There were three invited addresses, given by Raheleh Jalali (Czech Academy of Sciences) on An introduction to proof complexity, Carles Noguera (Siena) on Asymptotic truth-value laws in many-valued logics and Petr Cintula (Czech Academy of Sciences) on Logics with Infinitary Rules. The 2024 Meeting of the AAL will be held in November 2024 at the University of Sydney, and will be organized by Guillermo Badia, Sasha Rubin and Shawn Standefer.

Abstracts of the invited talks and the contributed talks by members of the Association for Symbolic Logic follow.

For the Organizing Committee

Guillermo Badia and Shawn Standefer

Abstracts of invited plenary lectures

▸PETR CINTULA, Logics with infinitary rules: A tutorial.

Institute of Computer Science of the Czech Academy of Sciences, Pod Vodárenskou Věží 271/2, 182 00 Prague, Czech Republic.

E-mail: .

Finitary logics, i.e., Tarskian consequence relations between sets of formulas and formulas that admit a Hilbert-style axiomatization using only rules with finitely many premises, are the bread and butter of almost every (non-classical) logician (see e.g. [2]). However, there are many interesting logics which are not finitary: e.g. certain dynamic logics, logics of common knowledge, or prominent many-valued logics such as standard Łukasiewicz logic [5,6,8,9].

We start this tutorial by introducing several interesting extensions of the class of finitary logics, present their characterizations, mutual inclusions and separations, and show that several important theorems of abstract algebraic logic can be generalized from finitary logics to some of these classes [2,3,7].

Then we focus on the Lindenbaum lemma and a closely related Pair Extension Property (a ‘proper’ replacement of the Cut rule in symmetric consequence relations [4]). These crucial results (used e.g. in the completeness proofs) are usually proved for finitary logics only but are known to hold for some infinitary logics as well [5,6,8,9]. We explore the mutual relationship of these two notions and present several easily checkable sufficient and/or necessary conditions for their validity outside the class of finitary logics [1,2].

We conclude by showcasing certain applications of the presented results, remark on their relation to other notions (e.g. the Rasiowa–Sikorski lemma or expansion of a first-order theory into a Henkin one), and explore several avenues of possible generalization.

[1] M. Bílková, P. Cintula, T. Lávička, Lindenbaum and pair extension lemma in infinitary logics, WoLLIC 2018 (Moss, de Queiroz, Martinez, editors), Springer, 2018, pp. 134–144.

[2] P. Cintula, C. Noguera, Logic and Implication: An Introduction to the General Algebraic Study of Non-classical Logics , Springer, 2021.

[3] P. Cintula, C. Noguera, The proof by cases property and its variants in structural consequence relations, Studia Logica, vol. 101 (2013), pp. 713–747.

[4] J.M. Dunn, G.M. Hardegree, Algebraic Methods in Philosophical Logic, Clarendon Press, 2001.

[5] R. Goldblatt, Mathematics of Modality , CSLI Publications Stanford University, 1993.

[6] L.S. Hay, Axiomatization of the infinite-valued predicate calculus, Journal of Symbolic Logic, vol. 28 (1963), pp. 77–86.

[7] T. Lávička, C. Noguera, A new hierarchy of infinitary logics in abstract algebraic logic, Studia Logica, vol. 105 (2017), pp. 521–551.

[8] K. Segerberg, A model existence theorem in infinitary propositional modal logic, Journal of Philosophical Logic , vol. 23 (1994), pp. 337–367.

[9] G. Sundholm, A completeness proof for an infinitary tense-logic, Theoria , vol. 43 (1977), pp. 47–51.

▸RAHELEH JALALI, An introduction to proof complexity.

Czech Academy of Sciences.

E-mail: .

“A student of mine asked me today to give him a reason for a fact which I did not know was a fact – and do not yet. He says that if a figure be anyhow divided and the compartments differently coloured so that figures with any portion of common boundary line are differently coloured – four colours may be wanted, but not more … . If you retort with some very simple case which makes me out a stupid animal, I think I must do as the Sphynx did …” [3]

This is what the famed mathematician De Morgan wrote to his friend Hamilton, the distinguished mathematician and physicist, in 1852. This letter marked the birth of the famous “four color theorem”. Over the years, several fallacious proofs were given until finally, in 1977, Appel and Haken presented a correct one. The proof, however, required analyzing many (to be precise, 1936) discrete cases. Faced with such tedious case-checking, the question arises whether there exists a shorter, more brilliant proof. Or, we may more generally wonder:

How hard is it to prove some given theorems? What are their shortest proofs? Are there such hard theorems that even their shortest proofs go beyond our physical capacities?

Even at the propositional level, these problems are meaningful: Let $\varphi $ be a classical propositional tautology. By the so-called brute-force method, we know that $\varphi $ has a proof of size roughly $2^n$, where $n$ is the number of atomic variables in $\varphi $. The question is whether there exists a smarter strategy to verify the validity of $\varphi $, one that does not involve checking all possible valuations.
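
To make the brute-force bound concrete: a truth-table “proof” of $\varphi $ simply lists all valuations of its $n$ atomic variables and checks each line, so (as a rough count, not a claim about any particular proof system) its size is about
$$2^{n}\cdot O(|\varphi |) = 2^{O(n)} \text{ symbols},$$
which for $n = 100$ already means more than $2^{100}\approx 10^{30}$ lines.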

The problems we mentioned so far focus on theorems rather than the theories in which they are proved. Looking in this direction, one can ask whether there is a theory so strong that no hard theorems exist in it. And if no such theory exists, we may wonder whether there is a significant decrease in the lengths of proofs when we move to more powerful theories. If so, we can continue to advance towards stronger and stronger theories and ask: Is there a “strongest” theory, in the sense that it provides the best proofs?

These are some examples of the problems considered in proof complexity, a field whose main aim is investigating the complexity (for instance, length, i.e., number of symbols) of proofs.

In this tutorial, we begin by introducing proof systems such as Frege and extended Frege systems and resolution. We introduce Cook’s program and consider open problems and how they are related to the complexity classes P, NP, and coNP, and in general to the field of computational complexity. We will then talk about interpolation, especially feasible interpolation, as one of the main methods to prove lower bounds. Finally, we will talk about the complexity of proofs in systems for non-classical logics. For more, see [1,2].

[1] J. Krajíček, Proof complexity , Vol. 170, Cambridge University Press, 2019.

[2] P. Pudlák, Logical foundations of mathematics and computational complexity: A gentle introduction , Springer, Berlin, 2013.

[3] R. Wilson, Four Colors Suffice: How the Map Problem Was Solved, Revised Color Edition, Vol. 30, Princeton University Press, 2013.

▸CARLES NOGUERA, Asymptotic truth-value laws in many-valued logics.

Department of Information Engineering and Mathematics, University of Siena, Siena, Italy.

E-mail: .

URL: https://sites.google.com/view/carlesnoguera/.

In this tutorial we concentrate on studying which truth-values are most likely to be taken on finite models by arbitrary sentences of a many-valued predicate logic. We show generalizations of Fagin’s classical zero-one law for any logic with values in a finite lattice-ordered algebra, and for some infinitely valued logics, including Łukasiewicz logic. The finitely valued case is reduced to the classical one through a uniform translation and Oberschelp’s generalization of Fagin’s result. Moreover, we show that the complexity of determining the almost sure value of a given sentence is PSPACE-complete, and for some logics we may describe completely the set of truth-values that can be taken by sentences almost surely. The presented new results have been obtained in a joint work with Guillermo Badia and Xavier Caicedo [1].
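
For orientation, the classical statement being generalized is Fagin’s zero-one law: writing (in one standard formulation) $\mu_n(\varphi )$ for the proportion of structures with universe $\{1,\dots ,n\}$ (over a fixed purely relational vocabulary) satisfying a first-order sentence $\varphi $,
$$\lim_{n\to \infty }\mu_n(\varphi )\in \{0,1\}.$$
In the many-valued setting the two classical truth-values are replaced by the elements of a lattice-ordered algebra, and the question becomes which of those values a sentence takes almost surely.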

[1] G. Badia, X. Caicedo and C. Noguera, Asymptotic truth-value laws in many-valued logics, arXiv:2306.13904, 2023.

Abstracts of contributed talks

▸KATALIN BIMBÓ, Fission in positive relevance logic.

Department of Philosophy, University of Alberta, 2–40 Assiniboia Hall, Edmonton, AB T6G 2E7, Canada.

E-mail: .

URL: www.ualberta.ca/~bimbo.

Gaggle theory, which was introduced by Dunn in [3], gives a semantics for a logic through a relational representation of its Lindenbaum algebra. The relations that represent connectives may interact with each other (or even coincide) depending on the properties the connectives have. For instance, in the Meyer–Routley semantics for relevance logics, the implication and fusion connectives are modeled using the same ternary relation. Certain combinations of gaggles and some variations on the key ideas are given, for example, in [1, 2, 4, 5].

Fission ($+$) is the intensional analog of disjunction, just as fusion ($\circ $) is the intensional analog of conjunction in $\mathbf R$, the logic of relevant implication. Fission is definable in $\mathbf R$ as $(A+B)\leftrightarrow {\mathop {\sim }} ({\mathop {\sim }} A\circ {\mathop {\sim }} B)$; hence, it is often omitted. In this talk, I introduce fission into a positive relevance logic, define a semantics with a ternary relation, and prove that the logic is sound and complete for the semantics.

[1] Katalin Bimbó, Some relevance logics from the point of view of relational semantics, Logic Journal of the IGPL: Israeli Workshop on Non-classical Logics and their Applications (IsraLog 2014) (O. Arieli and A. Zamansky, editors), vol. 24 (2016), no. 3, pp. 268–287.

[2] Katalin Bimbó and J. Michael Dunn, Generalized Galois Logics: Relational Semantics of Nonclassical Logical Calculi , CSLI Lecture Notes, vol. 188, CSLI Publications, 2008.

[3] J. Michael Dunn, Gaggle theory: An abstraction of Galois connections and residuation, with applications to negation, implication, and various logical operators, Logics in AI: European Workshop JELIA’90 (J. van Eijck, editor), Lecture Notes in Computer Science, vol. 478, Springer, 1991, pp. 31–51.

[4] J. Michael Dunn, Positive modal logic, Studia Logica , vol. 55 (1995), no. 2, pp. 301–317.

[5] J. Michael Dunn, A representation of relation algebras using Routley–Meyer frames, Logic, Meaning and Computation. Essays in Memory of Alonzo Church (C. A. Anderson and M. Zelëny, editors), Synthese Library: Studies in Epistemology, Logic, Methodology and Philosophy of Science, vol. 305, Kluwer, Dordrecht, 2001, pp. 77–108.

▸BEN BLUMSON, Dialetheism and distributed sorites.

Philosophy, National University of Singapore, Kent Ridge, Singapore.

E-mail: .

Intuitively, plucking one hair from a hirsute man will not make him bald. Giving one dollar to a poor person will not make them rich. And adding one grain of sand to a pile will not make it a heap. In general, according to the principle of tolerance, vague predicates are insensitive to tiny differences. Growing just a millimetre taller, for example, may make someone who is strictly less than two metres tall at least two metres tall, but it won’t make someone tall who isn’t tall already. Adding a dollar to someone’s income may take them from one tax threshold to the next, but it won’t make a poor person rich.

One response to the sorites paradox, known as noniteration, is to accept arguments which use tolerance only once or twice, but to reject arguments which use it many times over. So, for example, from the fact that $0$ grains of sand is not a heap, one can infer that $1$ grain of sand is not a heap. And from the fact that $1$ grain of sand is not a heap, one can infer that $2$ grains of sand is not a heap. In general, from the fact that $n$ grains of sand is not a heap, one can infer that $n + 1$ grains of sand is not a heap. Nevertheless, according to noniteration, one cannot string all these arguments together to show, for example, that $100,000$ grains of sand is not a heap.

But the distributed sorites paradox is a puzzle designed to undermine noniteration as a response to the original sorites paradox. Here is how Zach Barnett [1, 1074] presents the puzzle:

There is a $100,000$ step staircase. The bottom step, Step $1$ , has one grain of sand. Step $2$ has two grains. In general, Step n contains n grains (arranged in a heap where possible). The steps toward the bottom obviously do not contain heaps. The steps toward the top obviously do. With respect to some intermediate steps, it’s hard to say. Now, we tinker with the set up: Remove one grain of sand from each step (except the bottom one), and then add all $99,999$ of the grains taken to Step $1$ , and arrange them in a heap. So what is the problem? By tinkering, we have created a new heap without destroying any. But this is odd, for the new configuration does not seem relevantly different from the original.
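
A quick bookkeeping check brings out the oddity. The redistribution leaves the total number of grains unchanged:
$$\underbrace{1+2+\cdots +100{,}000}_{\text{before}} \;=\; \underbrace{(1+99{,}999)+1+2+\cdots +99{,}999}_{\text{after}} \;=\; \frac{100{,}000\cdot 100{,}001}{2},$$
yet each step other than the first has lost only a single grain (so, by tolerance, no heap has been destroyed), while Step 1 has clearly gained one.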

Noniteration can avoid the classical sorites paradox by avoiding the application of tolerance to the same object many times. But it cannot avoid the distributed sorites paradox, according to Barnett, since the distributed sorites applies tolerance only to different objects.

How should proponents of noniteration respond to the distributed sorites paradox? The logic $st$ , for “strict-tolerant”, works by incorporating an apparently orthogonal idea known as dialetheism, according to which some sentences and their negations are both true [2]. How does combining dialetheism with noniteration resolve the distributed sorites? The answer is surprisingly obvious. The distributed sorites involves valid reasoning from true premises to an inconsistent conclusion. Classically in such a situation one must either reject the truth of one of the premises or dispute the validity of the reasoning. But it is open to proponents of dialetheism to do neither, and to simply accept the inconsistent conclusion instead.

[1] Z. Barnett, Tolerance and the distributed sorites, Synthese , vol. 196 (2019), no. 3, pp. 1071–1077.

[2] P. Cobreros, P. Egré, D. Ripley and R. van Rooij, Tolerant, Classical, Strict, Journal of Philosophical Logic , vol. 41 (2012), no. 2, pp. 347–385.

▸FERNANDO CANO-JORGE AND LUIS ESTRADA-GONZÁLEZ, Logics for progressive reasoning.

Philosophy Programme, University of Otago; Department of Philosophy, University of Canterbury; and Facultad de Filosofía, Universidad Panamericana, campus México.

Instituto de Investigaciones Filosóficas, Universidad Nacional Autónoma de México.

E-mail: .

E-mail: .

Consider reasoning in the sense of the act, and the result, of validly putting forward something as a reason for something else. Let us represent “The $A_{i}$ ’s are (or give) a reason for B” as “ $(A_{1}\otimes \ldots \otimes A_{n})\succ B$ ”, where $\otimes $ is a suitable reasons-binder. This formal representation of reasoning is quite suggestive and invites an almost direct logical treatment of reasoning as an implicative-like phenomenon. (And of binding reasons as a conjunctive-like phenomenon.) Now, let us go back to the roots of logic in Aristotle and retrieve his notion of deductive reasoning or syllogismos.

Now reasoning is an argument in which, certain things being laid down, something other than these necessarily comes about through them (Topica, 100a25).

He specified that, for his notion of reasoning, the conclusion of an argument not only must follow necessarily from its premises but must also be different from any of them. It has been argued, for example in [2] and [4], that Aristotle’s qualification of difference stems from a concern to rule out question-begging arguments as proper forms of reasoning. Indeed, arguing “A because A” or “A is a reason that justifies A” is petitio principii. Some, like Duncombe, look at the applications of Aristotelian reasoning and further argue that there are also contextual considerations behind his qualification [1].

Thus, Aristotle’s definition of reasoning seems to turn away from the implicative form of the Reflexivity principle, $A\rightarrow A$ , and moreover demand the stronger principle of Irreflexivity, $\sim \! (A\rightarrow A)$ , since one cannot deductively obtain A from laying down A itself. Dropping Reflexivity already makes us move away from classical logic into some non-classical alternative, like FDE or K3; but introducing Irreflexivity is an infrequent move which leads us even farther away from classical logic into the world of contra-classical logics, like relevant syllogistic S [4], Abelian logic A [3] or Mortensen’s M3V [5]. Thus, to even meet Aristotle’s first demand of what logic is to study, one must work with very non-standard formal systems.

Reflexivity is one of many lattice principles involved in classical and sub-classical logics. Indeed, if one looks at the algebraic structure of a set $\{1,0\}$ of two values which may be interpreted as truth and falsity, one sees that this set can be partially-ordered, i.e. equipped with a reflexive, transitive and anti-symmetric relation $\leq $ resembling logical consequence, and that a meet $\wedge $ and a join $\vee $ are definable. Thus, a lot of what turns out to be logically true in this simple structure is what can be called lattice principles, like $0\leq 1$ , $a\leq a$ , $a\wedge b\leq a$ , $a\leq a\vee b$ , $a\wedge (b\vee c)\leq (a\wedge b)\vee c$ , etc. including $a=\sim \!\sim \!a$ , $\sim \!(a\wedge b)\leq \sim \!a\vee \sim \!b$ , $\sim \!(a\vee b)\leq \sim \!a\wedge \sim \!b$ , etc. if the lattice is complemented, and also the implicative forms of these if the lattice is residuated.

Notably, Aristotelian logic seldom uses lattice principles and it even rejects some of them, like Idempotence of conjunction, $A\rightarrow (A\wedge A)$, Simplification, $(A\wedge B)\rightarrow A$, and Reflexivity, on the grounds that they introduce redundancies [6, §5.1], [2, pp. 152, 157]. In the case of Simplification, for instance, the conclusion that A is true is a partial repetition of what has been laid down in the premise, which is that A is true and B is true. One may even consider this argument to commit petitio principii, since the truth of the conclusion is already granted as part of a premise. And similarly for Idempotence of conjunction.

Lattice principles, in general, are simply not forms used by traditional logic; most of them are not to be found in the works of Aristotle, Boethius, Abelard or Peter of Spain, to name a few of the people responsible for the study and development of logic for more than a dozen centuries. The lattice-theoretical approach to logic is very different from what these philosophers defended as principles of correct reasoning. And though that algebraic approach is appropriate for studying classical logic and some of its sub-systems, it is not adequate for centuries’ worth of logical theory.

Indeed, the logic of the relation “A is a reason for B” is far away from being classical. It resembles more closely the logic of the relation of “A justifies B”, much treasured in epistemology, since nothing is justified by itself. Clearly, for Aristotle, A cannot be a reason for A nor for $\sim \! A$ , whence $\sim \!(A\succ A)$ and $\sim \!(A\succ \sim \!A)$ must be valid in a logic of reasons; and it is clear that these two are invalid in classical logic. Moreover, if A is a reason for B then one could expect that A is not a reason for $\sim \!B$ , whence Boethius’ Thesis $(A\succ B)\succ \sim \!(A\succ \sim \! B)$ may also be valid, bringing this logic into the connexive realm.

In this paper we follow [2] in calling progressive those logics that replace Reflexivity with Irreflexivity, so as to avoid circular reasoning, but we also generalize further and define a class of Aristotelian logics for zero-order and first-order languages. We also provide a series of matrices for the conditional of some zero-order Aristotelian logics, termed PR1, PR2, PR3 and PR4, all of which are expansions of FDE. It is shown that these conditionals satisfy the requirements of non-circular reasoning and, moreover, have the features expected by Sylvan and Goddard of avoiding paradoxes of implication and validating diverse connexive theses. Further comments on the applications of these logics are provided. We also prove soundness and completeness of these systems with respect to a tableau system.

[1] Matthew Duncombe, Irreflexivity and Aristotle’s syllogismos, The Philosophical Quarterly , vol. 64 (2014), no. 256, pp. 434–452.

[2] Len Goddard and Richard Sylvan, On reasoning: (ponible) reason for (and also against), and relevance, Reason, Cause and Relevant Containment with an application to Frame Problems (Richard Sylvan and Len Goddard and Newton da Costa, editors), Australian National University, Canberra, 1989.

[3] Robert K. Meyer and John K. Slaney, Abelian logic (from A to Z), Paraconsistent Logic: Essays on the Inconsistent (Richard Routley and Graham Priest and Jean Norman, editors), Philosophia, Munich, 1989, pp. 245–288.

[4] Robert K. Meyer and Errol Martin, S (for Syllogism) Revisited: “The Revolution Devours its Children”, The Australasian Journal of Logic , vol. 16 (2019), no. 3, pp. 49–67.

[5] Chris Mortensen, Aristotle’s Thesis in Consistent and Inconsistent Logics, Studia Logica , vol. 43 (1984), no. 1-2, pp.107–116.

[6] Richard Sylvan, A preliminary Western history of sociative logics, Bystanders’ Guide to Sociative Logics , Australian National University, Canberra, 1989.

▸JAMES CARR, Homomorphism preservation in the finite for many-valued logics.

School of Historical and Philosophical Inquiry, University of Queensland, St Lucia QLD, Australia.

E-mail: .

A canonical result in model theory is the homomorphism preservation theorem (h.p.t.), which states that a first-order formula is preserved under homomorphisms on all structures if and only if it is equivalent to an existential-positive formula; it is standardly proved via a compactness argument. Rossman [4] established that the h.p.t. remains valid when restricted to finite structures. This is a significant result in the field of finite model theory, as it stands in contrast to other preservation results proved via compactness, which fail when restricted to finite structures [3]. It is also of interest to the field of constraint satisfaction due to the equivalence of existential-positive formulas and unions of conjunctive queries. At the same time, Dellunde and Vidal [2] established that a version of the h.p.t. holds for a collection of many-valued predicate logics, namely those whose structures (finite and infinite) are defined over a fixed finite MTL-chain.

In this paper we unite these two strands. We show how one can extend Rossman’s proof of a finite h.p.t. to a very wide collection of many-valued predicate logics. In doing so we establish a finite variant of Dellunde and Vidal’s result which not only applies to structures defined over more general algebras than MTL-chains but also allows the algebra over which our structures are defined to vary within a given class. We do so by identifying the fairly minimal critical features of classical logic that enable Rossman’s proof from a model-theoretic point of view, and demonstrating how any non-classical logic satisfying them inherits an appropriate finite h.p.t. One requirement of this is a generalisation of back-and-forth equivalence for many-valued logics first presented in [1]. The investigation provides a starting point for a wider development of finite model theory for non-classical logics, and just as the classical finite h.p.t. has implications for constraint satisfaction, the many-valued finite h.p.t. has implications for valued constraint satisfaction problems.

[1] M.P. Dellunde, A. Garcia-Cerdaña and C. Noguera, Back-and-forth systems for fuzzy first order models, Fuzzy Sets and Systems , vol. 345 (2018), pp. 83–98.

[2] M.P. Dellunde and A. Vidal, Truth-preservation under fuzzy pp-formulas, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems , vol. 27 (2019), pp. 89–105.

[3] H-D. Ebbinghaus and J. Flum, Finite Model Theory , Springer, 1995.

[4] B. Rossman, Homomorphism preservation theorems, Journal of the Association for Computing Machinery , vol. 55 (2008), no. 3, pp. 1–53.

▸ALBA CUENCA, New prospects for Thomason conditionals.

School of Philosophical, Historical and International Studies, Monash University, Australia.

E-mail: .

The question of how to evaluate a conditional has remained a topic of intense debate. In epistemic theories of conditionals, a widely accepted proposal pertains to indicative conditionals – those whose antecedent is compatible with our existing beliefs. In a famous footnote, Ramsey claims that “If two people are arguing ‘If p, will q?’ and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge and arguing on that basis about q…” [4, p. 247n1]. This proposal is known as the Ramsey Test and was the starting point of much subsequent work on theories of conditionals.

This talk will discuss a particular challenge to the Ramsey test. Imagine that you have known your friend Helena for ages, and your judgment about her is that she leads a perfectly ordinary life. However, the local newspaper publishes a sensationalist story about Helena’s past, portraying her as a spy. In that context, it seems acceptable to assert the following sentence:

If Helena is a spy, I do not believe it.

Sentences like the one presented are called “Thomason Conditionals”, following van Fraassen [5, p. 503], who attributes them to Richmond Thomason. Formally, we are dealing with a sentence of the form “If p, then $\neg Bp$”, where B is a modal operator of belief. Following the Ramsey test, the antecedent’s role is to hypothetically update the knowledge base with the information that the antecedent is true, and to evaluate the consequent in this new light. However, once you temporarily assume the truth of the proposition “Helena is a spy,” you are no longer able to accept the consequent, since the update leads to believing the antecedent, that is, $Bp$. Therefore, following the Ramsey test, the conditional is unassertable.
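
Schematically (this is only a rough rendering of the Ramsey test, not the semantics defended in the talk): if acceptance is governed by a clause of the form
$$p > q \ \text{is acceptable in belief state } K \iff q \ \text{is acceptable in } K \ast p,$$
where $K \ast p$ is the state obtained by hypothetically adding $p$, then in $K \ast p$ the agent believes $p$, so $Bp$ is acceptable there and $\neg Bp$ is not; hence $p > \neg Bp$ comes out unassertable.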

The aim of this talk is to present a theory of indicative conditionals capable of modeling Thomason Conditionals. The theory follows two main desiderata. First, it should follow the proposal of the Ramsey test: evaluating conditionals involves assuming the antecedent in a relevant way. Second, we subscribe to the view presented by Gillies [2] that indicatives give information about an agent’s pre-established belief acceptance base. With this in mind, our proposal, inspired by the unified theory of indicative and counterfactual conditionals presented in [3], will apply a probabilistic approach to conditionals and modal logic to model Thomason Conditionals.

[1] J. Bennett, A Philosophical Guide to Conditionals, Oxford University Press, Oxford, 2003.

[2] A. Gillies, Epistemic Conditionals and Conditional Epistemics, Noûs , vol. 38 (2004), no. 11, pp. 585–616.

[3] M. Günther and C. Sisti, Ramsey’s conditionals, Synthese , vol. 200 (2022), no. 2, pp. 165.

[4] F.P. Ramsey, General propositions and causality, The foundations of mathematics and other logical essays (R. B. Braithwaite, editor), Humanities Press, 1950, pp. 237–257.

[5] B.C. van Fraassen, Review of Brian Ellis, Rational Belief Systems, Canadian Journal of Philosophy , vol. 10 (1980), no. 3, pp. 497–511.

▸SHIMPEI ENDO, Truthmakers for vagueness: the case of epistemicism.

Department of Philosophy, University of Sydney.

E-mail: .

When we talk about vagueness, we often talk about truth. Most stances on vagueness, and attitudes towards the related sorites paradox, say something about truth. Degreeists, for instance, suggest that truth is not two-valued but many-valued. The connection between vagueness and truth is apparent. But what about the connection to its source — what makes a truth true — also known as truthmakers? There is such a connection, at least for Sorensen, who appeals to it for his version of epistemicism [3]. According to his truthmaker gap epistemicism, borderline cases are true but ungrounded: they are just true but have no truthmaker at all.

Sorensen fulfills his purpose of explaining what he calls absolute borderline cases. However, this approach has a serious drawback. As Jago has argued, Sorensen’s way of adopting truthmakers leaves no space for higher-order vagueness [2]. This is a high price to pay, because giving an account of higher-order vagueness is one of the merits of epistemicism.

This paper explores a better way of employing truthmakers for epistemicism in the vagueness debate. In particular, it presents a truthmaker semantics for Williamson’s (more popular) version of epistemicism [4] and shows that the resulting semantics satisfies its desiderata (capturing higher-order vagueness and invalidating the KK/DD principles) [5].

Williamson is an epistemicist, so he sees the problem of vagueness as just a special case of a broader issue in epistemology: the problem of inexact knowledge. When we deal with inexact knowledge (e.g. knowing at a glance that there are at least 10 people in the classroom, without knowing the exact number of people there), the contested but still widely believed KK principle fails. KK says that if you know p (Kp) then you know that you know p (KKp). Its counterpart for vagueness is the DD principle, where D is read as the “definitely” operator. This correspondence is immediate, since knowing is safely (i.e. definitely) believing, at least for reliabilists like Williamson.
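
A minimal sketch of the margin-for-error idea (not the truthmaker semantics developed here) makes the failure of KK vivid. Take worlds to be possible head-counts $i \in \mathbb{N}$ and fix a margin $m$; say the agent knows $p$ at $i$ just in case $p$ holds at every $j$ within the margin:
$$i \Vdash Kp \iff \forall j\,\bigl(|i-j|\le m \Rightarrow j \Vdash p\bigr).$$
Then $KKp$ at $i$ requires $p$ throughout a $2m$-wide margin, so $Kp$ can hold while $KKp$ fails; the D operator is meant to behave analogously.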

How can this view be accommodated in the truthmaker framework? The key idea is a match between two inexact notions: inexact knowledge in Williamsonian epistemicism and inexact truthmakers in the truthmaker semantics literature [1]. An inexact truthmaker contains an exact truthmaker, and hence includes some extra, surplus factors contributing to a truth. This surplus part captures Williamson’s core idea of a margin for error, which plays a central role as a buffer making a belief “safe”.

The resulting semantics meets two main formal goals. The first is to invalidate (i.e. provide a counter-example to) the KK/DD principle (if p is definitely so, then it is definitely the case that p is definitely so — if D(p) then DD(p); this corresponds to the KK principle, since vagueness is treated as a special case of inexact knowledge). The second is to accommodate higher-order vagueness, invalidating the inference from IIp to Ip (read I as the “indefinitely” operator, which is dual to the definitely operator D).

[1] K. Fine, Truthmaker Semantics, A Companion to the Philosophy of Language (Bob Hale, Crispin Wright, and Alexander Miller, editors), John Wiley & Sons Ltd., New York, 2017, pp. 556–577.

[2] M. Jago, The Problem with Truthmaker-Gap Epistemicism, Thought: A Journal of Philosophy , vol. 1 (2012), no. 4, pp. 320–329.

[3] R. Sorensen, Vagueness and Contradiction , Oxford University Press, 2001.

[4] T. Williamson, Vagueness , Routledge, 1994.

[5] T. Williamson, On the Structure of Higher-Order Vagueness, Mind , vol. 108 (1999), no. 429, pp. 127–144.

▸PETER FRITZ, Prospects for higher-order contingentism.

Dianoia Institute of Philosophy, Australian Catholic University, Level 5, 250 Victoria Parade, East Melbourne VIC 3002, Australia.

Department of Philosophy, Classics, History of Art and Ideas, University of Oslo, Georg Morgenstiernes hus, Blindernveien 31, 0315 Oslo, Norway.

E-mail: .

Contingentism is the view that it is contingent what there is; necessitism is the view that it is necessary what there is. These views in modal metaphysics have the advantage that they can be formulated using only the resources of first-order modal logic. Using a higher-order extension of this formal language, one can regiment talk of propositions, properties, and relations as well. In such an extension, one can formulate higher-order contingentism, the view that it is contingent what propositions, properties, and relations there are, as well as higher-order necessitism, the view that it is necessary what propositions, properties, and relations there are. Following [1], exploring these views in modal metaphysics means doing modal logic as metaphysics.

Previous explorations of contingentism have for the most part assumed the being constraint, according to which having a property or standing in a relation requires being something. However, there are good reasons to question this assumption. Indeed, there are good reasons to question even the weaker assumption of the modalized being constraint, according to which having a property or standing in a relation requires possibly being something. This talk explores a version of contingentism which rejects this assumption as well, and so holds that even impossibilia, in particular propositions, properties, and relations there could not be, can and do have properties and stand in relations.

To develop such an unfamiliar metaphysical view, we will start with a necessitist view which has recently received much attention in the burgeoning literature on higher-order metaphysics (metaphysics using higher-order logic), namely classicism. This view can be defined by closing a standard system of classical higher-order logic under the rule of equivalence, which allows us to conclude that any two (closed or open) formulas which are provably equivalent express the same proposition, property, or relation. To accommodate contingentism, we weaken the underlying logic from classical to free quantification theory. The presence of $\lambda $-abstraction and $\beta $-conversion ensures that contingentism entails failures of the modalized being constraint, as intended.

With the further assumption of the truth principle, according to which truth entails being entailed by some truth, it can be shown that this form of contingentism allows us to express generality not only with respect to what there is and what there could be, but also with respect to what there could not be. The resulting ability to generalize over impossibilia allows the contingentist to overcome some pressing problems, including certain expressive power challenges. In this sense, the resulting form of contingentism does better than forms of contingentism which have been explored before.

However, this form of generality also gives rise to a new challenge concerning the metasemantics of quantification. It will be noted that according to this contingentist view, this form of generality satisfies all the logical properties of quantification. Further, it will be noted that if it is a kind of quantification, it is the broadest kind of quantification. This strongly suggests that this form of generality is the broadest kind of quantification. However, embracing this conclusion means embracing necessitism. This form of contingentism therefore faces the challenge of explaining why generalizing with respect to what there is, what there could be, and what there could not be should not be a kind of quantification. It is not clear how to motivate such a position.

[1] T. Williamson, Modal Logic as Metaphysics , Oxford University Press, 2013.

▸HUAYU GUO AND BRUNO BENTZEN, Three challenges to Martin-Löf’s distinction between sense and reference.

School of Philosophy, Zhejiang University, 866 Yuhangtang Rd, China.

E-mail: .

E-mail: .

The traditional distinction between sense and reference proposed by Frege faces a difficult challenge when viewed through the lens of constructive semantics. There has been growing interest in this topic since Dummett [1] first claimed that the sense of an expression is related to its reference as a program to its execution. Dummett [2] elaborated on his own views against an explicitly constructive background decades later, and his ideas were then further refined by Martin-Löf [3] in the setting of his own constructive type theory, which explains computation as evaluation. Both papers remained unpublished for over twenty years until recently, and discussion of them in the literature is still lacking.

Some of the main novelties of Martin-Löf’s distinction are the theses that the reference of a sentence is a proposition in the primitive form and that computation is unfolding the definitions of objects to their primitive forms. In this talk, we will raise the following three objections to Martin-Löf’s semantic distinction:

  • We argue that Martin-Löf’s theory of sameness of sense as synonymy contradicts his view, inspired by Dummett, of senses as programs. Martin-Löf identifies the senses of two expressions when they have the same value, even when they are evaluated differently. For instance, the senses expressed by $10^{10}$ and $10000000000$ are identical. Nevertheless, according to the standard we set for comparing whether programs are identical, namely their computational behavior, these two programs are not the same.

  • Martin-Löf [3] explains the evaluation relation as the unfolding of definitions. We argue that Martin-Löf’s views on how to introduce a new function and on what evaluation is are inconsistent. Martin-Löf [4] introduces a new function to a type in terms of more primitive ones. For example, $\neg X$, where X is a propositional variable, is a function defined by $X \rightarrow \bot $. The definition of “ $\neg X$ ” allows it to be evaluated. The contradiction arises because, according to Martin-Löf [3], functional expressions cannot be evaluated.

  • Martin-Löf [3] borrows the scholastic notion of supposition as what an expression stands for on a particular occasion of its use. He distinguishes between meaning and referential supposition, adding that in the judgment $a:A$, we have meaning supposition on the left side of the colon and referential supposition on the right side. For him, this referential supposition has to do with the fact that when $a:A$ and $A = B : type$ we can conclude that $a : B$. We object to this view by proposing a similar admissible rule in type theory that is inconsistent with this claim.

[1] M. Dummett, Frege’s distinction between sense and reference, Truth and Other Enigmas , Harvard Univ. Press, Cambridge, 1978, pp. 116–144.

[2] M. Dummett, Sense and reference from a constructivist standpoint, The Bulletin of Symbolic Logic, vol. 27 (2021), no. 4, pp. 485–500.

[3] P. Martin-Löf, The sense/reference distinction in constructive semantics, The Bulletin of Symbolic Logic, vol. 27 (2021), no. 4, pp. 501–513.

[4] P. Martin-Löf, Philosophical aspects of intuitionistic type theory, transcriptions of lectures given at Leiden University from 23 September to 16 December 1993.

▸LLOYD HUMBERSTONE, A puzzle about the semantics of structural rules.

Monash University, Clayton, Victoria, Australia.

E-mail: .

Three structural rules labelled in the style of Dana Scott to emphasize that we are here concerned with logic in the framework Set-Set, as it is called in [2]:

$(\mathbb {R})$: $A \succ A$ $\qquad $ $(\mathbb {M})$: from $\Gamma \succ \Delta $ pass to $\Gamma ' \succ \Delta '$ whenever $\Gamma \subseteq \Gamma '$ and $\Delta \subseteq \Delta '$ $\qquad $ $(\mathbb {T})$: from $\Gamma \succ \Delta , A$ and $\Gamma , A \succ \Delta $ pass to $\Gamma \succ \Delta $

The formulation of $(\mathbb {T})$ is tailored to a finitary setting, assumed here. We are concerned with sets of sequents closed under the third of these rules and their interpretation by sets of pairs of bivalent valuations $\langle u, v\rangle $, a sequent $\Gamma \succ \Delta $ being said to hold on such a valuation pair just in case we do not have $u(C) = T$ for all $C\in \Gamma $ while $v(D) = F$ for all $D \in \Delta $. Note that the rule $(\mathbb {M})$ preserves the property of holding on an arbitrary valuation pair.

Although valuation pairs as currently conceived scarcely figure in [2] in their own right, there is one reference to them (Remark 5.34.20, p. 750, here paraphrased), extricating a point made in the ‘linguistically heterogeneous’ setting of [1]; as usual, $u \leq v$ means that for all formulas A, $u(A) = T$ implies $v(A) = T$ : every instance of $(\mathbb {R})$ holds on $\langle u, v\rangle $ iff $u \leq v$ ; the rule $(\mathbb {T})$ preserves the property of holding on $\langle u, v\rangle $ iff $v \leq u$ .
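
Reading $(\mathbb {R})$ as the zero-premise rule licensing each sequent $A \succ A$ (as reconstructed above), one half of the paraphrased observation can be checked directly: an instance $A \succ A$ holds on $\langle u, v\rangle $ just in case we do not have $u(A) = T$ and $v(A) = F$, so
$$\text{every instance of } (\mathbb {R}) \text{ holds on } \langle u, v\rangle \iff \forall A\,\bigl(u(A)=T \Rightarrow v(A)=T\bigr) \iff u \leq v.$$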

The above observation shows an intimate connection between the structural rules $(\mathbb {R})$ and $(\mathbb {T})$ and the $\leq $ relation between the valuations in a valuation pair, in the latter case facing us with a simple $\leq $ -condition and its converse. But syntactically these structural rules look very different, since one is a zero-premise sequent-to-sequent rule and the other is a two-premise such rule: a difference it is hard to see as connected to the contrast between $u \leq v$ and $v \leq u$ . The present discussion explores this somewhat puzzling situation, while also touching on the case of the operational (as opposed to structural) rules, as in [3].

[1] L. Humberstone, Heterogeneous Logic, Erkenntnis , vol. 29 (1988), pp. 395–435.

[2] L. Humberstone, The Connectives, MIT Press, Cambridge, MA, 2011.

[3] L. Humberstone, ‘Monotonic Logic’, presented May 30, 2012, at the Arché (U. St. Andrews) Foundations of Logical Consequence Project Audit Conference, Ardtornish Estate, Morvern, Lochaber, Argyllshire, Scotland.

▸LEONARDO PACHECO, Towards a characterization of the $\mu $ -calculus’ collapse to modal logic.

Institute of Discrete Mathematics and Geometry, TU Wien.

E-mail: .

The $\mu $ -calculus is obtained by adding least and greatest fixed-point operators $\mu $ and $\nu $ to modal logic. The alternation depth of a formula measures the entanglement of its least and greatest fixed-point operators. Bradfield [2] showed that, for all $n\in \mathbb {N}$ , there is a formula $W_n$ such that $W_n$ has alternation depth n and, over all Kripke frames, $W_n$ is not equivalent to any formula with alternation depth smaller than n.

The same may not happen over restricted classes of frames: Alberucci and Facchini [1] showed that the $\mu $-calculus collapses to modal logic over $\mathsf {S5}$ frames. That is, every $\mu $-formula is equivalent, over $\mathsf {S5}$ frames, to a formula without fixed-point operators. Later, Pacheco and Tanaka [3] proved that the $\mu $-calculus also collapses to modal logic over $\mathsf {S4.4}$ and $\mathsf {S4.3.2}$ frames.
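
A simple illustrative instance of such a collapse (offered here only by way of example): over $\mathsf {S5}$ frames the fixed-point formula $\nu x.(p \wedge \Box x)$ is already equivalent to a fixed-point-free formula,
$$\mathsf {S5} \vDash \nu x.(p \wedge \Box x) \leftrightarrow (p \wedge \Box p),$$
since within an equivalence class the greatest fixed point is reached after a single unfolding.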

We show how Alberucci and Facchini’s proof generalizes to the $\mu $-calculus’s collapse over n-pigeonhole frames. Let $n\in \omega $. A frame $F =\langle W,R\rangle $ is an n-pigeonhole frame iff, for every sequence $w_0 R^+ w_1 R^+ \cdots R^+ w_n$, there are $i< j\leq n$ such that $w_{i}R = w_{j}R$. We also comment on ongoing work to prove the converse: if the $\mu $-calculus collapses to modal logic over a class of frames $\mathsf {F}$, then there is $n\in \omega $ such that all frames $F\in \mathsf {F}$ are n-pigeonhole.

(This is joint work with Kazuyuki Tanaka.)

[1] Luca Alberucci and Alessandro Facchini, The modal $\mu $-calculus hierarchy over restricted classes of transition systems, The Journal of Symbolic Logic, vol. 74 (2009), no. 4, pp. 1367–1400.

[2] Julian C. Bradfield, The modal mu-calculus alternation hierarchy is strict, Theoretical Computer Science , vol. 195 (1998), no. 2, pp. 133–153.

[3] Leonardo Pacheco and Kazuyuki Tanaka, The alternation hierarchy of the $\mu $-calculus over weakly transitive frames, Lecture Notes in Computer Science, vol. 13468 (2022), pp. 207–220.

▸SALMAN PANAHY, Deductive information.

Independent Scholar.

E-mail: .

There have been a number of attempts in the recent literature to explain the informativity of deduction. Among these, three will be discussed in this paper. If we think of the analyticity of an argument as the fact that the conclusion is contained in the premises, then it is a natural question how logical inference or deductive reasoning can be analytic and yet informative. Usually, this problem is referred to as the Scandal of Deduction (SD). Answers to it are usually expected to explain some common-sense phenomena, such as why some deductions are trivial and some informative, or why we do not possess logical omniscience.

As explained by Bar-Hillel and Carnap [1], deduction can increase our psychological knowledge even though it does not increase our empirical knowledge. There have been two recent attempts to model this psychological information, namely some of Sequoiah-Grayson’s work [5, 6] and Chapter Nine of Berto and Jago’s Impossible Worlds [2]. The former utilizes the frame semantics of relevant logic to model possibly incomplete and inconsistent psychological states, with some additional constraints on the structural rules of contraction and weakening, resulting in a linear logic model. The latter provides an interpretation of classical derivations using the sequent calculus (SC), which binds together worlds that might be impossible. In these impossible worlds, anything logical can go wrong, except one and the same expression being evaluated as both true and false at the same time. These impossible worlds model inconsistent psychological states.
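
For orientation, the structural rules at issue can be stated in standard sequent form (given here only as the familiar textbook formulations, not as the exact rules of either framework):
$$\frac{\Gamma \vdash \Delta }{\Gamma , A \vdash \Delta }\ \text{(weakening)} \qquad \qquad \frac{\Gamma , A, A \vdash \Delta }{\Gamma , A \vdash \Delta }\ \text{(contraction)}.$$
Linear logic rejects both in their unrestricted forms, which is what allows distinct occurrences of a premise to carry distinct pieces of information.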

Separately, Duzi [3] argues that analytic information is procedural rather than psychological. A procedural semantics based on Transparent Intensional Logic is proposed to cash out the procedure we learn in reasoning. In this account of information, synonymous expressions, i.e. expressions that are procedurally isomorphic, convey the same analytic information. In addition, logically true expressions convey more analytic information than expressions that are analytically but not logically true. As an example, the statement ‘no bachelor is married’ is analytically true, but it carries less analytic information than the statement ‘no unmarried man is married’. The latter is logically true and is the result of refining the term ‘bachelor’ with ‘unmarried man’. As a result of this refinement, the expression carries more analytic information.

In this paper, we will evaluate these three theories of deductive information in light of the task of explaining the above-mentioned common-sense phenomena, and we will examine their merits and shortcomings. In summary, Sequoiah-Grayson explains the flow of information, but he does not address what information we gain from deduction, which inferences are more informative, or why we are not logically omniscient. According to Berto and Jago, the information gained from any deduction is a ruling out of worlds that are epistemically feasible but logically impossible. However, their criterion for distinguishing informative deductions from trivial ones over-populates the class of informative deductions, as explained by Panahy [4]. The reason for this is that their criterion for determining the informativeness of a deduction, namely the length of the proof tree in the SC presentation of the deduction in question, is not an accurate means of measuring informativity. As for why we are not logically omniscient, Berto and Jago’s model of psychological or epistemic possibilities (which can be logically impossible) does not possess the structural property of transitivity: we are not omniscient because this epistemic or psychological space is non-transitive.

Duzi’s procedural semantics does a valuable job of explaining what we learn in deductive reasoning. Additionally, it provides a satisfactory explanation of why logically true statements are more (analytically) informative than statements that are analytically but not logically true. Moreover, explaining why we are not logically omniscient is not a difficult task on her account of analytic information. In contrast, her account of trivial deduction, which is based on the definition of synonymy, renders many deductions informative.

It will be argued that Sequoiah-Grayson’s way of modeling psychological information, when tuned to interpret classical SC, can provide different criteria for distinguishing informative from trivial deductions, as well as an explanation for why we are not logically omniscient. According to the proposed account, the bottommost line of a classical proof tree in SC is a situation or information source, and inference rules are information channels. Some psychological information is lost whenever a structural rule forbidden by Sequoiah-Grayson’s linear logic, or any similar logic, appears in a proof tree. A derivation like this is informative in the sense that the information gained during the proof is not present in the bottommost line, which is where we begin reasoning. This account of psychological information can provide a different explanation for why we are not logically omniscient, one which does not rely on the non-transitivity of the psychological or epistemic space (the psychological information flow is actually transitive in this account). This is due to the fact that the sources of information in an argument (the bottommost line of an SC proof tree) contain ambiguous information which will only be clarified during the course of the proof. Moreover, this new account is more discriminating when it comes to distinguishing between deductions that are more informative and those that are less informative.

[1] Y. Bar-Hillel and R. Carnap, Semantic Information, The British Journal for the Philosophy of Science , vol. 4 (1953), no. 16, pp. 147–157.

[2] F. Berto and M. Jago, Impossible Worlds, Oxford University Press , 2019, Chapter 9.

[3] M. Duzi, The Paradox of Inference and the Non-Triviality of Analytic Information, Journal of Philosophical Logic , vol. 39 (2010), no. 10, pp. 473–510.

[4] S. Panahy, Semantic Information and the Complexity of Deduction, Erkenntnis , (2023), no. 4, pp. 1–22.

[5] S. Sequoiah-Grayson, Information Flow and Impossible Situations, Logique et Analyse , vol. 49 (2006), no. 196, pp. 371–398.

[6] S. Sequoiah-Grayson, A Logic of Affordances, Logica Yearbook 2020 (Martin Blicha and Igor Sedlár, editors), College Publications, 2021, pp. 219–236.