
A linear linear lambda-calculus

Published online by Cambridge University Press:  31 May 2024

Alejandro Díaz-Caro*
Affiliation:
Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina; CONICET-Universidad de Buenos Aires, Instituto de Ciencias de la Computación (ICC), Buenos Aires, Argentina; Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
Gilles Dowek
Affiliation:
Inria & ENS Paris-Saclay, Paris, France
Corresponding author: Alejandro Díaz-Caro; Email: [email protected]

Abstract

We present a linearity theorem for a proof language of intuitionistic multiplicative additive linear logic, incorporating addition and scalar multiplication. The proofs in this language are linear in the algebraic sense. This work is part of a broader research program aiming to define a logic with a proof language that forms a quantum programming language.

Type: Special Issue: LSFA 2021 and LSFA 2022
Copyright: © The Author(s), 2024. Published by Cambridge University Press

1. Introduction

1.1 Interstitial rules

The name of linear logic (Girard 1987) suggests that this logic has some relation with the algebraic notion of linearity. A common account of this relation is that a proof of a linear implication between two propositions $A$ and $B$ should not be any function mapping proofs of $A$ to proofs of $B$ , but a linear one. This idea has been fruitfully exploited to build models of linear logic (e.g., Blute 1996; Ehrhard 2002; Girard 1999), but it seems difficult to even formulate it within the proof language itself. Indeed, expressing the properties $f(u + v) = f(u) + f(v)$ and $f(a. u) = a.\, f(u)$ requires an addition and a multiplication by a scalar that are usually not present in proof languages.

The situation has changed with quantum programming languages and the algebraic $\lambda$ -calculus, which mix the usual constructions of programming languages with algebraic operations.

In this paper, we construct a minimal extension of the proof language for intuitionistic multiplicative additive linear logic with addition and multiplication by a scalar, the ${\mathcal L}^{\mathcal S}$ -calculus (where $\mathcal S$ denotes the semi-ring of scalars used), and we prove that the proof language of this logic expresses linear maps only: if $f$ is a proof of an implication between two propositions, then $f(u + v) = f(u) + f(v)$ and $f(a. u) = a.\, f(u)$ .

Our main goal is thus to construct this extension of intuitionistic linear logic and prove this linearity theorem. Only in a second step do we discuss whether such a language forms the basis of a quantum programming language or not.

In classical linear logic, the right rules of the multiplicative falsehood, the additive implication, and the multiplicative disjunction

do not preserve the number of propositions in the right-hand side of the sequents. Hence, these three connectives are excluded from intuitionistic linear logic, and we do not consider them.

Thus, we have the multiplicative truth $\mathfrak{1}$ , the multiplicative implication $\multimap$ , the multiplicative conjunction $\otimes$ , the additive truth $\top$ , the additive falsehood $\mathfrak{0}$ , the additive conjunction $\&$ , and the additive disjunction $\oplus$ .

The introduction rule for the additive conjunction $\&$ is the same as that in usual natural deduction

\begin{equation*}\frac {\Gamma \vdash A \quad \Gamma \vdash B}{\Gamma \vdash A\, \&\,B}\,{\tiny \&-i}\end{equation*}

In particular, the proofs of $A$ , $B$ , and $A \,\&\, B$ are in the same context $\Gamma$ . In contrast, in the introduction rule for the multiplicative conjunction $\otimes$

\begin{equation*}\frac {\Gamma _1 \vdash A \quad \Gamma _2 \vdash B}{\Gamma _1, \Gamma _2 \vdash A \otimes B}\,{\tiny \otimes -i}\end{equation*}

the proofs of $A$ and $B$ are in two contexts $\Gamma _1$ and $\Gamma _2$ and the proof of the conclusion $A \otimes B$ is in the multiset union of these two contexts. But, in both cases, in the elimination rules

\begin{equation*}\frac {\Gamma \vdash A \,\&\, B \quad \Delta, A \vdash C}{\Gamma, \Delta \vdash C}\,{\tiny \&{-}e1} \qquad \frac {\Gamma \vdash A \,\&\, B \quad \Delta, B \vdash C}{\Gamma, \Delta \vdash C}{\tiny \&{-}e2}\end{equation*}
\begin{equation*}\frac {\Gamma \vdash A \otimes B \quad \Delta, A, B \vdash C}{\Gamma, \Delta \vdash C}\,{\tiny \otimes {-}e}\end{equation*}

the proof of the major premise and that of the minor one are in contexts $\Gamma$ and $\Delta, A$ (resp. $\Delta, B$ , $\Delta, A, B$ ) and the proof of the conclusion $C$ is in the multiset union of $\Gamma$ and $\Delta$ . The same holds for the other connectives.

To extend this logic with addition and multiplication by a scalar, we proceed, as in Díaz-Caro and Dowek (2023), in two steps: we first add interstitial rules and then scalars.

An interstitial rule is a deduction rule whose premises are identical to its conclusion. We consider two such rules

\begin{equation*}\frac {\Gamma \vdash A \quad \Gamma \vdash A}{\Gamma \vdash A}\,{\textrm {sum}} \qquad \frac {\Gamma \vdash A}{\Gamma \vdash A}\,{\tiny \textrm {prod}(a)}\end{equation*}

These rules obviously do not extend provability, but they introduce new constructors $\boldsymbol{+}$ and $\bullet$ in the proof language.

We then consider a semi-ring $\mathcal S$ of scalars and replace the introduction rule of the connective $\mathfrak{1}$ with a family of rules $\mathfrak{1}$ -i $(a)$ , one for each scalar, and the rule prod with a family of rules prod $(a)$ , also one for each scalar

\begin{equation*}\frac {}{\vdash {\mathfrak {1}}}\,{\tiny {\mathfrak {1}}-\text {i} (a)} \qquad \qquad \qquad \frac {\Gamma \vdash A}{\Gamma \vdash A}\,{\tiny \textrm {prod}(a)}\end{equation*}

1.2 Commutations

Adding these rules yields proofs that cannot be reduced because the introduction rule of some connective and its elimination rule are separated by an interstitial rule, for example,

\begin{equation*}\dfrac {\dfrac {\dfrac {\dfrac {\pi _1}{\Gamma \vdash A}\quad \dfrac {\pi _2}{\Gamma \vdash B}}{ \Gamma \vdash A \,\&\, B}\,{\&-i} \qquad \dfrac {\dfrac {\pi _3}{\Gamma \vdash A}\quad \dfrac {\pi _4}{\Gamma \vdash B}}{ \Gamma \vdash A \,\&\, B}\,{ \&-i}}{ \Gamma \vdash A \,\&\, B}\,{\textrm {sum}} \qquad \dfrac {\pi _5}{\Gamma, A \,\vdash \,C}}{\Gamma \vdash C}{ \&-e1}\end{equation*}

Reducing such a proof, sometimes called a commuting cut, requires reduction rules to commute the rule sum either with the elimination rule below or with the introduction rules above.

As the commutation with the introduction rules above is not always possible, for example, in the proofs

where $\Gamma _1 \Gamma _2 = \Gamma '_{\!\!1} \Gamma '_{\!\!2} = \Gamma$ , the commutation with the elimination rule below is often preferred. In this paper, we favor the commutation of the interstitial rules with the introduction rules, rather than with the elimination rules, whenever it is possible, that is, for the connectives $\mathfrak{1}$ , $\multimap$ , $\top$ , and $\&$ , and keep the commutation with the elimination rules only for the connectives $\otimes$ and $\oplus$ . For example, with the additive conjunction $\&$ , the proof

reduces to

Such commutation rules yield a stronger introduction property for the considered connective.

For coherence, we commute both rules sum and prod with the elimination rules of the additive disjunction $\oplus$ and of the multiplicative conjunction $\otimes$ , rather than with their introduction rules. But, for the rule prod, both choices are possible.

1.3 Related work

While our primary objective is to introduce a minimal extension to the proof language of linear logic, our work is greatly indebted to quantum programming languages. These languages were pioneers in amalgamating programming language constructs with algebraic operations, such as addition and scalar multiplication.

The language QML (Altenkirch and Grattage 2005) introduced the concept of superposition of terms through an encoding: the $\textrm{if}^\circ$ constructor can receive qubits as conditional parameters. For example, the expression $\textrm{if}^\circ \ a.{|{0}\rangle } + b.{|{1}\rangle }\ \textrm{then}\ u\ \textrm{else}\ v$ represents the linear combination $a.u+b.v$ . Thus, although QML does not have a direct way to represent linear combinations of terms, such linear combinations can always be expressed using this $\textrm{if}^\circ$ constructor. A linearity property, and even a unitarity property, is proved for QML, through a translation to quantum circuits.

The ZX calculus (Coecke and Kissinger 2017) is a graphical language based on a categorical model. It does not have addition or multiplication by a scalar in the syntax, but such constructions could be added and interpreted in the model. This idea of extending the syntax with addition and multiplication by a scalar led to the Many Worlds Calculus (Chardonnet 2023). Although the Many Worlds Calculus and the ${\mathcal L}^{\mathcal S}$ -calculus have several points in common, the ${\mathcal L}^{\mathcal S}$ -calculus takes advantage of being a $\lambda$ -calculus, and not a graphical language, to introduce a primitive $\lambda$ -abstraction, while the Many Worlds Calculus introduces it indirectly through the adjunction between the hom and the tensor. Moreover, the linearity proof for the Many Worlds Calculus uses semantic tools, while that for the ${\mathcal L}^{\mathcal S}$ -calculus is purely syntactic.

The algebraic $\lambda$ -calculus (Vaux 2009) and lineal (Arrighi and Dowek 2017) have syntaxes similar to that of the $\mathcal L^{\mathcal S}$ -calculus we are proposing. However, in the case of the algebraic $\lambda$ -calculus, there is only a simple intuitionistic type system, with no proof of linearity. In the case of lineal, there is no type system, and the linearity is not proved, but forced: the term $f(u+v)$ is defined as $f(u)+f(v)$ and $f(a.u)$ is defined as $a.f(u)$ . Several type systems have been proposed for lineal (Arrighi and Díaz-Caro 2012; Arrighi et al. 2017; Díaz-Caro and Petit 2012; Díaz-Caro et al. 2019a,b), but none of them is related to linear logic, and none is intended to prove linearity rather than force it.

Finally, other sources of the ${\mathcal L}^{\mathcal S}$ -calculus are the quantum lambda calculus (Selinger and Valiron 2006) and the language Q (Zorzi 2016), although the classical nature of their control yields a restricted form of superposition, on data rather than on arbitrary terms.

1.4 Outline of the paper

Extending the proof language of intuitionistic linear logic with interstitial rules and with scalars yields the ${\mathcal L}^{\mathcal S}$ -calculus that we define and study in Section 2. In particular, we prove that the ${\mathcal L}^{\mathcal S}$ -calculus verifies the subject reduction, confluence, termination, and introduction properties. We then show, in Section 3, that the vectors of ${\mathcal S}^n$ can be expressed in this calculus, that the irreducible closed proofs of some propositions are equipped with a structure of vector space, and that all linear functions from ${\mathcal S}^m$ to ${\mathcal S}^n$ can be expressed as proofs of an implication between such propositions. We then prove, in Section 4, the main result of this paper: that, conversely, all the proofs of implications are linear.

Finally, we discuss applications to quantum computing, in Section 5.

2. The ${\mathcal{L}}^{\mathcal{S}}$ -Calculus

2.1 Syntax and operational semantics

The propositions of intuitionistic multiplicative additive linear logic are

\begin{equation*}A = {\mathfrak {1}} \mid A \multimap A \mid A \otimes A \mid \top \mid {\mathfrak {0}} \mid A \,\&\, A \mid A \oplus A\end{equation*}
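
As a concrete illustration, this grammar can be transcribed as a small datatype; the following Haskell sketch uses constructor names of our own choosing.

```haskell
-- A sketch of the proposition grammar (the constructor names are ours).
data Prop
  = One               -- 1   (multiplicative truth)
  | Lolli Prop Prop   -- A ⊸ B
  | Tensor Prop Prop  -- A ⊗ B
  | Top               -- ⊤   (additive truth)
  | Zero              -- 0   (additive falsehood)
  | With Prop Prop    -- A & B
  | Plus Prop Prop    -- A ⊕ B
  deriving (Eq, Show)
```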

Let $\mathcal S$ be a semi-ring of scalars, for instance $\{*\}$ , $\{0, 1\}$ , $\mathbb N$ , $\mathbb Q$ , $\mathbb R$ , or $\mathbb C$ . The proof-terms of the ${\mathcal L}^{\mathcal S}$ -calculus are

\begin{align*} t =\,& x \mid t \boldsymbol{+} u \mid a \bullet t\\ & \mid a.\star \mid \delta _{{\mathfrak{1}}}(t,u) \mid \lambda x.t\mid t\,u \mid t \otimes u \mid \delta _{\otimes }(t, x y. u)\\ & \mid \langle \rangle \mid \delta _{{\mathfrak{0}}}(t) \mid \langle t, u \rangle \mid \delta _{\&}^1(t,x.u) \mid \delta _{\&}^2(t,x.u) \mid \textit{inl}(t)\mid \textit{inr}(t)\mid \delta _{\oplus }(t,x.u,y.v) \end{align*}

where $a$ is a scalar.
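
The proof-term grammar can be transcribed in the same way; in the sketch below we fix the semi-ring of scalars to Haskell's Double and represent variables as strings, both of which are illustrative choices rather than part of the calculus.

```haskell
type Var    = String
type Scalar = Double  -- stands in for an arbitrary semi-ring S

data Term
  = Va Var                            -- x
  | Sum Term Term                     -- t + u   (interstitial sum)
  | Prod Scalar Term                  -- a • t   (interstitial prod)
  | Star Scalar                       -- a.★
  | DeltaOne Term Term                -- δ₁(t, u)
  | Lam Var Term                      -- λx.t
  | App Term Term                     -- t u
  | Tens Term Term                    -- t ⊗ u
  | DeltaTens Term Var Var Term       -- δ⊗(t, xy.u)
  | Unit                              -- ⟨⟩
  | DeltaZero Term                    -- δ₀(t)
  | Pair Term Term                    -- ⟨t, u⟩
  | DeltaAnd1 Term Var Term           -- δ&¹(t, x.u)
  | DeltaAnd2 Term Var Term           -- δ&²(t, x.u)
  | Inl Term                          -- inl(t)
  | Inr Term                          -- inr(t)
  | DeltaPlus Term Var Term Var Term  -- δ⊕(t, x.u, y.v)
  deriving (Eq, Show)
```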

These symbols are in one-to-one correspondence with the rules of intuitionistic multiplicative additive linear logic extended with interstitial rules and scalars, and proof-terms can be seen as mere one-dimensional representations of proof-trees. The proofs of the form $a.\star$ , $\lambda x.t$ , $t \otimes u$ , $\langle \rangle$ , $\langle t, u \rangle$ , $\textit{inl}(t)$ , and $\textit{inr}(t)$ are called introductions, and those of the form $\delta _{{\mathfrak{1}}}(t,u)$ , $t\,u$ , $\delta _{\otimes }(t,xy.u)$ , $\delta _{{\mathfrak{0}}}(t)$ , $\delta _{\&}^1(t,x.u)$ , $\delta _{\&}^2(t,x.u)$ , and $\delta _{\oplus }(t,x.u,y.v)$ are called eliminations. The variables and the proofs of the form $t \boldsymbol{+} u$ and $a \bullet t$ are neither introductions nor eliminations.
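
This classification is immediate on the Term sketch above:

```haskell
-- Introductions, as listed above; variables, sums, prods, and
-- eliminations all fall through to the last clause.
isIntroduction :: Term -> Bool
isIntroduction t = case t of
  Star _   -> True
  Lam _ _  -> True
  Tens _ _ -> True
  Unit     -> True
  Pair _ _ -> True
  Inl _    -> True
  Inr _    -> True
  _        -> False
```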

On the other hand, each symbol can be considered as a construction of a functional programming language: the introductions are standard, while the eliminations are as follows:

  • $\delta _{{\mathfrak{1}}}(t,u)$ is the sequence, sometimes written $t;\;u$ ,

  • $t\,u$ is the application, sometimes written $t(u)$ ,

  • $\delta _{\otimes }(t,xy.u)$ is the let on pairs, sometimes written $\textsf{let }(x,y)=t\ \textsf{in}\ u$ ,

  • $\delta _{{\mathfrak{0}}}(t)$ is the error, sometimes written $\textsf{error}(t)$ ,

  • $\delta _{\&}^1(t,x.u)$ is the first projection, sometimes written $\textsf{let }x=\textsf{fst}(t)\ \textsf{in}\ u$ ,

  • $\delta _{\&}^2(t,x.u)$ is the second projection, sometimes written $\textsf{let }x=\textsf{snd}(t)\ \textsf{in}\ u$ ,

  • $\delta _{\oplus }(t,x.u,y.v)$ is the match, sometimes written $\textsf{match}\ t\ \textsf{in}\ \{\textsf{inl}(x)\mapsto u\mid \textsf{inr}(y)\mapsto v\}$ .

The $\alpha$ -equivalence relation and the free and bound variables of a proof-term are defined as usual. Proof-terms are defined modulo $\alpha$ -equivalence. A proof-term is closed if it contains no free variables. We write $(u/x)t$ for the substitution of $u$ for $x$ in $t$ , and, if $FV(t) \subseteq \{x\}$ , we also use the notation $t\{u\}$ .
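
On the representation above, the substitution $(u/x)t$ can be sketched as follows, assuming the Barendregt convention (bound variables are distinct from the free variables of $u$ ), so that no capture-avoiding renaming is needed.

```haskell
-- The substitution (u/x)t on the Term sketch above, assuming the
-- Barendregt convention, so that no renaming is needed.
subst :: Term -> Var -> Term -> Term
subst u x = go
  where
    go t = case t of
      Va y | y == x        -> u
           | otherwise     -> Va y
      Sum a b              -> Sum (go a) (go b)
      Prod s a             -> Prod s (go a)
      Star s               -> Star s
      DeltaOne a b         -> DeltaOne (go a) (go b)
      Lam y b              -> Lam y (under y b)
      App a b              -> App (go a) (go b)
      Tens a b             -> Tens (go a) (go b)
      DeltaTens a y z b
        | y == x || z == x -> DeltaTens (go a) y z b
        | otherwise        -> DeltaTens (go a) y z (go b)
      Unit                 -> Unit
      DeltaZero a          -> DeltaZero (go a)
      Pair a b             -> Pair (go a) (go b)
      DeltaAnd1 a y b      -> DeltaAnd1 (go a) y (under y b)
      DeltaAnd2 a y b      -> DeltaAnd2 (go a) y (under y b)
      Inl a                -> Inl (go a)
      Inr a                -> Inr (go a)
      DeltaPlus a y b z c  -> DeltaPlus (go a) y (under y b) z (under z c)
    -- do not substitute under a binder that shadows x
    under y b | y == x    = b
              | otherwise = go b
```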

The typing rules are those of Fig. 1. These typing rules are exactly the deduction rules of intuitionistic linear natural deduction, with proof-terms, with two differences: the interstitial rules and the scalars.

Figure 1. The deduction rules of the ${\mathcal L}^{\mathcal S}$ -calculus.

Figure 2. The reduction rules of the ${\mathcal L}^{\mathcal S}$ -calculus.

The reduction rules are those of Fig. 2. As usual, the reduction can occur in any context. The one-step reduction relation is written $\longrightarrow$ , its inverse $\longleftarrow$ , its reflexive-transitive closure $\longrightarrow ^*$ , the reflexive-transitive closure of its inverse $\mathrel{{}^*{\longleftarrow }}$ , and its reflexive-symmetric-transitive closure $\equiv$ . The first seven rules correspond to the reduction of cuts on the connectives $\mathfrak{1}$ , $\multimap$ , $\otimes$ , $\&$ , and $\oplus$ . The twelve others allow the interstitial rules sum and prod to be commuted with the introduction rules of the connectives $\mathfrak{1}$ , $\multimap$ , $\top$ , and $\&$ , and with the elimination rules of the connectives $\otimes$ and $\oplus$ . For instance, the rule

\begin{equation*}\langle t, u \rangle \boldsymbol {+} \langle v, w \rangle \longrightarrow \langle t \boldsymbol {+} v, u \boldsymbol {+} w \rangle \end{equation*}

pushes the symbol $\boldsymbol{+}$ inside the pair. The scalars are added in the rule

\begin{equation*}{a.\star } \boldsymbol {+} b.\star \longrightarrow (a+b).\star \end{equation*}

and multiplied in the rule

\begin{equation*}a \bullet b.\star \longrightarrow (a \times b).\star \end{equation*}
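
To make these rules concrete, here is a sketch of the top-level reduction step on the Term representation above. The function names are ours; the sequential use of subst stands in for the simultaneous substitution $(v/x, w/y)$ , which is safe when $y$ does not occur free in $v$ (e.g., for closed proofs), and reduction in an arbitrary context would additionally require a congruence traversal, which we omit.

```haskell
-- The nineteen top-level rules of Fig. 2 on the Term sketch above;
-- 'subst' is the substitution sketched earlier.
step :: Term -> Maybe Term
step t = case t of
  -- cuts
  DeltaOne (Star a) u          -> Just (Prod a u)
  App (Lam x b) v              -> Just (subst v x b)
  DeltaTens (Tens v w) x y b   -> Just (subst w y (subst v x b))
  DeltaAnd1 (Pair v _) x b     -> Just (subst v x b)
  DeltaAnd2 (Pair _ w) x b     -> Just (subst w x b)
  DeltaPlus (Inl v) x b _ _    -> Just (subst v x b)
  DeltaPlus (Inr w) _ _ y c    -> Just (subst w y c)
  -- commutations of sum
  Sum (Star a) (Star b)        -> Just (Star (a + b))
  Sum (Lam x b) (Lam x' c)
    | x == x'                  -> Just (Lam x (Sum b c))  -- up to α-conversion
  Sum Unit Unit                -> Just Unit
  Sum (Pair v w) (Pair v' w')  -> Just (Pair (Sum v v') (Sum w w'))
  DeltaTens (Sum v w) x y b    -> Just (Sum (DeltaTens v x y b) (DeltaTens w x y b))
  DeltaPlus (Sum v w) x b y c  -> Just (Sum (DeltaPlus v x b y c) (DeltaPlus w x b y c))
  -- commutations of prod
  Prod a (Star b)              -> Just (Star (a * b))
  Prod a (Lam x b)             -> Just (Lam x (Prod a b))
  Prod _ Unit                  -> Just Unit
  Prod a (Pair v w)            -> Just (Pair (Prod a v) (Prod a w))
  DeltaTens (Prod a v) x y b   -> Just (Prod a (DeltaTens v x y b))
  DeltaPlus (Prod a v) x b y c -> Just (Prod a (DeltaPlus v x b y c))
  _                            -> Nothing
```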

We now prove the subject reduction, confluence, termination, and introduction properties of the ${\mathcal L}^{\mathcal S}$ -calculus.

2.2 Subject reduction

The subject reduction property is not completely trivial: as noted above, it would, for example, fail if we commuted the sum rule with the introduction rule of the multiplicative conjunction $\otimes$ .

Lemma 2.1 (Substitution). If $\Gamma,x:B\vdash t:A$ and $\Delta \vdash u:B$ , then $\Gamma,\Delta \vdash (u/x)t:A$ .

Proof. By induction on the structure of $t$ . Since the deduction system is syntax-directed, the generation lemma is trivial and will be used implicitly in the proof.

  • If $t=x$ , then $\Gamma =\varnothing$ and $A=B$ . Thus, $\Gamma,\Delta \vdash (u/x)t:A$ is the same as $\Delta \vdash u:B$ , which is valid by hypothesis.

  • The proof $t$ cannot be a variable $y$ different from $x$ , as such a variable $y$ is not a proof in $\Gamma, x:B$ .

  • If $t=v_1\boldsymbol{+} v_2$ , then $\Gamma,x:B\vdash v_1:A$ and $\Gamma,x:B\vdash v_2:A$ . By the induction hypothesis, $\Gamma,\Delta \vdash (u/x)v_1:A$ and $\Gamma,\Delta \vdash (u/x)v_2:A$ . Therefore, by the rule sum, $\Gamma,\Delta \vdash (u/x)v_1\boldsymbol{+}(u/x)v_2:A$ . Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t = a \bullet v$ , then $\Gamma, x:B \vdash v:A$ . By the induction hypothesis, $\Gamma,\Delta \vdash (u/x)v:A$ . Therefore, by the rule prod, $\Gamma,\Delta \vdash a \bullet (u/x)v:A$ . Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • The proof $t$ cannot be of the form $t=a.\star$ , which is not a proof in $\Gamma, x:B$ .

  • If $t=\delta _{{\mathfrak{1}}}(v_1,v_2)$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1,x:B\vdash v_1:{\mathfrak{1}}$ and $\Gamma _2\vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:{\mathfrak{1}}$ and, by the rule $\mathfrak{1}$ -e, $\Gamma,\Delta \vdash \delta _{{\mathfrak{1}}}((u/x)v_1,v_2):A$ .

    2. If $\Gamma _1\vdash v_1:{\mathfrak{1}}$ and $\Gamma _2,x:B\vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _2,\Delta \vdash (u/x)v_2:A$ and, by the rule $\mathfrak{1}$ -e, $\Gamma,\Delta \vdash \delta _{{\mathfrak{1}}}(v_1,(u/x)v_2):A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\lambda y.v$ , then $A=C\multimap D$ and $\Gamma,y:C,x:B\vdash v:D$ . By the induction hypothesis, $\Gamma,\Delta,y:C\vdash (u/x)v:D$ , so, by the rule $\multimap$ -i, $\Gamma,\Delta \vdash \lambda y.(u/x)v:A$ . Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=v_1\,v_2$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1,x:B\vdash v_1:C\multimap A$ and $\Gamma _2\vdash v_2:C$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:C\multimap A$ and, by the rule $\multimap$ -e, $\Gamma,\Delta \vdash (u/x)v_1\,v_2:A$ .

    2. If $\Gamma _1\vdash v_1:C\multimap A$ and $\Gamma _2,x:B\vdash v_2:C$ , then, by the induction hypothesis, $\Gamma _2,\Delta \vdash (u/x)v_2:C$ and, by the rule $\multimap$ -e, $\Gamma,\Delta \vdash v_1\,(u/x)v_2:A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=v_1\otimes v_2$ , then $A=A_1\otimes A_2$ , $\Gamma =\Gamma _1,\Gamma _2$ , and there are two cases.

    1. If $\Gamma _1,x:B\vdash v_1:A_1$ and $\Gamma _2\vdash v_2:A_2$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:A_1$ , and, by the rule $\otimes$ -i, $\Gamma,\Delta \vdash (u/x)v_1\otimes v_2:A$ .

    2. If $\Gamma _1\vdash v_1:A_1$ and $\Gamma _2,x:B\vdash v_2:A_2$ , this case is analogous to the previous one.

    Hence, $\Gamma,\Delta \vdash (u/x)t:A$ .

  • If $t=\delta _{\otimes }(v_1,yz.v_2)$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1,x:B\vdash v_1:C\otimes D$ and $\Gamma _2,y:C,z:D\vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:C\otimes D$ and, by the rule $\otimes$ -e, $\Gamma,\Delta \vdash \delta _{\otimes }((u/x)v_1,yz.v_2):A$ .

    2. If $\Gamma _1 \vdash v_1:C\otimes D$ and $\Gamma _2,y:C,z:D,x:B\vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _2,\Delta,y:C,z:D\vdash (u/x)v_2:A$ and, by the rule $\otimes$ -e, $\Gamma,\Delta \vdash \delta _{\otimes }(v_1,yz.(u/x)v_2):A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\langle \rangle$ , then $A=\top$ , and since $(u/x)t = t = \langle \rangle$ , by rule $\top$ -i, $\Gamma,\Delta \vdash t:A$ .

  • If $t=\delta _{{\mathfrak{0}}}(v)$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1\vdash v:{\mathfrak{0}}$ , then $x\notin FV(v)$ , so $(u/x)v = v$ , $\Gamma _1\vdash (u/x)v:{\mathfrak{0}}$ , and, by the rule $\mathfrak{0}$ -e, $\Gamma,\Delta \vdash \delta _{{\mathfrak{0}}}((u/x)v):A$ .

    2. If $\Gamma _1,x:B\vdash v:{\mathfrak{0}}$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v:{\mathfrak{0}}$ , and, by the rule $\mathfrak{0}$ -e, $\Gamma,\Delta \vdash \delta _{{\mathfrak{0}}}((u/x)v):A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\langle v_1, v_2 \rangle$ , then $A=A_1\& A_2$ and $\Gamma,x:B\vdash v_1:A_1$ and $\Gamma,x:B\vdash v_2:A_2$ . By the induction hypothesis, $\Gamma,\Delta \vdash (u/x)v_1:A_1$ and $\Gamma,\Delta \vdash (u/x)v_2:A_2$ . Therefore, by the rule $\&$ -i, $\Gamma,\Delta \vdash \langle (u/x)v_1, (u/x)v_2 \rangle :A$ . Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\delta _{\&}^1(v_1,y.v_2)$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1,x:B\vdash v_1:C\& D$ and $\Gamma _2,y:C \vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:C\& D$ and, by the rule $\&$ -e, $\Gamma,\Delta \vdash \delta _{\&}^1((u/x)v_1,y.v_2):A$ .

    2. If $\Gamma _1 \vdash v_1:C\& D$ and $\Gamma _2,y:C, x:B\vdash v_2:A$ , then, by the induction hypothesis, $\Gamma _2,\Delta,y:C \vdash (u/x)v_2:A$ and, by the rule $\&$ -e, $\Gamma,\Delta \vdash \delta _{\&}^1(v_1,y.(u/x)v_2):A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\delta _{\&}^2(v_1,y.v_2)$ . The proof is analogous.

  • If $t=\textit{inl}(v)$ , then $A=C\oplus D$ and $\Gamma,x:B\vdash v:C$ . By the induction hypothesis, $\Gamma,\Delta \vdash (u/x)v:C$ and so, by the rule $\oplus$ -i, $\Gamma,\Delta \vdash \textit{inl}((u/x)v):A$ . Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

  • If $t=\textit{inr}(v)$ . The proof is analogous.

  • If $t=\delta _{\oplus }(v_1,y.v_2,z.v_3)$ , then $\Gamma =\Gamma _1,\Gamma _2$ and there are two cases.

    1. If $\Gamma _1,x:B \vdash v_1:C\oplus D$ , $\Gamma _2,y:C \vdash v_2:A$ , and $\Gamma _2,z:D \vdash v_3:A$ , then, by the induction hypothesis, $\Gamma _1,\Delta \vdash (u/x)v_1:C\oplus D$ and, by the rule $\oplus$ -e, $\Gamma,\Delta \vdash \delta _{\oplus }((u/x)v_1,y.v_2,z.v_3):A$ .

    2. If $\Gamma _1\vdash v_1:C\oplus D$ , $\Gamma _2,y:C,x:B\vdash v_2:A$ , and $\Gamma _2,z:D,x:B\vdash v_3:A$ , then, by the induction hypothesis, $\Gamma _2,\Delta,y:C\vdash (u/x)v_2:A$ and $\Gamma _2,\Delta,z:D\vdash (u/x)v_3:A$ . By the rule $\oplus$ -e, $\Gamma,\Delta \vdash \delta _{\oplus }(v_1,y.(u/x)v_2,z.(u/x)v_3):A$ .

    Hence, $\Gamma, \Delta \vdash (u/x)t:A$ .

Theorem 2.2 (Subject reduction). If $\Gamma \vdash t:A$ and $t \longrightarrow u$ , then $\Gamma \vdash u:A$ .

Proof. By induction on the definition of the relation $\longrightarrow$ . The context cases are trivial, so we focus on the reductions at top level. As the generation lemma is trivial, we use it implicitly in the proof.

  • If $t = \delta _{{\mathfrak{1}}}(a.\star,v)$ and $u = a \bullet v$ , then $\vdash a.\star :{\mathfrak{1}}$ and $\Gamma \vdash v:A$ . Hence, $\Gamma \vdash a \bullet v:A$ .

  • If $t = (\lambda x. v_1)v_2$ and $u = (v_2/x)v_1$ , then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1, x:B \vdash v_1:A$ , and $\Gamma _2 \vdash v_2:B$ . By Lemma 2.1, $\Gamma \vdash u:A$ .

  • If $t=\delta _{\otimes }(v_1\otimes v_2,xy.\,v_3)$ and $u=(v_1/x,v_2/y)v_3$ , then $\Gamma =\Gamma _1,\Gamma _2,\Gamma _3$ , $\Gamma _1\vdash v_1:B_1$ , $\Gamma _2\vdash v_2:B_2$ , and $\Gamma _3,x:B_1,y:B_2\vdash v_3:A$ . By Lemma 2.1, $\Gamma _2,\Gamma _3,x:B_1\vdash (v_2/y)v_3:A$ , and, by Lemma 2.1 again, $\Gamma \vdash (v_1/x,v_2/y)v_3:A$ . That is, $\Gamma \vdash u:A$ .

  • If $t = \delta _{\&}^1(\langle v_1, v_2 \rangle, y.v_3)$ and $u = (v_1/y)v_3$ , then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B$ , $\Gamma _1 \vdash v_2:C$ , and $\Gamma _2, y:B \vdash v_3:A$ . By Lemma 2.1, $\Gamma \vdash (v_1/y)v_3:A$ , that is $\Gamma \vdash u:A$ .

  • If $t = \delta _{\&}^2(\langle v_1, v_2 \rangle, y.v_3)$ and $u = (v_2/y)v_3$ , the proof is analogous.

  • If $t = \delta _{\oplus }(\textit{inl}(v_1), y. v_2, z. v_3)$ and $u = (v_1/y)v_2$ then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B$ , and $\Gamma _2, y:B \vdash v_2:A$ . By Lemma 2.1, $\Gamma \vdash u:A$ .

  • If $t = \delta _{\oplus }(\textit{inr}(v_1), y. v_2, z. v_3)$ and $u = (v_1/z)v_3$ , the proof is analogous.

  • If $t ={a.\star } \boldsymbol{+} b.\star$ and $u = (a+b).\star$ , then $A ={\mathfrak{1}}$ and $\Gamma$ is empty. Thus, $\Gamma \vdash u:A$ .

  • If $t = \lambda x. v_1 \boldsymbol{+} \lambda x. v_2$ and $u = \lambda x. (v_1 \boldsymbol{+} v_2)$ then $A = B \multimap C$ , $\Gamma, x:B \vdash v_1:C$ , and $\Gamma, x:B \vdash v_2:C$ . Thus, $\Gamma \vdash u:A$ .

  • If $t=\delta _{\otimes }(v_1\boldsymbol{+} v_2,xy.v_3)$ and $u=\delta _{\otimes }(v_1,xy.v_3)\boldsymbol{+}\delta _{\otimes }(v_2,xy.v_3)$ , then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B \otimes C$ , $\Gamma _1 \vdash v_2:B \otimes C$ , and $\Gamma _2, x:B,y:C \vdash v_3:A$ . Hence, $\Gamma \vdash u:A$ .

  • If $t=\langle \rangle \boldsymbol{+}\langle \rangle$ and $u=\langle \rangle$ , then $\Gamma \vdash \langle \rangle :A$ , that is, $\Gamma \vdash u:A$ .

  • If $t = \langle v_1, v_2 \rangle \boldsymbol{+} \langle v_3, v_4 \rangle$ and $u = \langle v_1 \boldsymbol{+} v_3, v_2 \boldsymbol{+} v_4 \rangle$ then $A = B \,\&\, C$ , $\Gamma \vdash v_1:B$ , $\Gamma \vdash v_2:C$ , and $\Gamma \vdash v_3:B$ , $\Gamma \vdash v_4:C$ . Thus, $\Gamma \vdash u:A$ .

  • If $t = \delta _{\oplus }(v_1 \boldsymbol{+} v_2,x.v_3,y.v_4)$ and $u = \delta _{\oplus }(v_1,x.v_3,y.v_4) \boldsymbol{+} \delta _{\oplus }(v_2,x.v_3,y.v_4)$ then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B \oplus C$ , $\Gamma _1 \vdash v_2:B \oplus C$ , $\Gamma _2, x:B \vdash v_3:A$ , $\Gamma _2, y:C \vdash v_4:A$ . Hence, $\Gamma \vdash u:A$ .

  • If $t = a \bullet b.\star$ and $u = (a \times b).\star$ , then $A ={\mathfrak{1}}$ and $\Gamma$ is empty. Thus, $\Gamma \vdash u:A$ .

  • If $t = a \bullet \lambda x. v$ and $u = \lambda x. a \bullet v$ , then $A = B \multimap C$ and $\Gamma, x:B \vdash v:C$ . Thus, $\Gamma \vdash u:A$ .

  • If $t=\delta _{\otimes }(a\bullet v_1,xy.v_2)$ and $u=a\bullet \delta _{\otimes }(v_1,xy.v_2)$ , then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B \otimes C$ and, $\Gamma _2, x:B,y:C \vdash v_2:A$ . Thus, $\Gamma \vdash u:A$ .

  • If $t=a\bullet \langle \rangle$ and $u=\langle \rangle$ , then $\Gamma \vdash \langle \rangle :A$ . Thus, $\Gamma \vdash u:A$ .

  • If $t = a \bullet \langle v_1, v_2 \rangle$ and $u = \langle a \bullet v_1, a \bullet v_2 \rangle$ , then $A = B \,\&\, C$ , $\Gamma \vdash v_1:B$ , and $\Gamma \vdash v_2:C$ . Thus, $\Gamma \vdash u:A$ .

  • If $t = \delta _{\oplus }(a \bullet v_1,x.v_2,y.v_3)$ and $u = a \bullet \delta _{\oplus }(v_1,x.v_2,y.v_3)$ , then $\Gamma = \Gamma _1, \Gamma _2$ , $\Gamma _1 \vdash v_1:B \oplus C$ , $\Gamma _2, x:B \vdash v_2:A$ , and $\Gamma _2, y:C \vdash v_3:A$ . Thus, $\Gamma \vdash u:A$ .

2.3 Confluence

Theorem 2.3 (Confluence). The ${\mathcal L}^{\mathcal S}$ -calculus is confluent, that is, whenever $u \mathrel{{}^*{\longleftarrow }} t \longrightarrow ^* v$ , there exists a $w$ such that $u \longrightarrow ^* w \mathrel{{}^*{\longleftarrow }} v$ .

Proof. The reduction system of Fig. 2, applied to well-formed proofs, is left-linear and has no critical pairs (Mayr and Nipkow 1998, Section 6). By Mayr and Nipkow (1998, Theorem 6.8), it is confluent.

2.4 Termination

We now prove that the ${\mathcal L}^{\mathcal S}$ -calculus strongly terminates, that is, that all reduction sequences are finite. To handle the symbols $\boldsymbol{+}$ and $\bullet$ and the associated reduction rules, we prove the strong termination of an extended reduction system, in the spirit of Girard’s ultra-reduction (Girard 1972), whose strong termination obviously implies that of the rules of Fig. 2.

Definition 2.4 (Ultra-reduction). Ultra-reduction is defined with the rules of Fig. 2, plus the rules

\begin{align*} t \boldsymbol{+} u & \longrightarrow t\\ t \boldsymbol{+} u & \longrightarrow u\\ a \bullet t & \longrightarrow t \end{align*}
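
On the representation above, ultra-reduction makes the relation nondeterministic, so a sketch naturally collects all top-level reducts instead of at most one:

```haskell
import Data.Maybe (maybeToList)

-- Top-level ultra-reducts: the rules of Fig. 2 (via 'step') plus the
-- three projection rules above, which make the relation nondeterministic.
ultraReducts :: Term -> [Term]
ultraReducts t = maybeToList (step t) ++ extra t
  where
    extra (Sum a b)  = [a, b]   -- t + u ⟶ t  and  t + u ⟶ u
    extra (Prod _ a) = [a]      -- a • t ⟶ t
    extra _          = []
```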

Definition 2.5 (Length of reduction). If $t$ is a strongly terminating proof, we write $\ell (t)$ for the maximum length of a reduction sequence starting from $t$ .

Lemma 2.6 (Termination of a sum). If $t$ and $u$ strongly terminate, then so does $t \boldsymbol{+} u$ .

Proof. We prove that all the one-step reducts of $t \boldsymbol{+} u$ strongly terminate, by induction first on $\ell (t) + \ell (u)$ and then on the size of $t$ .

If the reduction takes place in $t$ or in $u$ , we apply the induction hypothesis. Otherwise, the reduction is at the root and the rule used is one of

\begin{align*}{a.\star } \boldsymbol{+}{b.\star } &\longrightarrow (a + b).\star \\ (\lambda x.t') \boldsymbol{+} (\lambda x.u') &\longrightarrow \lambda x.(t' \boldsymbol{+} u')\\ \langle \rangle \boldsymbol{+} \langle \rangle &\longrightarrow \langle \rangle \\ \langle t'_{\!\!1}, t'_{\!\!2} \rangle \boldsymbol{+} \langle u'_{\!\!1}, u'_{\!\!2} \rangle &\longrightarrow \langle t'_{\!\!1} \boldsymbol{+} u'_{\!\!1}, t'_{\!\!2} \boldsymbol{+} u'_{\!\!2} \rangle \\ t \boldsymbol{+} u &\longrightarrow t\\ t \boldsymbol{+} u &\longrightarrow u \end{align*}

In the first case, the proof $(a + b).\star$ is irreducible, hence it strongly terminates. In the second, by induction hypothesis, the proof $t' \boldsymbol{+} u'$ strongly terminates, thus so does the proof $\lambda x.(t' \boldsymbol{+} u')$ . In the third, the proof $\langle \rangle$ is irreducible, hence it strongly terminates. In the fourth, by induction hypothesis, the proofs $t'_{\!\!1} \boldsymbol{+} u'_{\!\!1}$ and $t'_{\!\!2} \boldsymbol{+} u'_{\!\!2}$ strongly terminate, hence so does the proof $\langle t'_{\!\!1} \boldsymbol{+} u'_{\!\!1}, t'_{\!\!2} \boldsymbol{+} u'_{\!\!2} \rangle$ . In the fifth and the sixth, the proofs $t$ and $u$ strongly terminate.

Lemma 2.7 (Termination of a product). If $t$ strongly terminates, then so does $a \bullet t$ .

Proof. We prove that all the one-step reducts of $a \bullet t$ strongly terminate, by induction first on $\ell (t)$ and then on the size of $t$ .

If the reduction takes place in $t$ , we apply the induction hypothesis. Otherwise, the reduction is at the root and the rule used is one of

\begin{align*} a \bullet b.\star &\longrightarrow (a \times b).\star \\ a \bullet (\lambda x.t') &\longrightarrow \lambda x. a \bullet t'\\ a \bullet \langle \rangle &\longrightarrow \langle \rangle \\ a \bullet \langle t'_{\!\!1}, t'_{\!\!2} \rangle &\longrightarrow \langle a \bullet t'_{\!\!1}, a \bullet t'_{\!\!2} \rangle \\ a \bullet t &\longrightarrow t \end{align*}

In the first case, the proof $(a \times b).\star$ is irreducible, hence it strongly terminates. In the second, by induction hypothesis, the proof $a \bullet t'$ strongly terminates, thus so does the proof $\lambda x. a \bullet t'$ . In the third, the proof $\langle \rangle$ is irreducible, hence it strongly terminates. In the fourth, by induction hypothesis, the proofs $a \bullet t'_{\!\!1}$ and $a \bullet t'_{\!\!2}$ strongly terminate, hence so does the proof $\langle a \bullet t'_{\!\!1}, a \bullet t'_{\!\!2} \rangle$ . In the fifth, the proof $t$ strongly terminates.

Definition 2.8. Let $\textsf{ST}$ be the set of strongly terminating terms. We define, by induction on the proposition $A$ , a set of proofs $[\![ A ]\!]$ :

\begin{align*} [\![{\mathfrak{1}} ]\!]&=\textsf{ST}\\ [\![ A \multimap B ]\!] &=\{t\in \textsf{ST}\mid \textrm{If }t\to ^*\lambda x.u\textrm{ then for all }v \in [\![ A ]\!], (v/x)u \in [\![ B ]\!]\}\\ [\![ A \otimes B ]\!] &= \{t\in \textsf{ST}\mid \textrm{If }t\to ^*u \otimes v\textrm{ then }u \in [\![ A ]\!]\textrm{ and }v \in [\![ B ]\!]\}\\ [\![ \top ]\!] &=\textsf{ST}\\ [\![{\mathfrak{0}} ]\!] &=\textsf{ST}\\ [\![ A \,\&\, B ]\!] &=\{t\in \textsf{ST}\mid \textrm{If }t\to ^* \langle u, v \rangle \textrm{ then }u \in [\![ A ]\!]\textrm{ and }v \in [\![ B ]\!]\}\\ [\![ A \oplus B ]\!] &= \{t\in \textsf{ST}\mid \textrm{If }t\to ^*\textit{inl}(u)\textrm{ then } u \in [\![ A ]\!] \textrm{ and if }t\to ^*\textit{inr}(v)\textrm{ then } v \in [\![ B ]\!]\} \end{align*}

Lemma 2.9 (Variables). For any $A$ , the set $[\![ A ]\!]$ contains all the variables.

Proof. A variable is irreducible, hence it strongly terminates. Moreover, it never reduces to an introduction.

Lemma 2.10 (Closure by reduction). If $t \in [\![ A ]\!]$ and $t \longrightarrow ^* t'$ , then $t' \in [\![ A ]\!]$ .

Proof. If $t \longrightarrow ^* t'$ and $t$ strongly terminates, then $t'$ strongly terminates.

Furthermore, if $A$ has the form $B \multimap C$ and $t'$ reduces to $\lambda x.u$ , then so does $t$ , hence for every $v \in [\![ B ]\!]$ , $(v/x)u \in [\![ C ]\!]$ . If $A$ has the form $B \otimes C$ and $t'$ reduces to $u \otimes v$ , then so does $t$ , hence $u \in [\![ B ]\!]$ and $v \in [\![ C ]\!]$ . If $A$ has the form $B \,\&\, C$ and $t'$ reduces to $\langle u, v \rangle$ , then so does $t$ , hence $u \in [\![ B ]\!]$ and $v \in [\![ C ]\!]$ . If $A$ has the form $B \oplus C$ and $t'$ reduces to $\textit{inl}(u)$ , then so does $t$ , hence $u \in [\![ B ]\!]$ . And if $A$ has the form $B \oplus C$ and $t'$ reduces to $\textit{inr}(v)$ , then so does $t$ , hence $v \in [\![ C ]\!]$ .

Lemma 2.11 (Girard’s lemma). Let $t$ be a proof that is not an introduction, such that all the one-step reducts of $t$ are in $[\![ A ]\!]$ . Then $t \in [\![ A ]\!]$ .

Proof. Let $t, t_2, \dots$ be a reduction sequence starting from $t$ . If it has a single element, it is finite. Otherwise, we have $t \longrightarrow t_2$ . As $t_2 \in [\![ A ]\!]$ , it strongly terminates and the reduction sequence is finite. Thus, $t$ strongly terminates.

Furthermore, if $A$ has the form $B \multimap C$ and $t \longrightarrow ^* \lambda x.u$ , then let $t, t_2, \ldots, t_n$ be a reduction sequence from $t$ to $\lambda x.u$ . As $t_n$ is an introduction and $t$ is not, $n \geq 2$ . Thus, $t \longrightarrow t_2 \longrightarrow ^* t_n$ . We have $t_2 \in [\![ A ]\!]$ , thus for all $v \in [\![ B ]\!]$ , $(v/x)u \in [\![ C ]\!]$ .

If $A$ has the form $B \otimes C$ and $t \longrightarrow ^* u \otimes v$ , then let $t, t_2, \ldots, t_n$ be a reduction sequence from $t$ to $u \otimes v$ . As $t_n$ is an introduction and $t$ is not, $n \geq 2$ . Thus, $t \longrightarrow t_2 \longrightarrow ^* t_n$ . We have $t_2 \in [\![ A ]\!]$ , thus $u \in [\![ B ]\!]$ and $v \in [\![ C ]\!]$ .

If $A$ has the form $B \,\&\, C$ and $t \longrightarrow ^* \langle u, v \rangle$ , the proof is similar.

If $A$ has the form $B \oplus C$ and $t \longrightarrow ^* \textit{inl}(u)$ , then let $t, t_2,\dots, t_n$ be a reduction sequence from $t$ to $\textit{inl}(u)$ . As $t_n$ is an introduction and $t$ is not, $n \geq 2$ . Thus, $t \longrightarrow t_2 \longrightarrow ^* t_n$ . We have $t_2 \in [\![ A ]\!]$ , thus $u \in [\![ B ]\!]$ .

If $A$ has the form $B \oplus C$ and $t \longrightarrow ^* \textit{inr}(v)$ , the proof is similar.

In Lemmas 2.12 to 2.27, we prove the adequacy of each proof constructor.

Lemma 2.12 (Adequacy of $\boldsymbol{+}$ ). If $t_1 \in [\![ A ]\!]$ and $t_2 \in [\![ A ]\!]$ , then $t_1 \boldsymbol{+} t_2 \in [\![ A ]\!]$ .

Proof. By induction on $A$ . The proofs $t_1$ and $t_2$ strongly terminate. Thus, by Lemma 2.6, the proof $t_1 \boldsymbol{+} t_2$ strongly terminates. Furthermore:

  • If the proposition $A$ has the form $B \multimap C$ , and $t_1 \boldsymbol{+} t_2 \longrightarrow ^* \lambda x. v$ then either $t_1 \longrightarrow ^* \lambda x. u_1$ , $t_2 \longrightarrow ^* \lambda x. u_2$ , and $u_1 \boldsymbol{+} u_2 \longrightarrow ^* v$ , or $t_1 \longrightarrow ^* \lambda x. v$ , or $t_2 \longrightarrow ^* \lambda x. v$ . In the first case, as $t_1$ and $t_2$ are in $[\![ A ]\!]$ , for every $w$ in $[\![ B ]\!]$ , $(w/x)u_1 \in [\![ C ]\!]$ and $(w/x)u_2 \in [\![ C ]\!]$ . By induction hypothesis, $(w/x)(u_1 \boldsymbol{+} u_2) = (w/x)u_1 \boldsymbol{+} (w/x)u_2 \in [\![ C ]\!]$ and by Lemma 2.10, $(w/x)v \in [\![ C ]\!]$ . In the second and the third, as $t_1$ and $t_2$ are in $[\![ A ]\!]$ , for every $w$ in $[\![ B ]\!]$ , $(w/x)v \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \otimes C$ , and $t_1 \boldsymbol{+} t_2 \longrightarrow ^* v \otimes v'$ then $t_1 \longrightarrow ^* v \otimes v'$ , or $t_2 \longrightarrow ^* v \otimes v'$ . As $t_1$ and $t_2$ are in $[\![ A ]\!]$ , $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \,\&\, C$ , and $t_1 \boldsymbol{+} t_2 \longrightarrow ^* \langle v, v' \rangle$ then $t_1 \longrightarrow ^* \langle u_1, u'_{\!\!1} \rangle$ , $t_2 \longrightarrow ^* \langle u_2, u'_{\!\!2} \rangle$ , $u_1 \boldsymbol{+} u_2 \longrightarrow ^* v$ , and $u'_{\!\!1} \boldsymbol{+} u'_{\!\!2} \longrightarrow ^* v'$ , or $t_1 \longrightarrow ^* \langle v, v' \rangle$ , or $t_2 \longrightarrow ^* \langle v, v' \rangle$ . In the first case, as $t_1$ and $t_2$ are in $[\![ A ]\!]$ , $u_1$ and $u_2$ are in $[\![ B ]\!]$ and $u'_{\!\!1}$ and $u'_{\!\!2}$ are in $[\![ C ]\!]$ . By induction hypothesis, $u_1 \boldsymbol{+} u_2 \in [\![ B ]\!]$ and $u'_{\!\!1} \boldsymbol{+} u'_{\!\!2} \in [\![ C ]\!]$ and by Lemma 2.10, $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ . In the second and the third, as $t_1$ and $t_2$ are in $[\![ A ]\!]$ , $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \oplus C$ , and $t_1 \boldsymbol{+} t_2 \longrightarrow ^* \textit{inl}(v)$ , then $t_1 \longrightarrow ^* \textit{inl}(v)$ or $t_2 \longrightarrow ^* \textit{inl}(v)$ . As $t_1$ and $t_2$ are in $[\![ A ]\!]$ , $v \in [\![ B ]\!]$ . The proof is similar if $t_1 \boldsymbol{+} t_2 \longrightarrow ^* \textit{inr}(v)$ .

Lemma 2.13 (Adequacy of $\bullet$ ). If $t \in [\![ A ]\!]$ , then $a \bullet t \in [\![ A ]\!]$ .

Proof. By induction on $A$ . The proof $t$ strongly terminates. Thus, by Lemma 2.7, the proof $a \bullet t$ strongly terminates. Furthermore:

  • If the proposition $A$ has the form $B \multimap C$ , and $a \bullet t \longrightarrow ^* \lambda x. v$ then either $t \longrightarrow ^* \lambda x. u$ and $a \bullet u \longrightarrow ^* v$ , or $t \longrightarrow ^* \lambda x. v$ . In the first case, as $t$ is in $[\![ A ]\!]$ , for every $w$ in $[\![ B ]\!]$ , $(w/x)u \in [\![ C ]\!]$ . By induction hypothesis, $(w/x) (a \bullet u) = a \bullet (w/x)u \in [\![ C ]\!]$ and by Lemma 2.10, $(w/x)v \in [\![ C ]\!]$ . In the second, as $t$ is in $[\![ A ]\!]$ , for every $w$ in $[\![ B ]\!]$ , $(w/x)v \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \otimes C$ , and $a \bullet t \longrightarrow ^* v \otimes v'$ then $t \longrightarrow ^* v \otimes v'$ . As $t$ is in $[\![ A ]\!]$ , $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \,\&\, C$ , and $a \bullet t \longrightarrow ^* \langle v, v' \rangle$ then $t \longrightarrow ^* \langle u, u' \rangle$ , $a \bullet u \longrightarrow ^* v$ , and $a \bullet u' \longrightarrow ^* v'$ , or $t \longrightarrow ^* \langle v, v' \rangle$ . In the first case, as $t$ is in $[\![ A ]\!]$ , $u$ is in $[\![ B ]\!]$ and $u'$ is in $[\![ C ]\!]$ . By induction hypothesis, $a \bullet u \in [\![ B ]\!]$ and $a \bullet u' \in [\![ C ]\!]$ and by Lemma 2.10, $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ . In the second, as $t$ is in $[\![ A ]\!]$ , $v \in [\![ B ]\!]$ and $v' \in [\![ C ]\!]$ .

  • If the proposition $A$ has the form $B \oplus C$ , and $a \bullet t \longrightarrow ^* \textit{inl}(v)$ , then $t \longrightarrow ^* \textit{inl}(v)$ . Then, by Lemma 2.10, $\textit{inl}(v) \in [\![ A ]\!]$ , hence $v \in [\![ B ]\!]$ . The proof is similar if $a \bullet t \longrightarrow ^* \textit{inr}(v)$ .

Lemma 2.14 (Adequacy of $a.\star$ ). We have $a.\star \in [\![{\mathfrak{1}} ]\!]$ .

Proof. As $a.\star$ is irreducible, it strongly terminates, hence $a.\star \in [\![{\mathfrak{1}} ]\!]$ .

Lemma 2.15 (Adequacy of $\lambda$ ). If, for all $u \in [\![ A ]\!]$ , $(u/x)t \in [\![ B ]\!]$ , then $\lambda x.t \in [\![ A \multimap B ]\!]$ .

Proof. By Lemma 2.9, $x \in [\![ A ]\!]$ , thus $t = (x/x)t \in [\![ B ]\!]$ . Hence, $t$ strongly terminates. Consider a reduction sequence starting from $\lambda x.t$ . This sequence can only reduce $t$ , hence it is finite. Thus, $\lambda x.t$ strongly terminates.

Furthermore, if $\lambda x.t \longrightarrow ^* \lambda x.t'$ , then $t \longrightarrow ^* t'$ . Let $u \in [\![ A ]\!]$ , $(u/x)t \longrightarrow ^* (u/x)t'$ . As $(u/x)t \in [\![ B ]\!]$ , by Lemma 2.10, $(u/x)t' \in [\![ B ]\!]$ .

Lemma 2.16 (Adequacy of $\otimes$ ). If $t_1 \in [\![ A ]\!]$ and $t_2 \in [\![ B ]\!]$ , then $t_1 \otimes t_2 \in [\![ A \otimes B ]\!]$ .

Proof. The proofs $t_1$ and $t_2$ strongly terminate. Consider a reduction sequence starting from $t_1 \otimes t_2$ . This sequence can only reduce $t_1$ and $t_2$ , hence it is finite. Thus, $t_1 \otimes t_2$ strongly terminates.

Furthermore, if $t_1 \otimes t_2 \longrightarrow ^* t'_{\!\!1} \otimes t'_{\!\!2}$ , then $t_1 \longrightarrow ^* t'_{\!\!1}$ and $t_2 \longrightarrow ^* t'_{\!\!2}$ . By Lemma 2.10, $t'_{\!\!1} \in [\![ A ]\!]$ and $t'_{\!\!2} \in [\![ B ]\!]$ .

Lemma 2.17 (Adequacy of $\langle \rangle$ ). We have $\langle \rangle \in [\![ \top ]\!]$ .

Proof. As $\langle \rangle$ is irreducible, it strongly terminates, hence $\langle \rangle \in [\![ \top ]\!]$ .

Lemma 2.18 (Adequacy of $\langle .\,, . \rangle$ ). If $t_1 \in [\![ A ]\!]$ and $t_2 \in [\![ B ]\!]$ , then $\langle t_1, t_2 \rangle \in [\![ A \,\&\, B ]\!]$ .

Proof. The proofs $t_1$ and $t_2$ strongly terminate. Consider a reduction sequence starting from $\langle t_1, t_2 \rangle$ . This sequence can only reduce $t_1$ and $t_2$ , hence it is finite. Thus, $\langle t_1, t_2 \rangle$ strongly terminates.

Furthermore, if $\langle t_1, t_2 \rangle \longrightarrow ^* \langle t'_{\!\!1}, t'_{\!\!2} \rangle$ , then $t_1 \longrightarrow ^* t'_{\!\!1}$ and $t_2 \longrightarrow ^* t'_{\!\!2}$ . By Lemma 2.10, $t'_{\!\!1} \in [\![ A ]\!]$ and $t'_{\!\!2} \in [\![ B ]\!]$ .

Lemma 2.19 (Adequacy of $\textit{inl}$ ). If $t \in [\![ A ]\!]$ , then $\textit{inl}(t) \in [\![ A \oplus B ]\!]$ .

Proof. The proof $t$ strongly terminates. Consider a reduction sequence starting from $\textit{inl}(t)$ . This sequence can only reduce $t$ , hence it is finite. Thus, $\textit{inl}(t)$ strongly terminates.

Furthermore, if $\textit{inl}(t) \longrightarrow ^* \textit{inl}(t')$ , then $t \longrightarrow ^* t'$ . By Lemma 2.10, $t' \in [\![ A ]\!]$ . And $\textit{inl}(t)$ never reduces to $\textit{inr}(t')$ .

Lemma 2.20 (Adequacy of $\textit{inr}$ ). If $t \in [\![ B ]\!]$ , then $\textit{inr}(t) \in [\![ A \oplus B ]\!]$ .

Proof. Similar to the proof of Lemma 2.19.

Lemma 2.21 (Adequacy of $\delta _{{\mathfrak{1}}}$ ). If $t_1 \in [\![{\mathfrak{1}} ]\!]$ and $t_2 \in [\![ C ]\!]$ , then $\delta _{{\mathfrak{1}}}(t_1,t_2) \in [\![ C ]\!]$ .

Proof. The proofs $t_1$ and $t_2$ strongly terminate. We prove, by induction on $\ell (t_1) + \ell (t_2)$ , that $\delta _{{\mathfrak{1}}}(t_1,t_2) \in [\![ C ]\!]$ . Using Lemma 2.11, we only need to prove that each of its one-step reducts is in $[\![ C ]\!]$ . If the reduction takes place in $t_1$ or $t_2$ , then we apply Lemma 2.10 and the induction hypothesis.

Otherwise, the proof $t_1$ is $a.\star$ , and the reduct is $a \bullet t_2$ . We conclude with Lemma 2.13.

Lemma 2.22 (Adequacy of application). If $t_1 \in [\![ A \multimap B ]\!]$ and $t_2 \in [\![ A ]\!]$ , then $t_1\,t_2 \in [\![ B ]\!]$ .

Proof. The proofs $t_1$ and $t_2$ strongly terminate. We prove, by induction on $\ell (t_1) + \ell (t_2)$ , that $t_1\,t_2 \in [\![ B ]\!]$ . Using Lemma 2.11, we only need to prove that each of its one-step reducts is in $[\![ B ]\!]$ . If the reduction takes place in $t_1$ or in $t_2$ , then we apply Lemma 2.10 and the induction hypothesis.

Otherwise, the proof $t_1$ has the form $\lambda x.u$ and the reduct is $(t_2/x)u$ . As $\lambda x.u \in [\![ A \multimap B ]\!]$ , we have $(t_2/x)u \in [\![ B ]\!]$ .

Lemma 2.23 (Adequacy of $\delta _{\otimes }$ ). If $t_1 \in [\![ A \otimes B ]\!]$ and for all $u$ in $[\![ A ]\!]$ , for all $v$ in $[\![ B ]\!]$ , $(u/x,v/y)t_2 \in [\![ C ]\!]$ , then $\delta _{\otimes }(t_1, xy.t_2) \in [\![ C ]\!]$ .

Proof. By Lemma 2.9, $x \in [\![ A ]\!]$ and $y \in [\![ B ]\!]$ , thus $t_2 = (x/x,y/y)t_2 \in [\![ C ]\!]$ . Hence, $t_1$ and $t_2$ strongly terminate. We prove, by induction on $\ell (t_1) + \ell (t_2)$ , that $\delta _{\otimes }(t_1, xy.t_2) \in [\![ C ]\!]$ . Using Lemma 2.11, we only need to prove that each of its one-step reducts is in $[\![ C ]\!]$ . If the reduction takes place in $t_1$ or $t_2$ , then we apply Lemma 2.10 and the induction hypothesis. Otherwise, either:

  • The proof $t_1$ has the form $w_2 \otimes w_3$ and the reduct is $(w_2/x,w_3/y)t_2$ . As $w_2 \otimes w_3 \in [\![ A \otimes B ]\!]$ , we have $w_2 \in [\![ A ]\!]$ and $w_3 \in [\![ B ]\!]$ . Hence, $(w_2/x,w_3/y)t_2 \in [\![ C ]\!]$ .

  • The proof $t_1$ has the form $t'_{\!\!1} \boldsymbol{+} t''_{\!\!\!1}$ and the reduct is $\delta _{\otimes }(t'_{\!\!1}, xy.t_2) \boldsymbol{+} \delta _{\otimes }(t''_{\!\!\!1}, xy.t_2)$ . As $t_1 \longrightarrow t'_{\!\!1}$ with an ultra-reduction rule, we have by Lemma 2.10, $t'_{\!\!1} \in [\![ A \otimes B ]\!]$ . Similarly, $t''_{\!\!\!1} \in [\![ A \otimes B ]\!]$ . Thus, by induction hypothesis, $\delta _{\otimes }(t'_{\!\!1}, xy.t_2) \in [\![ C ]\!]$ and $\delta _{\otimes }(t''_{\!\!\!1}, xy.t_2) \in [\![ C ]\!]$ . We conclude with Lemma 2.12.

  • The proof $t_1$ has the form $a \bullet t'_{\!\!1}$ and the reduct is $a \bullet \delta _{\otimes }(t'_{\!\!1}, xy.t_2)$ . As $t_1 \longrightarrow t'_{\!\!1}$ with an ultra-reduction rule, we have by Lemma 2.10, $t'_{\!\!1} \in [\![ A \otimes B ]\!]$ . Thus, by induction hypothesis, $\delta _{\otimes }(t'_{\!\!1}, xy.t_2) \in [\![ C ]\!]$ . We conclude with Lemma 2.13.

Lemma 2.24 (Adequacy of $\delta _{{\mathfrak{0}}}$ ). If $t \in [\![{\mathfrak{0}} ]\!]$ , then $\delta _{{\mathfrak{0}}}(t) \in [\![ C ]\!]$ .

Proof. The proof $t$ strongly terminates. Consider a reduction sequence starting from $\delta _{{\mathfrak{0}}}(t)$ . This sequence can only reduce $t$ , hence it is finite. Thus, $\delta _{{\mathfrak{0}}}(t)$ strongly terminates. Moreover, it never reduces to an introduction.

Lemma 2.25 (Adequacy of $\delta _{\&}^1$ ). If $t_1 \in [\![ A \,\&\, B ]\!]$ and, for all $u$ in $[\![ A ]\!]$ , $(u/x)t_2 \in [\![ C ]\!]$ , then $\delta _{\&}^1(t_1, x.t_2) \in [\![ C ]\!]$ .

Proof. By Lemma 2.9, $x \in [\![ A ]\!]$ , thus $t_2 = (x/x)t_2 \in [\![ C ]\!]$ . Hence, $t_1$ and $t_2$ strongly terminate. We prove, by induction on $\ell (t_1) + \ell (t_2)$ , that $\delta _{\&}^1(t_1, x.t_2) \in [\![ C ]\!]$ . Using Lemma 2.11, we only need to prove that each of its one-step reducts is in $[\![ C ]\!]$ . If the reduction takes place in $t_1$ or $t_2$ , then we apply Lemma 2.10 and the induction hypothesis.

Otherwise, the proof $t_1$ has the form $\langle u, v \rangle$ and the reduct is $(u/x)t_2$ . As $\langle u, v \rangle \in [\![ A \,\&\, B ]\!]$ , we have $u \in [\![ A ]\!]$ . Hence, $(u/x)t_2 \in [\![ C ]\!]$ .

Lemma 2.26 (Adequacy of $\delta _{\&}^2$ ). If $t_1 \in [\![ A \,\&\, B ]\!]$ and, for all $u$ in $[\![ B ]\!]$ , $(u/x)t_2 \in [\![ C ]\!]$ , then $\delta _{\&}^2(t_1, x.t_2) \in [\![ C ]\!]$ .

Proof. Similar to the proof of Lemma 2.25.

Lemma 2.27 (Adequacy of $\delta _{\oplus }$ ). If $t_1 \in [\![ A \oplus B ]\!]$ , for all $u$ in $[\![ A ]\!]$ , $(u/x)t_2 \in [\![ C ]\!]$ , and, for all $v$ in $[\![ B ]\!]$ , $(v/y)t_3 \in [\![ C ]\!]$ , then $\delta _{\oplus }(t_1, x.t_2, y.t_3) \in [\![ C ]\!]$ .

Proof. By Lemma 2.9, $x \in [\![ A ]\!]$ , thus $t_2 = (x/x)t_2 \in [\![ C ]\!]$ . In the same way, $t_3 \in [\![ C ]\!]$ . Hence, $t_1$ , $t_2$ , and $t_3$ strongly terminate. We prove, by induction on $\ell (t_1) + \ell (t_2) + \ell (t_3)$ , that $\delta _{\oplus }(t_1, x.t_2, y.t_3) \in [\![ C ]\!]$ . Using Lemma 2.11, we only need to prove that each of its one-step reducts is in $[\![ C ]\!]$ . If the reduction takes place in $t_1$ , $t_2$ , or $t_3$ , then we apply Lemma 2.10 and the induction hypothesis. Otherwise, either:

  • The proof $t_1$ has the form $\textit{inl}(w_2)$ and the reduct is $(w_2/x)t_2$ . As $\textit{inl}(w_2) \in [\![ A \oplus B ]\!]$ , we have $w_2 \in [\![ A ]\!]$ . Hence, $(w_2/x)t_2 \in [\![ C ]\!]$ .

  • The proof $t_1$ has the form $\textit{inr}(w_3)$ and the reduct is $(w_3/y)t_3$ . As $\textit{inr}(w_3) \in [\![ A \oplus B ]\!]$ , we have $w_3 \in [\![ B ]\!]$ . Hence, $(w_3/y)t_3 \in [\![ C ]\!]$ .

  • The proof $t_1$ has the form $t'_{\!\!1} \boldsymbol{+} t''_{\!\!\!1}$ and the reduct is $\delta _{\oplus }(t'_{\!\!1}, x.t_2, y.t_3) \boldsymbol{+} \delta _{\oplus }(t''_{\!\!\!1}, x.t_2, y.t_3)$ . As $t_1 \longrightarrow t'_{\!\!1}$ with an ultra-reduction rule, we have by Lemma 2.10, $t'_{\!\!1} \in [\![ A \oplus B ]\!]$ . Similarly, $t''_{\!\!\!1} \in [\![ A \oplus B ]\!]$ . Thus, by induction hypothesis, $\delta _{\oplus }(t'_{\!\!1}, x.t_2, y.t_3) \in [\![ C ]\!]$ and $\delta _{\oplus }(t''_{\!\!\!1}, x.t_2, y.t_3) \in [\![ C ]\!]$ . We conclude with Lemma 2.12.

  • The proof $t_1$ has the form $a \bullet t'_{\!\!1}$ and the reduct is $a \bullet \delta _{\oplus }(t'_{\!\!1}, x.t_2, y.t_3)$ . As $t_1 \longrightarrow t'_{\!\!1}$ with an ultra-reduction rule, we have by Lemma 2.10, $t'_{\!\!1} \in [\![ A \oplus B ]\!]$ . Thus, by induction hypothesis, $\delta _{\oplus }(t'_{\!\!1}, x.t_2, y.t_3) \in [\![ C ]\!]$ . We conclude with Lemma 2.13.

Theorem 2.28 (Adequacy). If $\Gamma \vdash t:A$ and $\sigma$ is a substitution mapping each variable $x:B\in \Gamma$ to an element of $[\![ B ]\!]$ , then $\sigma t \in [\![ A ]\!]$ .

Proof. By induction on the structure of $t$ .

If $t$ is a variable, then, by definition of $\sigma$ , $\sigma t \in [\![ A ]\!]$ . For the sixteen other proof constructors, we use Lemmas 2.12 to 2.27. As all cases are similar, we just give a few examples.

  • If $t = \langle u, v \rangle$ , where $u$ is a proof of $C$ and $v$ a proof of $D$ , then, by induction hypothesis, $\sigma u \in [\![ C ]\!]$ and $\sigma v \in [\![ D ]\!]$ . Hence, by Lemma 2.18, $\langle \sigma u, \sigma v \rangle \in [\![ C \,\&\, D ]\!]$ , that is, $\sigma t \in [\![ A ]\!]$ .

  • If $t = \delta _{\&}^1(u_1,x.u_2)$ , where $u_1$ is a proof of $C \,\&\, D$ and $u_2$ a proof of $A$ , then, by induction hypothesis, $\sigma u_1 \in [\![ C \,\&\, D ]\!]$ and, for all $v$ in $[\![ C ]\!]$ , $(v/x)\sigma u_2 \in [\![ A ]\!]$ . Hence, by Lemma 2.25, $\delta _{\&}^1(\sigma u_1,x. \sigma u_2) \in [\![ A ]\!]$ , that is, $\sigma t \in [\![ A ]\!]$ .

Corollary 2.29 (Termination). Let $\Gamma \vdash t:A$ . Then, $t$ strongly terminates.

Proof. Let $\sigma$ be the substitution mapping each variable $x:B\in \Gamma$ to itself. Note that, by Lemma 2.9, this variable is an element of $[\![ B ]\!]$ . Then, $t = \sigma t$ is an element of $[\![ A ]\!]$ . Hence, it strongly terminates.

2.5 Introduction

We now prove that closed irreducible proofs end with an introduction rule, except those of propositions of the form $A \otimes B$ and $A \oplus B$ , which are linear combinations of proofs ending with an introduction rule.

Theorem 2.30 (Introduction). Let $t$ be a closed irreducible proof of $A$ .

  • If $A$ has the form $\mathfrak{1}$ , then $t$ has the form $a.\star$ .

  • If $A$ has the form $B \multimap C$ , then $t$ has the form $\lambda x.u$ .

  • If $A$ has the form $B \otimes C$ , then $t$ has the form $u \otimes v$ , $u \boldsymbol{+} v$ , or $a \bullet u$ .

  • If $A$ has the form $\top$ , then $t$ is $\langle \rangle$ .

  • The proposition $A$ is not $\mathfrak{0}$ .

  • If $A$ has the form $B \,\&\, C$ , then $t$ has the form $\langle u, v \rangle$ .

  • If $A$ has the form $B \oplus C$ , then $t$ has the form $\textit{inl}(u)$ , $\textit{inr}(u)$ , $u \boldsymbol{+} v$ , or $a \bullet u$ .

Proof. By induction on the structure of $t$ .

We first remark that, as the proof $t$ is closed, it is not a variable. Then, we prove that it cannot be an elimination.

  • If $t = \delta _{{\mathfrak{1}}}(u,v)$ , then $u$ is a closed irreducible proof of $\mathfrak{1}$ , hence, by induction hypothesis, it has the form $a.\star$ and the proof $t$ is reducible.

  • If $t = u\,v$ , then $u$ is a closed irreducible proof of $B \multimap A$ , hence, by induction hypothesis, it has the form $\lambda x.u_1$ and the proof $t$ is reducible.

  • If $t = \delta _{\otimes }(u,xy.v)$ , then $u$ is a closed irreducible proof of $B \otimes C$ , hence, by induction hypothesis, it has the form $u_1 \otimes u_2$ , $u_1 \boldsymbol{+} u_2$ , or $a \bullet u_1$ and the proof $t$ is reducible.

  • If $t = \delta _{{\mathfrak{0}}}(u)$ , then $u$ is a closed irreducible proof of $\mathfrak{0}$ and, by induction hypothesis, no such proof exists.

  • If $t = \delta _{\&}^1(u,x.v)$ , then $u$ is a closed irreducible proof of $B \,\&\, C$ , hence, by induction hypothesis, it has the form $\langle u_1, u_2 \rangle$ and the proof $t$ is reducible.

  • If $t = \delta _{\&}^2(u,x.v)$ , then $u$ is a closed irreducible proof of $B \,\&\, C$ , hence, by induction hypothesis, it has the form $\langle u_1, u_2 \rangle$ and the proof $t$ is reducible.

  • If $t = \delta _{\oplus }(u,x.v,y.w)$ , then $u$ is a closed irreducible proof of $B \oplus C$ , hence, by induction hypothesis, it has the form $\textit{inl}(u_1)$ , $\textit{inr}(u_1)$ , $u_1 \boldsymbol{+} u_2$ , or $a \bullet u_1$ and the proof $t$ is reducible.

Hence, $t$ is an introduction, a sum, or a product.

If $t$ has the form $a.\star$ , then $A$ is $\mathfrak{1}$ . If it has the form $\lambda x.u$ , then $A$ has the form $B \multimap C$ . If it has the form $u \otimes v$ , then $A$ has the form $B \otimes C$ . If it is $\langle \rangle$ , then $A$ is $\top$ . If it has the form $\langle u, v \rangle$ , then $A$ has the form $B \,\&\, C$ . If it has the form $\textit{inl}(u)$ or $\textit{inr}(u)$ , then $A$ has the form $B \oplus C$ . We prove that, if it has the form $u \boldsymbol{+} v$ or $a \bullet u$ , $A$ has the form $B \otimes C$ or $B \oplus C$ .

  • If $t = u \boldsymbol{+} v$ , then the proofs $u$ and $v$ are two closed and irreducible proofs of $A$ . If $A ={\mathfrak{1}}$ then, by induction hypothesis, they both have the form $a.\star$ and the proof $t$ is reducible. If $A$ has the form $B \multimap C$ then, by induction hypothesis, they are both abstractions and the proof $t$ is reducible. If $A = \top$ then, by induction hypothesis, they are both $\langle \rangle$ and the proof $t$ is reducible. If $A ={\mathfrak{0}}$ , then they are irreducible proofs of $\mathfrak{0}$ and, by induction hypothesis, no such proofs exist. If $A$ has the form $B \,\&\, C$ , then, by induction hypothesis, they are both pairs and the proof $t$ is reducible. Hence, $A$ has the form $B \otimes C$ or $B \oplus C$ .

  • If $t = a \bullet u$ , then the proof $u$ is a closed and irreducible proof of $A$ . If $A ={\mathfrak{1}}$ then, by induction hypothesis, $u$ has the form $b.\star$ and the proof $t$ is reducible. If $A$ has the form $B \multimap C$ then, by induction hypothesis, it is an abstraction and the proof $t$ is reducible. If $A = \top$ then, by induction hypothesis, it is $\langle \rangle$ and the proof $t$ is reducible. If $A ={\mathfrak{0}}$ , then it is an irreducible proof of $\mathfrak{0}$ and, by induction hypothesis, no such proof exists. If $A$ has the form $B \,\&\, C$ , then, by induction hypothesis, it is a pair and the proof $t$ is reducible. Hence, $A$ has the form $B \otimes C$ or $B \oplus C$ .

Remark 2.31. We reap here the benefit of commuting, when possible, the interstitial rules with the introduction rules, as closed irreducible proofs of $\mathfrak{1}$ , $A \multimap B$ , $\top$ and $A \,\&\, B$ are genuine introductions.

Those of $A \otimes B$ and $A \oplus B$ are linear combinations of introductions. But $u \otimes v \boldsymbol{+} u' \otimes v'$ is not convertible to $u' \otimes v' \boldsymbol{+} u \otimes v$, $u \otimes v \boldsymbol{+} u \otimes v$ is not convertible to $2 \bullet (u \otimes v)$, $u_1 \otimes v \boldsymbol{+} u_2 \otimes v$ is not convertible to $(u_1 \boldsymbol{+} u_2) \otimes v$, etc. Thus, the proofs of $A \otimes B$ are formal, rather than genuine, linear combinations of pairs formed with a closed irreducible proof of $A$ and a closed irreducible proof of $B$. Such a set still needs to be quotiented by a proper equivalence relation to provide the tensor product of the two semi-modules (Díaz-Caro and Malherbe Reference Díaz-Caro and Malherbe2022).

Lemma 2.32 (Disjunction). If the proposition $A \oplus B$ has a closed proof, then $A$ has a closed proof or $B$ has a closed proof.

Proof. Consider a closed proof of $A \oplus B$ and its irreducible form $t$ . We prove, by induction on the structure of $t$ , that $A$ has a closed proof or $B$ has a closed proof. By Theorem 2.30, $t$ has the form $\textit{inl}(u)$ , $\textit{inr}(u)$ , $u \boldsymbol{+} v$ , or $a \bullet u$ . If it has the form $\textit{inl}(u)$ , $u$ is a closed proof of $A$ . If it has the form $\textit{inr}(u)$ , $u$ is a closed proof of $B$ . If it has the form $u \boldsymbol{+} v$ or $a \bullet u$ , $u$ is a closed irreducible proof of $A \oplus B$ . Thus, by induction hypothesis, $A$ has a closed proof or $B$ has a closed proof.

3. Vectors and Matrices

From now on, we take the set of scalars $\mathcal S$ to be a field, rather than just a semi-ring, to aid the intuition with vector spaces. However, all the results are also valid for semi-modules over the semi-ring $\mathcal S$, except those involving the additive inverse (namely, Definition 3.3 and item 4 of Lemma 3.4).

3.1 Vectors

As there is one rule $\mathfrak{1}$-i for each scalar $a$, there is one closed irreducible proof $a.\star$ for each scalar $a$. Thus, the closed irreducible proofs $a.\star$ of $\mathfrak{1}$ are in one-to-one correspondence with the elements of $\mathcal S$. Therefore, the proofs $\langle a.\star, b.\star \rangle$ of ${\mathfrak{1}} \,\&\,{\mathfrak{1}}$ are in one-to-one correspondence with the elements of ${\mathcal S}^2$, the proofs $\langle \langle a.\star, b.\star \rangle, c.\star \rangle$ of $({\mathfrak{1}} \,\&\,{\mathfrak{1}}) \,\&\,{\mathfrak{1}}$, as well as the proofs $\langle a.\star, \langle b.\star, c.\star \rangle \rangle$ of ${\mathfrak{1}} \,\&\, ({\mathfrak{1}} \,\&\,{\mathfrak{1}})$, are in one-to-one correspondence with the elements of ${\mathcal S}^3$, etc.

Hence, as any vector space of finite dimension $n$ is isomorphic to ${\mathcal S}^n$ , we have a way to express the vectors of any $\mathcal S$ -vector space of finite dimension. Yet, choosing an isomorphism between a vector space and ${\mathcal S}^n$ amounts to choosing a basis in this vector space, thus the expression of a vector depends on the choice of a basis. This situation is analogous to that of matrix formalisms. Matrices can represent vectors and linear functions, but the matrix representation is restricted to finite dimensional vector spaces, and the representation of a vector depends on the choice of a basis. A change of basis in the vector space is reflected by the use of a transformation matrix.

Definition 3.1 ( $\mathcal V$ ). The set $\mathcal V$ is inductively defined as follows: ${\mathfrak{1}} \in{\mathcal V}$ , and if $A$ and $B$ are in $\mathcal V$ , then so is $A \,\&\, B$ .

We now show that if $A \in{\mathcal V}$ , then the set of closed irreducible proofs of $A$ has a vector space structure.

Definition 3.2 (Zero vector). If $A \in{\mathcal V}$ , we define the proof $0_A$ of $A$ by induction on $A$ . If $A ={\mathfrak{1}}$ , then $0_A = 0.\star$ . If $A = A_1 \,\&\, A_2$ , then $0_A = \langle 0_{A_1}, 0_{A_2} \rangle$ .

Definition 3.3 (Additive inverse). If $A \in{\mathcal V}$, and $t$ is a proof of $A$, we define the proof $- t$ of $A$ by induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to some $a.\star$, and we let $- t = (-a).\star$. If $A = A_1 \,\&\, A_2$, then $t$ reduces to $\langle t_1, t_2 \rangle$, where $t_1$ is a proof of $A_1$ and $t_2$ a proof of $A_2$, and we let $-t = \langle - t_1, - t_2 \rangle$.
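
To fix intuitions, the constructions of this subsection can be transcribed in a few lines of Haskell. The sketch below is only an illustration under explicit assumptions: the constructor names are hypothetical, scalars are taken as Double for concreteness (any field would do, and any semi-ring if neg is dropped), and norm implements the four commutation rules recalled in Remark 3.9 below.

```haskell
-- A sketch of the vector fragment of the L^S-calculus: the propositions
-- of Definition 3.1 and the proofs built from a.*, pairs, sums, and
-- scalar products. Constructor names are hypothetical.
data Prop = One | Amp Prop Prop deriving (Eq, Show)

data Term
  = Star Double       -- a.*      (introduction of 1, one rule per scalar)
  | Pair Term Term    -- <t, u>   (introduction of &)
  | Sum  Term Term    -- t + u    (interstitial rule sum)
  | Scal Double Term  -- a . t    (interstitial rule prod)
  deriving (Eq, Show)

-- Normalization by the commutation rules of Remark 3.9:
--   a.* + b.*      --> (a+b).*        a . b.*    --> (a*b).*
--   <t,u> + <v,w>  --> <t+v, u+w>     a . <t,u>  --> <a.t, a.u>
norm :: Term -> Term
norm (Star a)   = Star a
norm (Pair t u) = Pair (norm t) (norm u)
norm (Sum t u)  = case (norm t, norm u) of
  (Star a, Star b)         -> Star (a + b)
  (Pair t1 u1, Pair t2 u2) -> Pair (norm (Sum t1 t2)) (norm (Sum u1 u2))
  (t', u')                 -> Sum t' u'  -- stuck: not closed proofs of a common A in V
norm (Scal a t) = case norm t of
  Star b     -> Star (a * b)
  Pair t1 t2 -> Pair (norm (Scal a t1)) (norm (Scal a t2))
  t'         -> Scal a t'

-- Definition 3.2: the zero vector 0_A.
zero :: Prop -> Term
zero One       = Star 0
zero (Amp a b) = Pair (zero a) (zero b)

-- Definition 3.3: the additive inverse -t (the one place where a field,
-- rather than a semi-ring, is needed).
neg :: Term -> Term
neg t = case norm t of
  Star a   -> Star (negate a)
  Pair u v -> Pair (neg u) (neg v)
  t'       -> t'  -- unreachable on closed proofs of a proposition of V
```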

Lemma 3.4. If $A \in{\mathcal V}$ and $t$ , $t_1$ , $t_2$ , and $t_3$ are closed proofs of $A$ , then

\begin{align*} \begin{array}{ll} (1)\quad (t_1 \boldsymbol{+} t_2) \boldsymbol{+} t_3 \equiv t_1 \boldsymbol{+} (t_2 \boldsymbol{+} t_3)\qquad \qquad & (5) \quad a \bullet b \bullet t \equiv (a \times b) \bullet t\\ (2) \quad t_1 \boldsymbol{+} t_2 \equiv t_2 \boldsymbol{+} t_1\qquad \qquad & (6) \quad 1 \bullet t \equiv t\\ (3)\quad t \boldsymbol{+} 0_A \equiv t \qquad \qquad & (7)\quad a \bullet (t_1 \boldsymbol{+} t_2) \equiv a \bullet t_1 \boldsymbol{+} a \bullet t_2\\ (4)\quad t \boldsymbol{+} - t \equiv 0_A \qquad \qquad & (8) \quad (a + b) \bullet t \equiv a \bullet t \boldsymbol{+} b \bullet t\\ \end{array} \end{align*}

Proof.

  1. By induction on $A$. If $A = {\mathfrak{1}}$, then $t_1$, $t_2$, and $t_3$ reduce, respectively, to $a.\star$, $b.\star$, and $c.\star$. We have

    \begin{equation*}(t_1 \boldsymbol {+} t_2) \boldsymbol {+} t_3 \longrightarrow ^* ((a + b) + c).\star = (a + (b + c)).\star \mathrel {{}^*{\longleftarrow }} t_1 \boldsymbol {+} (t_2 \boldsymbol {+} t_3)\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t_1$ , $t_2$ , and $t_3$ reduce, respectively, to $\langle u_1, v_1 \rangle$ , $\langle u_2, v_2 \rangle$ , and $\langle u_3, v_3 \rangle$ . Using the induction hypothesis, we have
    \begin{align*} (t_1 \boldsymbol{+} t_2) \boldsymbol{+} t_3 \longrightarrow ^* &\ \langle (u_1 \boldsymbol{+} u_2) \boldsymbol{+} u_3, (v_1 \boldsymbol{+} v_2) \boldsymbol{+} v_3 \rangle \\ \equiv &\ \langle u_1 \boldsymbol{+} (u_2 \boldsymbol{+} u_3), v_1 \boldsymbol{+} (v_2 \boldsymbol{+} v_3) \rangle \mathrel{{}^*{\longleftarrow }} t_1 \boldsymbol{+} (t_2 \boldsymbol{+} t_3) \end{align*}
  2. By induction on $A$. If $A = {\mathfrak{1}}$, then $t_1$ and $t_2$ reduce, respectively, to $a.\star$ and $b.\star$. We have

    \begin{equation*}t_1 \boldsymbol {+} t_2 \longrightarrow ^* (a + b).\star = (b + a).\star \mathrel {{}^*{\longleftarrow }} t_2 \boldsymbol {+} t_1\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t_1$ and $t_2$ reduce, respectively, to $\langle u_1, v_1 \rangle$ and $\langle u_2, v_2 \rangle$ . Using the induction hypothesis, we have
    \begin{equation*}t_1 \boldsymbol {+} t_2 \longrightarrow ^* \langle u_1 \boldsymbol {+} u_2, v_1 \boldsymbol {+} v_2 \rangle \equiv \langle u_2 \boldsymbol {+} u_1, v_2 \boldsymbol {+} v_1 \rangle \mathrel {{}^*{\longleftarrow }} t_2 \boldsymbol {+} t_1\end{equation*}
  3. By induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to $a.\star$. We have

    \begin{equation*}t \boldsymbol {+} 0_A \longrightarrow ^* (a + 0).\star = a.\star \mathrel {{}^*{\longleftarrow }} t\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t$ reduces to $\langle u, v \rangle$ . Using the induction hypothesis, we have
    \begin{equation*}t \boldsymbol {+} 0_A \longrightarrow ^* \langle u \boldsymbol {+} 0_{A_1}, v \boldsymbol {+} 0_{A_2} \rangle \equiv \langle u, v \rangle \mathrel {{}^*{\longleftarrow }} t\end{equation*}
  4. By induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to $a.\star$. We have

    \begin{equation*}t \boldsymbol {+} -t \longrightarrow ^* {a.\star } \boldsymbol {+} (-a).\star \longrightarrow (a+(-a)).\star = 0.\star = 0_A\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t$ reduces to $\langle u, v \rangle$ . Using the induction hypothesis, we have
    \begin{equation*}t \boldsymbol {+} - t \longrightarrow ^* \langle u \boldsymbol {+} - u, v \boldsymbol {+} - v \rangle \equiv \langle 0_{A_1}, 0_{A_2} \rangle = 0_A\end{equation*}
  5. By induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to $c.\star$. We have

    \begin{equation*}a \bullet b \bullet t \longrightarrow ^* (a \times (b \times c)).\star = ((a \times b) \times c).\star \mathrel {{}^*{\longleftarrow }} (a \times b) \bullet t\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t$ reduces to $\langle u, v \rangle$ . Using the induction hypothesis, we have
    \begin{equation*}a \bullet b \bullet t \longrightarrow ^* \langle a \bullet b \bullet u, a \bullet b \bullet v \rangle \equiv \langle (a \times b) \bullet u, (a \times b) \bullet v \rangle \mathrel {{}^*{\longleftarrow }} (a \times b) \bullet t\end{equation*}
  6. By induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to $a.\star$. We have

    \begin{equation*}1 \bullet t \longrightarrow ^*(1 \times a).\star = a.\star \mathrel {{}^*{\longleftarrow }} t\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t$ reduces to $\langle u, v \rangle$ . Using the induction hypothesis, we have
    \begin{equation*}1 \bullet t \longrightarrow ^* \langle 1 \bullet u, 1 \bullet v \rangle \equiv \langle u, v \rangle \mathrel {{}^*{\longleftarrow }} t\end{equation*}
  7. By induction on $A$. If $A = {\mathfrak{1}}$, then $t_1$ and $t_2$ reduce, respectively, to $b.\star$ and $c.\star$. We have

    \begin{equation*}a \bullet (t_1 \boldsymbol {+} t_2) \longrightarrow ^* (a \times (b + c)).\star = (a \times b + a \times c).\star \mathrel {{}^*{\longleftarrow }} a \bullet t_1 \boldsymbol {+} a \bullet t_2\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t_1$ and $t_2$ reduce, respectively, to $\langle u_1, v_1 \rangle$ and $\langle u_2, v_2 \rangle$ . Using the induction hypothesis, we have
    \begin{align*} a \bullet (t_1 \boldsymbol{+} t_2) \longrightarrow ^* &\ \langle a \bullet (u_1 \boldsymbol{+} u_2), a \bullet (v_1 \boldsymbol{+} v_2) \rangle \\ \equiv &\ \langle a \bullet u_1 \boldsymbol{+} a \bullet u_2, a \bullet v_1 \boldsymbol{+} a \bullet v_2 \rangle \mathrel{{}^*{\longleftarrow }} a \bullet t_1 \boldsymbol{+} a \bullet t_2 \end{align*}
  8. By induction on $A$. If $A = {\mathfrak{1}}$, then $t$ reduces to $c.\star$. We have

    \begin{equation*}(a + b) \bullet t \longrightarrow ^* ((a + b) \times c).\star = (a \times c + b \times c).\star \mathrel {{}^*{\longleftarrow }} a \bullet t \boldsymbol {+} b \bullet t\end{equation*}
    If $A = A_1 \,\&\, A_2$ , then $t$ reduces to $\langle u, v \rangle$ . Using the induction hypothesis, we have
    \begin{align*} (a + b) \bullet t \longrightarrow ^*\,&\langle (a + b) \bullet u, (a + b) \bullet v \rangle \\ \equiv \,& \langle a \bullet u \boldsymbol{+} b \bullet u, a \bullet v \boldsymbol{+} b \bullet v \rangle \mathrel{{}^*{\longleftarrow }} a \bullet t \boldsymbol{+} b \bullet t \end{align*}
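
On the Haskell sketch above, these equations can be checked on closed proofs by comparing irreducible forms (assuming confluence, two closed proofs are $\equiv$-equivalent exactly when their irreducible forms coincide); for instance, in GHCi, with values assumed:

```haskell
-- Items 1 and 7 of Lemma 3.4 on three proofs of 1 & 1:
-- >>> let [t1,t2,t3] = [Pair (Star a) (Star (a+1)) | a <- [1,3,5]]
-- >>> norm (Sum (Sum t1 t2) t3) == norm (Sum t1 (Sum t2 t3))
-- True
-- >>> norm (Scal 2 (Sum t1 t2)) == norm (Sum (Scal 2 t1) (Scal 2 t2))
-- True
```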

Definition 3.5 (Dimension of a proposition in $\mathcal V$ ). To each proposition $A \in{\mathcal V}$ , we associate a positive natural number $d(A)$ , which is the number of occurrences of the symbol $\mathfrak{1}$ in $A$ : $d({\mathfrak{1}}) = 1$ and $d(B \,\&\, C) = d(B) + d(C)$ .

If $A \in{\mathcal V}$ and $d(A) = n$ , then the closed irreducible proofs of $A$ and the vectors of ${\mathcal S}^n$ are in one-to-one correspondence: to each closed irreducible proof $t$ of $A$ , we associate a vector $\underline{t}$ of ${\mathcal S}^n$ and to each vector $\textbf{ u}$ of ${\mathcal S}^n$ , we associate a closed irreducible proof $\overline{\textbf{ u}}^A$ of $A$ .

Definition 3.6 (One-to-one correspondence). Let $A \in{\mathcal V}$ with $d(A) = n$ . To each closed irreducible proof $t$ of $A$ , we associate a vector $\underline{t}$ of ${\mathcal S}^n$ as follows.

  • If $A ={\mathfrak{1}}$ , then $t = a.\star$ . We let $\underline{t} = \left (\begin{smallmatrix} a \end{smallmatrix}\right )$ .

  • If $A = A_1 \,\&\, A_2$, then $t = \langle u, v \rangle$. We let $\underline{t}$ be the vector with two blocks $\underline{u}$ and $\underline{v}$: $\underline{t} = \left (\begin{smallmatrix} \underline{u}\\\underline{v} \end{smallmatrix}\right )$. Recall that, using the block notation, if $\textbf{ u} = \left (\begin{smallmatrix} 1\\2 \end{smallmatrix}\right )$ and $\textbf{ v} = \left (\begin{smallmatrix} 3 \end{smallmatrix}\right )$, then $\left (\begin{smallmatrix} \textbf{ u}\\\textbf{ v} \end{smallmatrix}\right ) = \left (\begin{smallmatrix} 1\\2\\3 \end{smallmatrix}\right )$ and not $\left (\begin{smallmatrix} \left (\begin{smallmatrix} 1\\2 \end{smallmatrix}\right ) \\ \left (\begin{smallmatrix} 3 \end{smallmatrix}\right ) \end{smallmatrix}\right )$.

To each vector $\textbf{ u}$ of ${\mathcal S}^n$ , we associate a closed irreducible proof $\overline{\textbf{ u}}^A$ of $A$ .

  • If $n = 1$ , then $\textbf{ u} = \left (\begin{smallmatrix} a \end{smallmatrix}\right )$ . We let $\overline{\textbf{ u}}^A = a.\star$ .

  • If $n \gt 1$ , then $A = A_1 \,\&\, A_2$ , let $n_1$ and $n_2$ be the dimensions of $A_1$ and $A_2$ . Let $\textbf{ u}_1$ and $\textbf{ u}_2$ be the two blocks of $\textbf{ u}$ of $n_1$ and $n_2$ lines, so $\textbf{ u} = \Big(\begin{array}{c}\textbf{ u}_1\\[-5pt] \textbf{ u}_2\end{array}\Big)$ . We let $\overline{\textbf{ u}}^A = \langle \overline{\textbf{ u}_1}^{A_1}, \overline{\textbf{ u}_2}^{A_2} \rangle$ .

We extend the definition of $\underline{t}$ to any closed proof of $A$: $\underline{t}$ is, by definition, $\underline{t'}$, where $t'$ is the irreducible form of $t$.
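
Continuing the Haskell sketch (and reusing its Prop, Term, and norm), the dimension $d$ of Definition 3.5 and both directions of the correspondence of Definition 3.6 transcribe as follows; the vector $\left (\begin{smallmatrix} a\\b \end{smallmatrix}\right )$ is represented as the list [a, b].

```haskell
-- Definition 3.5: d(A), the number of occurrences of 1 in A.
dim :: Prop -> Int
dim One       = 1
dim (Amp a b) = dim a + dim b

-- Definition 3.6, first direction: the vector associated with a closed
-- proof of A (normalizing first, as in the extension to closed proofs).
toVec :: Term -> [Double]
toVec t = case norm t of
  Star a   -> [a]
  Pair u v -> toVec u ++ toVec v
  _        -> error "not a closed proof of a proposition of V"

-- Definition 3.6, second direction: the closed irreducible proof of A
-- associated with a vector of dimension d(A).
fromVec :: Prop -> [Double] -> Term
fromVec One       [a] = Star a
fromVec (Amp a b) us  = let (u1, u2) = splitAt (dim a) us
                        in Pair (fromVec a u1) (fromVec b u2)
fromVec _         _   = error "dimension mismatch"
```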

The next lemmas show that the symbol $\boldsymbol{+}$ expresses the sum of vectors and the symbol $\bullet$ , the product of a vector by a scalar.

Lemma 3.7 (Sum of two vectors). Let $A \in{\mathcal V}$ , and $u$ and $v$ be two closed proofs of $A$ . Then, $\underline{u \boldsymbol{+} v} = \underline{u} + \underline{v}$ .

Proof. By induction on $A$ .

  • If $A ={\mathfrak{1}}$ , then $u \longrightarrow ^* a.\star$ , $v \longrightarrow ^* b.\star$ , $\underline{u} = \left (\begin{smallmatrix} a \end{smallmatrix}\right )$ , $\underline{v} = \left (\begin{smallmatrix} b \end{smallmatrix}\right )$ . Thus, $\underline{u \boldsymbol{+} v} = \underline{{a.\star } \boldsymbol{+}{b.\star }} = \underline{(a + b).\star } = \left (\begin{smallmatrix} a + b \end{smallmatrix}\right ) = \left (\begin{smallmatrix} a \end{smallmatrix}\right ) + \left (\begin{smallmatrix} b \end{smallmatrix}\right ) = \underline{u} + \underline{v}$ .

  • If $A = A_1 \,\&\, A_2$, then $u \longrightarrow ^* \langle u_1, u_2 \rangle$, $v \longrightarrow ^* \langle v_1, v_2 \rangle$, $\underline{u} = \left (\begin{smallmatrix} \underline{u_1} \\ \underline{u_2} \end{smallmatrix}\right )$ and $\underline{v} = \left (\begin{smallmatrix} \underline{v_1} \\ \underline{v_2} \end{smallmatrix}\right )$. Thus, using the induction hypothesis, $\underline{u \boldsymbol{+} v} = \underline{\langle u_1, u_2 \rangle \boldsymbol{+} \langle v_1, v_2 \rangle } = \underline{\langle u_1 \boldsymbol{+} v_1, u_2 \boldsymbol{+} v_2 \rangle } = \left (\begin{smallmatrix} \underline{u_1 \boldsymbol{+} v_1} \\ \underline{u_2 \boldsymbol{+} v_2} \end{smallmatrix}\right ) = \left (\begin{smallmatrix} \underline{u_1} + \underline{v_1} \\ \underline{u_2} + \underline{v_2} \end{smallmatrix}\right ) = \left (\begin{smallmatrix} \underline{u_1} \\ \underline{u_2} \end{smallmatrix}\right ) + \left (\begin{smallmatrix} \underline{v_1} \\ \underline{v_2} \end{smallmatrix}\right ) = \underline{u} + \underline{v}$.

Lemma 3.8 (Product of a vector by a scalar). Let $A \in{\mathcal V}$ and $u$ be a closed proof of $A$ . Then, $\underline{a \bullet u} = a \underline{u}$ .

Proof. By induction on $A$ .

  • If $A = {\mathfrak{1}}$, then $u \longrightarrow ^* b.\star$ and $\underline{u} = \left (\begin{smallmatrix} b \end{smallmatrix}\right )$. Thus, $\underline{a \bullet u} = \underline{a \bullet b.\star } = \underline{(a \times b).\star } = \left (\begin{smallmatrix} a \times b \end{smallmatrix}\right ) = a \left (\begin{smallmatrix} b \end{smallmatrix}\right ) = a \underline{u}$.

  • If $A = A_1 \,\&\, A_2$ , then $u \longrightarrow ^* \langle u_1, u_2 \rangle$ , $\underline{u} = \left (\begin{smallmatrix} \underline{u_1} \\ \underline{u_2} \end{smallmatrix}\right )$ . Thus, using the induction hypothesis, $\underline{a \bullet u} = \underline{a \bullet \langle u_1, u_2 \rangle } = \underline{\langle a \bullet u_1, a \bullet u_2 \rangle } = \left (\begin{smallmatrix} \underline{a \bullet u_1} \\ \underline{a \bullet u_2} \end{smallmatrix}\right ) = \left (\begin{smallmatrix} a \underline{u_1} \\ a \underline{u_2} \end{smallmatrix}\right ) = a \left (\begin{smallmatrix} \underline{u_1} \\ \underline{u_2} \end{smallmatrix}\right ) = a \underline{u}$

Remark 3.9. We have seen that the rules

\begin{equation*} \begin {array}{r@{\,}l@{\qquad \qquad }r@{\,}l} {a.\star } \boldsymbol {+} b.\star & \longrightarrow (a+b).\star & a \bullet b.\star & \longrightarrow (a \times b).\star \\ \langle t, u \rangle \boldsymbol {+} \langle v, w \rangle & \longrightarrow \langle t \boldsymbol {+} v, u \boldsymbol {+} w \rangle & a \bullet \langle t, u \rangle & \longrightarrow \langle a \bullet t, a \bullet u \rangle \end {array} \end{equation*}

are commutation rules between the interstitial rules sum and prod and the introduction rules $\mathfrak{1}$-i and $\&$-i.

Now, these rules appear to be also vector calculation rules.
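
On the sketch, this reading is immediate: the GHCi session below (values assumed) instantiates Lemmas 3.7 and 3.8.

```haskell
-- underline(u + v) = underline(u) + underline(v), and
-- underline(2 . u) = 2 underline(u), on proofs of 1 & 1:
-- >>> let u = Pair (Star 1) (Star 2); v = Pair (Star 3) (Star 4)
-- >>> toVec (Sum u v)
-- [4.0,6.0]
-- >>> toVec (Scal 2 u)
-- [2.0,4.0]
```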

3.2 Matrices

We now want to prove that if $A, B \in{\mathcal V}$ with $d(A) = m$ and $d(B) = n$ , and $F$ is a linear function from ${\mathcal S}^m$ to ${\mathcal S}^n$ , then there exists a closed proof $f$ of $A \multimap B$ such that, for all vectors $\textbf{ u} \in{\mathcal S}^m$ , $\underline{f\,\overline{\textbf{ u}}^A} = F(\textbf{ u})$ . This can equivalently be formulated as the fact that if $M$ is a matrix with $m$ columns and $n$ lines, then there exists a closed proof $f$ of $A \multimap B$ such that for all vectors $\textbf{ u} \in{\mathcal S}^m$ , $\underline{f\,\overline{\textbf{ u}}^A} = M \textbf{ u}$ .

A similar theorem has been proved in Díaz-Caro and Dowek (Reference Díaz-Caro and Dowek2023) for a nonlinear calculus. The proof of the following theorem is just a check that the construction given there verifies the linearity constraints of the ${\mathcal L}^{\mathcal S}$ -calculus.

Theorem 3.10 (Matrices). Let $A, B \in{\mathcal V}$ with $d(A) = m$ and $d(B) = n$ and let $M$ be a matrix with $m$ columns and $n$ lines, then there exists a closed proof $t$ of $A \multimap B$ such that, for all vectors $\textbf{ u} \in{\mathcal S}^m$ , $\underline{t\,\overline{\textbf{ u}}^A} = M \textbf{ u}$ .

Proof. By induction on $A$ .

  • If $A ={\mathfrak{1}}$ , then $M$ is a matrix of one column and $n$ lines. Hence, it is also a vector of $n$ lines. We take

    \begin{equation*}t = \lambda x. \delta _{{\mathfrak {1}}}(x,\overline {M}^B)\end{equation*}
    Let $\textbf{ u} \in{\mathcal S}^1$ , $\textbf{ u}$ has the form $\left (\begin{smallmatrix} a \end{smallmatrix}\right )$ and $\overline{\textbf{ u}}^A = a.\star$ . Then, using Lemma 3.8, we have $\underline{t\,\overline{\textbf{ u}}^A} = \underline{\delta _{{\mathfrak{1}}}(\overline{\textbf{ u}}^A,\overline{M}^B)} = \underline{\delta _{{\mathfrak{1}}}(a.\star,\overline{M}^B)} = \underline{a \bullet \overline{M}^B} = a \underline{\overline{M}^B} = a M = M \left (\begin{smallmatrix} a\end{smallmatrix}\right ) = M \textbf{ u}$ .
  • If $A = A_1 \,\&\, A_2$ , then let $d(A_1) = m_1$ and $d(A_2) = m_2$ . Let $M_1$ and $M_2$ be the two blocks of $M$ of $m_1$ and $m_2$ columns, so $M = \left (\begin{smallmatrix} M_1 & M_2\end{smallmatrix}\right )$ . By induction hypothesis, there exist closed proofs $t_1$ and $t_2$ of the propositions $A_1 \multimap B$ and $A_2 \multimap B$ such that, for all vectors $\textbf{ u}_1 \in{\mathcal S}^{m_1}$ and $\textbf{ u}_2 \in{\mathcal S}^{m_2}$ , we have $\underline{t_1\,\overline{\textbf{ u}_1}^{A_1}} = M_1 \textbf{ u}_1$ and $\underline{t_2\,\overline{\textbf{ u}_2}^{A_ 2}} = M_2 \textbf{ u}_2$ . We take

    \begin{equation*}t = \lambda x. (\delta _{\&}^1(x,y. (t_1\,y)) \boldsymbol {+} \delta _{\&}^2(x, z. (t_2\,z)))\end{equation*}
    Let $\textbf{ u} \in{\mathcal S}^m$ , and $\textbf{ u}_1$ and $\textbf{ u}_2$ be the two blocks of $m_1$ and $m_2$ lines of $\textbf{ u}$ , so $\textbf{ u} = \Big(\begin{array}{c}\textbf{ u}_1\\[-5pt] \textbf{ u}_2\end{array}\Big)$ , and $\overline{\textbf{ u}}^A = \langle \overline{\textbf{ u}_1}^{A_1}, \overline{\textbf{ u}_2}^{A_ 2} \rangle$ . Then, using Lemma 3.7, $\underline{t\,\overline{\textbf{ u}}^A} = \underline{\delta _{\&}^1(\langle \overline{\textbf{ u}_1}^{A_1}, \overline{\textbf{ u}_2}^{A_ 2} \rangle, y. (t_1\,y)) \boldsymbol{+} \delta _{\&}^2(\langle \overline{\textbf{ u}_1}^{A_1}, \overline{\textbf{ u}_2}^{A_ 2} \rangle, z. (t_2\,z))} = \underline{(t_1\,\overline{\textbf{ u}_1}^{A_1}) \boldsymbol{+} (t_2\,\overline{\textbf{ u}_2}^{A_ 2})} = \underline{t_1\,\overline{\textbf{ u}_1}^{A_1}} + \underline{t_2\,\overline{\textbf{ u}_2}^{A_ 2}} = M_1 \!\textbf{ u}_1 + M_2 \!\textbf{ u}_2 = \left (\begin{smallmatrix} M_1 & M_2 \end{smallmatrix}\right ) \Big(\begin{array}{c}\textbf{ u}_1\\[-5pt] \textbf{ u}_2\end{array}\Big) = M \!\textbf{ u}$ .

Remark 3.11. In the proofs $\delta _{{\mathfrak{1}}}(x,\overline{M}^B)$ , $\delta _{\&}^1(x,y. (t_1\,y))$ , and $\delta _{\&}^2(x,z. (t_2\,z))$ , the variable $x$ occurs in one argument of the symbols $\delta _{{\mathfrak{1}}}$ , $\delta _{\&}^1$ , and $\delta _{\&}^2$ , but not in the other. In contrast, in the proof $\delta _{\&}^1(x,y. (t_1\,y)) \boldsymbol{+} \delta _{\&}^2(x, z. (t_2\,z))$ , it occurs in both arguments of the symbol $\boldsymbol{+}$ . Thus, these proofs are well-formed proofs in the system of Fig. 1.

Remark 3.12. The rules

\begin{align*} \delta _{{\mathfrak{1}}}(a.\star,t) & \longrightarrow a \bullet t & \delta _{\&}^1(\langle t, u \rangle, x.v) & \longrightarrow (t/x)v \\ (\lambda x.t)\,u & \longrightarrow (u/x)t & \delta _{\&}^2(\langle t, u \rangle, x.v) & \longrightarrow (u/x)v \end{align*}

were introduced as cut reduction rules.

Now, these rules appear to be also matrix calculation rules.

Example 3.13 (Matrices with two columns and two lines). The matrix $\left (\begin{smallmatrix} a & c\\b & d \end{smallmatrix}\right )$ is expressed as the proof

\begin{equation*}t = \lambda x. \delta _{\&}^1(x,y. \delta _{{\mathfrak {1}}}(y,\langle a.\star, b.\star \rangle )) \boldsymbol {+} \delta _{\&}^2(x,z. \delta _{{\mathfrak {1}}}(z,\langle c.\star, d.\star \rangle ))\end{equation*}

Then

\begin{align*} t\,\langle e.\star, f.\star \rangle & \longrightarrow \delta _{\&}^1(\langle e.\star, f.\star \rangle,y. \delta _{{\mathfrak{1}}}(y,\langle a.\star, b.\star \rangle )) \boldsymbol{+} \delta _{\&}^2(\langle e.\star, f.\star \rangle,z. \delta _{{\mathfrak{1}}}(z,\langle c.\star, d.\star \rangle ))\\ & \longrightarrow ^* \delta _{{\mathfrak{1}}}(e.\star,\langle a.\star, b.\star \rangle ) \boldsymbol{+} \delta _{{\mathfrak{1}}}(f.\star,\langle c.\star, d.\star \rangle )\\ & \longrightarrow ^* e \bullet \langle a.\star, b.\star \rangle \boldsymbol{+} f \bullet \langle c.\star, d.\star \rangle \\ & \longrightarrow ^* \langle (a \times e).\star, (b \times e).\star \rangle \boldsymbol{+} \langle (c \times f).\star, (d \times f).\star \rangle \\ & \longrightarrow ^* \langle (a \times e + c \times f).\star, (b \times e + d \times f).\star \rangle \end{align*}
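
The action of the proof built in Theorem 3.10 can also be sketched on the Haskell fragment above. The function below does not build the proof term $t$ itself; it mirrors the recursion of the proof of Theorem 3.10, with the matrix given as the list of its $m$ columns so that the block decomposition $M = \left (\begin{smallmatrix} M_1 & M_2 \end{smallmatrix}\right )$ of the $\&$ case is a splitAt. Names and representation are assumptions of the sketch.

```haskell
-- matApply a b cols u computes the irreducible form of t overline(u)^A,
-- where t is the proof of A -o B built in Theorem 3.10 for the matrix
-- whose columns are cols (each of length n = dim b).
matApply :: Prop -> Prop -> [[Double]] -> Term -> Term
matApply One b [col] u = case norm u of
  Star a -> norm (Scal a (fromVec b col))  -- delta_1(a.*, M^B) --> a . M^B
  _      -> error "argument is not a closed proof of 1"
matApply (Amp a1 a2) b cols u = case norm u of
  Pair u1 u2 ->
    let (c1, c2) = splitAt (dim a1) cols   -- M = (M1 M2)
    in norm (Sum (matApply a1 b c1 u1) (matApply a2 b c2 u2))
  _ -> error "argument is not a closed proof of A1 & A2"
matApply _ _ _ _ = error "shape mismatch"

-- Example 3.13 with a = 0, b = 1, c = 2, d = 3 and e = 10, f = 100:
-- >>> let m = [[0,1],[2,3]]; ef = Pair (Star 10) (Star 100)
-- >>> toVec (matApply (Amp One One) (Amp One One) m ef)
-- [200.0,310.0]
```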

4. Linearity

4.1 Observational equivalence

We now prove the converse: if $A, B \in{\mathcal V}$ , then each closed proof $t$ of $A \multimap B$ expresses a linear function, that is, that if $u_1$ and $u_2$ are closed proofs of $A$ , then

\begin{equation*}t\,(u_1 \boldsymbol {+} u_2) \equiv t\,u_1 \boldsymbol {+} t\,u_2 \qquad \qquad \textrm {and}\qquad \qquad t\,(a \bullet u_1) \equiv a \bullet t\,u_1\end{equation*}

The fact that we want all proofs of ${\mathfrak{1}} \multimap{\mathfrak{1}}$ to be linear functions from $\mathcal S$ to $\mathcal S$ explains why the rule sum must be additive. If it were multiplicative, the proposition ${\mathfrak{1}} \multimap{\mathfrak{1}}$ would have the proof $g = \lambda x.{{x} \boldsymbol{+}{1.\star }}$, which is not linear, as $g\,({1.\star } \boldsymbol{+}{1.\star }) \longrightarrow ^* 3.\star \not \equiv 4.\star \mathrel{{}^*{\longleftarrow }}{(g\,1.\star )} \boldsymbol{+}{(g\,1.\star )}$.
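
Spelling out the reductions under this hypothetical multiplicative variant of the rule makes the failure explicit:

\begin{align*} g\,({1.\star } \boldsymbol{+}{1.\star }) & \longrightarrow g\,{2.\star } \longrightarrow{2.\star } \boldsymbol{+}{1.\star } \longrightarrow 3.\star \\{(g\,1.\star )} \boldsymbol{+}{(g\,1.\star )} & \longrightarrow ^* ({1.\star } \boldsymbol{+}{1.\star }) \boldsymbol{+} ({1.\star } \boldsymbol{+}{1.\star }) \longrightarrow ^*{2.\star } \boldsymbol{+}{2.\star } \longrightarrow 4.\star \end{align*}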

In fact, instead of proving that for all closed proofs $t$ of $A \multimap B$

\begin{equation*}t\,(u_1 \boldsymbol {+} u_2) \equiv t\,u_1 \boldsymbol {+} t\,u_2 \qquad \qquad \textrm {and}\qquad \qquad t\,(a \bullet u_1) \equiv a \bullet t\,u_1\end{equation*}

it is more convenient to prove the equivalent statement that, for every proof $t$ such that $x:A \vdash t:B$,

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \equiv t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a \bullet u_1\} \equiv a \bullet t\{u_1\}\end{equation*}

We can attempt to generalize this statement and prove that these properties hold for all proofs, whatever the proved proposition. But this generalization is too strong for two reasons. First, as we have seen, the sum rule does not commute with the introduction rules of $\otimes$ and $\oplus$ . Thus, if, for example, $A ={\mathfrak{1}}$ , $B ={\mathfrak{1}} \oplus{\mathfrak{1}}$ , and $t = \textit{inl}(x)$ , we have

\begin{equation*}t\{{1.\star } \boldsymbol {+} 1.\star \} \longrightarrow \textit {inl}({2.\star }) \qquad \qquad \textrm {and}\qquad \qquad t\{1.\star \} \boldsymbol {+} t\{1.\star \} = \textit {inl}(1.\star ) \boldsymbol {+} \textit {inl}(1.\star )\end{equation*}

and these two irreducible proofs are different. Second, the introduction rule of $\multimap$ introduces a free variable. Thus, if, for example, $A ={\mathfrak{1}}$ , $B = ({\mathfrak{1}} \multimap{\mathfrak{1}}) \multimap{\mathfrak{1}}$ , and $t = \lambda y. y\,x$ , we have

\begin{equation*}t\,\{{1.\star } \boldsymbol {+} 2.\star \} \longrightarrow \lambda y. y\,3.{\star } \qquad \qquad \textrm {and}\qquad \qquad t\{1.\star \} \boldsymbol {+} t\{2.\star \} \longrightarrow \lambda y. {(y\,1.\star )} \boldsymbol {+} {(y\,2.\star )}\end{equation*}

and these two irreducible proofs are different.

Note, however, that, although the proofs $\textit{inl}({2.\star })$ and $\textit{inl}(1.\star ) \boldsymbol{+} \textit{inl}(1.\star )$ of ${\mathfrak{1}} \oplus{\mathfrak{1}}$ are different, if we put them in the context $\delta _{\oplus }(\_,x.x,y.y)$, then both proofs $\delta _{\oplus }(\textit{inl}(2.\star ),x.x,y.y)$ and $\delta _{\oplus }(\textit{inl}(1.\star ) \boldsymbol{+} \textit{inl}(1.\star ),x.x,y.y)$ reduce to $2.\star$. In the same way, although the proofs $\lambda y. y\,3.\star$ and $\lambda y.{(y\,1.\star )} \boldsymbol{+}{(y\,2.\star )}$ are different, if we put them in the context $\_\,\lambda z. z$, then both proofs $(\lambda y. y\,3.\star )\,\lambda z. z$ and $(\lambda y.{(y\,1.\star )} \boldsymbol{+}{(y\,2.\star )})\,\lambda z. z$ reduce to $3.\star$. This leads us to introduce a notion of observational equivalence.

Definition 4.1 (Observational equivalence). Two proofs $t_1$ and $t_2$ of a proposition $B$ are observationally equivalent, $t_1 \sim t_2$ , if for all propositions $C$ in $\mathcal{V}$ and for all proofs $c$ such that $\_:B \vdash c:C$ , we have

\begin{equation*}c\{t_1\} \equiv c\{t_2\}\end{equation*}

We want to prove that for all proofs $t$ such that $x:A \vdash t:B$ and for all closed proofs $u_1$ and $u_2$ of $A$ , we have

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \sim t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a\bullet u_1\} \sim a \bullet t\{u_1\}\end{equation*}

However, a proof of this property by induction on $t$ does not go through and, to prove it, we first prove, in Theorem 4.10, that for all proofs $t$ such that $x:A \vdash t:B$ with $B \in \mathcal{V}$, and for all closed proofs $u_1$ and $u_2$ of $A$,

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \equiv t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a\bullet u_1\} \equiv a \bullet t\{u_1\}\end{equation*}

and we deduce the result for observational equivalence in Corollary 4.11.

4.2 Measure of a proof

The proof of Theorem 4.10 proceeds by induction on the measure of the proof $t$, and the first part of this proof is the definition of such a measure function $\mu$. Our goal could be to build a measure function such that if $t$ is a proof of $B$ in a context $\Gamma, x:A$ and $u$ is a proof of $A$, then $\mu ((u/x)t) = \mu (t) + \mu (u)$. This would be the case, for the usual notion of size, if $x$ had exactly one occurrence in $t$. But, due to the additive connectives, the variable $x$ may have zero, one, or several occurrences in $t$.

First, as the rule $\mathfrak{0}$ -e is additive, it may happen that $\delta _{{\mathfrak{0}}}(t)$ is a proof in the context $\Gamma, x:A$ , and $x$ has no occurrence in $t$ . Thus, we lower our expectations to $\mu ((u/x)t) \leq \mu (t) + \mu (u)$ , which is sufficient to prove the linearity theorem.

Then, as the rules $\boldsymbol{+}$, $\&$-i, and $\oplus$-e are additive, if $u \boldsymbol{+} v$ is a proof of $B$ in a context $\Gamma,{x:A}$, $x$ may occur both in $u$ and in $v$. The same holds for the proofs $\langle u, v \rangle$ and $\delta _{\oplus }(t,x.u,y.v)$. In these cases, we modify the definition of the measure function and take $\mu (t \boldsymbol{+} u) = 1 + \max (\mu (t), \mu (u))$, instead of $\mu (t \boldsymbol{+} u) = 1 + \mu (t) + \mu (u)$, etc., making the function $\mu$ a mix between a size function and a depth function. This leads to the following definition.

Definition 4.2 (Measure of a proof).

\begin{align*} \mu (x) &= 0 & \mu (\langle \rangle ) &= 1\\ \mu (t \boldsymbol{+} u) &= 1 + \max (\mu (t), \mu (u)) & \mu (\delta _{{\mathfrak{0}}}(t)) &= 1 + \mu (t)\\ \mu (a \bullet t) &= 1 + \mu (t) & \mu (\langle t, u \rangle ) &= 1 + \max (\mu (t), \mu (u))\\ \mu (a.\star ) &= 1 & \mu (\delta _{\&}^1(t,y.u)) &= 1 + \mu (t) + \mu (u)\\ \mu (\delta _{{\mathfrak{1}}}(t,u)) &= 1 + \mu (t) + \mu (u) & \mu (\delta _{\&}^2(t,y.u)) &= 1 + \mu (t) + \mu (u)\\ \mu (\lambda x. t) &= 1 + \mu (t) & \mu (\textit{inl}(t)) &= 1 + \mu (t)\\ \mu (t\,u) &= 1 + \mu (t) + \mu (u) & \mu (\textit{inr}(t)) &= 1 + \mu (t)\\ \mu (t \otimes u) &= 1 + \mu (t) + \mu (u) & \mu (\delta _{\oplus }(t,y.u,z.v)) &= 1 + \mu (t) + \max (\mu (u), \mu (v)) \\ \mu (\delta _{\otimes }(t,xy. u)) &= 1 + \mu (t) + \mu (u) & \end{align*}
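
The measure transcribes directly. The Haskell sketch below is self-contained and independent of the vector fragment of Section 3 (it reuses some of its constructor names); the names for the full syntax of proofs are hypothetical.

```haskell
-- The full proof syntax of the L^S-calculus and the measure mu of
-- Definition 4.2. Binders are represented naively as (name, body) pairs.
data Proof
  = Var String
  | Sum Proof Proof                                  -- t + u
  | Scal Double Proof                                -- a . t
  | Star Double                                      -- a.*
  | DeltaOne Proof Proof                             -- delta_1(t, u)
  | Lam String Proof                                 -- \x. t
  | App Proof Proof                                  -- t u
  | Tens Proof Proof                                 -- t (x) u
  | DeltaTens Proof (String, String, Proof)          -- delta_(x)(t, xy.u)
  | Unit                                             -- <>
  | DeltaZero Proof                                  -- delta_0(t)
  | Pair Proof Proof                                 -- <t, u>
  | DeltaAmp1 Proof (String, Proof)                  -- delta_&^1(t, y.u)
  | DeltaAmp2 Proof (String, Proof)                  -- delta_&^2(t, y.u)
  | Inl Proof | Inr Proof
  | DeltaPlus Proof (String, Proof) (String, Proof)  -- delta_(+)(t, y.u, z.v)

-- mu: additive everywhere, except a max over the arguments that may
-- share the context (+, <_,_>, and the two branches of delta_(+)):
-- a mix of a size function and a depth function.
mu :: Proof -> Int
mu (Var _)                     = 0
mu (Sum t u)                   = 1 + max (mu t) (mu u)
mu (Scal _ t)                  = 1 + mu t
mu (Star _)                    = 1
mu (DeltaOne t u)              = 1 + mu t + mu u
mu (Lam _ t)                   = 1 + mu t
mu (App t u)                   = 1 + mu t + mu u
mu (Tens t u)                  = 1 + mu t + mu u
mu (DeltaTens t (_, _, u))     = 1 + mu t + mu u
mu Unit                        = 1
mu (DeltaZero t)               = 1 + mu t
mu (Pair t u)                  = 1 + max (mu t) (mu u)
mu (DeltaAmp1 t (_, u))        = 1 + mu t + mu u
mu (DeltaAmp2 t (_, u))        = 1 + mu t + mu u
mu (Inl t)                     = 1 + mu t
mu (Inr t)                     = 1 + mu t
mu (DeltaPlus t (_, u) (_, v)) = 1 + mu t + max (mu u) (mu v)
```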

Lemma 4.3. If $\Gamma, x:A \vdash t:B$ and $\Delta \vdash u:A$ then $\mu ((u/x)t) \leq \mu (t)+\mu (u)$ .

Proof. By induction on $t$ .

  • If $t$ is a variable, then $\Gamma$ is empty, $t = x$ , $(u/x)t = u$ and $\mu (t) = 0$ . Thus, $\mu ((u/x)t) = \mu (u) = \mu (t)+\mu (u)$ .

  • If $t = t_1 \boldsymbol{+} t_2$ , then $\Gamma, x:A \vdash t_1:B$ , $\Gamma, x:A \vdash t_2:B$ . Using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \max (\mu ((u/x)t_1),\mu ((u/x)t_2)) \leq 1 + \max (\mu (t_1) + \mu (u), \mu (t_2) + \mu (u)) = \mu (t) + \mu (u)$ .

  • If $t = a \bullet t_1$ , then $\Gamma, x:A \vdash t_1:B$ . Using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \mu ((u/x)t_1) \leq 1 + \mu (t_1) + \mu (u) = \mu (t) + \mu (u)$ .

  • The proof $t$ cannot be of the form $a.\star$ , that is not a proof in $\Gamma, x:A$ .

  • If $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ , then $\Gamma = \Gamma _1, \Gamma _2$ and there are two cases.

    1. If $\Gamma _1, x:A \vdash t_1:{\mathfrak{1}}$ and $\Gamma _2 \vdash t_2:B$ , then, using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \mu ((u/x)t_1) + \mu (t_2) \leq 1 + \mu (t_1) + \mu (u) + \mu (t_2) = \mu (t) + \mu (u)$ .

    2. If $\Gamma _1 \vdash t_1:{\mathfrak{1}}$ and $\Gamma _2, x:A \vdash t_2:B$ , then, using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \mu (t_1) + \mu ((u/x)t_2) \leq 1 + \mu (t_1) + \mu (t_2) + \mu (u) = \mu (t) + \mu (u)$ .

  • If $t = \lambda y. t_1$ , we apply the same method as for the case $t = a \bullet t_1$ .

  • If $t = t_1\,t_2$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = t_1 \otimes t_2$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \delta _{\otimes }(t_1, yz. t_2)$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \langle \rangle$ , then $\mu ((u/x)t) = 1 \leq 1 + \mu (u) = \mu (t) + \mu (u)$ .

  • If $t = \delta _{{\mathfrak{0}}}(t_1)$ , then $\Gamma = \Gamma _1, \Gamma _2$ and there are two cases.

    1. If $\Gamma _1, x:A \vdash t_1:{\mathfrak{0}}$ , we apply the same method as for the case $t = a \bullet t_1$ .

    2. If $\Gamma _1 \vdash t_1:{\mathfrak{0}}$, then we get $\mu ((u/x)t) = \mu (t) \leq \mu (t) + \mu (u)$.

  • If $t = \langle t_1, t_2 \rangle$ , we apply the same method as for the case $t = t_1 \boldsymbol{+} t_2$ .

  • If $t = \delta _{\&}^1(t_1,y.t_2)$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \delta _{\&}^2(t_1,y.t_2)$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \textit{inl}(t_1)$ or $t = \textit{inr}(t_1)$ , we apply the same method as for the case $t = a \bullet t_1$ .

  • If $t = \delta _{\oplus }(t_1,y.t_2,z.t_3)$ then $\Gamma = \Gamma _1, \Gamma _2$ and there are two cases.

    1. If $\Gamma _1, x:A \vdash t_1:C_1 \oplus C_2$, $\Gamma _2, y:C_1 \vdash t_2:B$, $\Gamma _2, z:C_2 \vdash t_3:B$, then, using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \mu ((u/x)t_1) + \max (\mu (t_2),\mu (t_3)) \leq 1 + \mu (t_1) + \mu (u) + \max (\mu (t_2),\mu (t_3)) = \mu (t) + \mu (u)$.

    2. If $\Gamma _1 \vdash t_1:C_1 \oplus C_2$, $\Gamma _2, y:C_1, x:A \vdash t_2:B$, $\Gamma _2, z:C_2, x:A \vdash t_3:B$, then, using the induction hypothesis, we get $\mu ((u/x)t) = 1 + \mu (t_1) + \max (\mu ((u/x)t_2),\mu ((u/x)t_3)) \leq 1 + \mu (t_1) + \max (\mu (t_2) + \mu (u),\mu (t_3) + \mu (u)) = 1 + \mu (t_1) + \max (\mu (t_2),\mu (t_3)) + \mu (u) = \mu (t) + \mu (u)$.

Example 4.4. Let $t = \delta _{{\mathfrak{0}}}(y)$ and $u = 1.\star$ . We have $y:{\mathfrak{0}}, x:{\mathfrak{1}} \vdash t:C$ , $\mu (t) = 1$ , $\mu (u) = 1$ and $\mu ((u/x)t) = 1$ . Thus, $\mu ((u/x)t) \leq \mu (t) + \mu (u)$ .

As a corollary, we get that the measure does not increase under reduction.

Lemma 4.5. If $\Gamma \vdash t:A$ and $t \longrightarrow u$ , then $\mu (t) \geq \mu (u)$ .

Proof. By induction on $t$. The context cases are trivial because the functions used to define $\mu (t)$ from the measures of the subterms of $t$ are monotone. We check the rules one by one, using Lemma 4.3.

  • $\mu (\delta _{{\mathfrak{1}}}(a.\star,t)) = 2 + \mu (t) \gt 1 + \mu (t) = \mu (a \bullet t)$

  • $\mu ((\lambda x.t)\,u) = 2 + \mu (t) + \mu (u) \gt \mu (t) + \mu (u) \geq \mu ((u/x)t)$

  • $\mu (\delta _{\otimes }(u \otimes v,x y.w)) = 2 + \mu (u) + \mu (v) + \mu (w) \gt \mu (u) + \mu (v) + \mu (w) \geq \mu (u) + \mu ((v/y)w) \geq \mu ((u/x)(v/y)w) = \mu ((u/x,v/y)w)$ as $x$ does not occur in $v$

  • $\mu (\delta _{\&}^1(\langle t, u \rangle, x.v)) = 2 + \max (\mu (t),\mu (u)) + \mu (v) \gt \mu (t) + \mu (v) \geq \mu ((t/x)v)$

  • $\mu (\delta _{\&}^2(\langle t, u \rangle, x.v)) = 2 + \max (\mu (t),\mu (u)) + \mu (v) \gt \mu (u) + \mu (v) \geq \mu ((u/x)v)$

  • $\mu (\delta _{\oplus }(\textit{inl}(t),x.v,y.w)) = 2 + \mu (t) + \max (\mu (v), \mu (w)) \gt \mu (t) + \mu (v) \geq \mu ((t/x)v)$

  • $\mu (\delta _{\oplus }(\textit{inr}(t),x.v,y.w)) = 2 + \mu (t) + \max (\mu (v),\mu (w)) \gt \mu (t) + \mu (w) \geq \mu ((t/y)w)$

  • $\mu ({a.\star } \boldsymbol{+} b.\star ) = 2 \gt 1 = \mu ((a+b).\star )$

  • $\mu ((\lambda x.t) \boldsymbol{+} (\lambda x.u)) = 1 + \max (1 + \mu (t), 1 + \mu (u)) = 2 + \max (\mu (t),\mu (u)) = \mu (\lambda x.(t \boldsymbol{+} u))$

  • $\mu (\delta _{\otimes }(t \boldsymbol{+} u,x y.v)) = 2 + \max (\mu (t), \mu (u)) + \mu (v) = 1 + \max (1 + \mu (t) + \mu (v), 1 + \mu (u) + \mu (v)) = \mu (\delta _{\otimes }(t,x y.v) \boldsymbol{+} \delta _{\otimes }(u,x y.v))$

  • $\mu (\langle \rangle \boldsymbol{+} \langle \rangle ) = 2 \gt 1 = \mu (\langle \rangle )$

  • $\mu (\langle t, u \rangle \boldsymbol{+} \langle v, w \rangle ) = 1 + \max (1 + \max (\mu (t),\mu (u)), 1 + \max (\mu (v),\mu (w)))$ $= 2 + \max (\mu (t),\mu (u),\mu (v),\mu (w)) = 1 + \max (1+ \max (\mu (t),\mu (v)), 1 + \max (\mu (u),\mu (w))) = \mu (\langle t \boldsymbol{+} v, u \boldsymbol{+} w \rangle )$

  • $\mu (\delta _{\oplus }(t \boldsymbol{+} u,x.v,y.w)) = 2 + \max (\mu (t),\mu (u)) + \max (\mu (v), \mu (w)) = 1 + \max (1 + \mu (t) + \max (\mu (v), \mu (w)), 1 + \mu (u) + \max (\mu (v), \mu (w))) = \mu (\delta _{\oplus }(t,x.v,y.w) \boldsymbol{+} \delta _{\oplus }(u,x.v,y.w))$

  • $\mu (a \bullet b.\star ) = 2 \gt 1 = \mu ((a \times b).\star )$

  • $\mu (a \bullet \lambda x. t) = 2 + \mu (t) = \mu (\lambda x. a \bullet t)$

  • $\mu (\delta _{\otimes }(a \bullet t,x y.v)) = 2 + \mu (t) + \mu (v) = \mu (a \bullet \delta _{\otimes }(t,x y.v))$

  • $\mu (a \bullet \langle \rangle ) = 2 \gt 1 = \mu (\langle \rangle )$

  • $\mu (a \bullet \langle t, u \rangle ) = 2 + \max (\mu (t),\mu (u)) = 1 + \max (1 + \mu (t),1 + \mu (u)) = \mu (\langle a \bullet t, a \bullet u \rangle )$

  • $\mu (\delta _{\oplus }(a \bullet t,x.v,y.w)) = 2 + \mu (t) + \max (\mu (v),\mu (w)) = \mu (a \bullet \delta _{\oplus }(t,x.v,y.w))$

4.3 Elimination contexts

The second part of the proof is a standard generalization of the notion of head variable. In the $\lambda$ -calculus, we can decompose a term $t$ as a sequence of applications $t = u\,v_1\,\ldots \,v_n$ , with terms $v_1, \dots, v_n$ and a term $u$ , which is not an application. Then $u$ may either be a variable, in which case it is the head variable of the term, or an abstraction.

Similarly, any proof in the ${\mathcal L}^{\mathcal S}$ -calculus can be decomposed into a sequence of elimination rules, forming an elimination context, and a proof $u$ that is either a variable, an introduction, a sum, or a product.

Definition 4.6 (Elimination context). An elimination context is a proof with a single free variable, written $\_$ , that is a proof in the language

\begin{equation*} K = \_ \mid \delta _{{\mathfrak {1}}}(K,u) \mid K\,u \mid \delta _{\otimes }(K,x y.v) \mid \delta _{{\mathfrak {0}}}(K) \mid \delta _{\&}^1(K,x.r) \mid \delta _{\&}^2(K,x.r) \mid \delta _{\oplus }(K,x.r,y.s) \end{equation*}

where $u$ is a closed proof, $FV(v) = \{x,y\}$ , $FV(r) \subseteq \{x\}$ , and $FV(s) \subseteq \{y\}$ .
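
Elimination contexts transcribe naturally as lists of frames, the frame adjacent to the hole first; with this representation, the decomposition of Lemma 4.9 below is literally the decomposition of a nonempty list alluded to there. A sketch, reusing the hypothetical Proof type and mu of the previous sketch:

```haskell
-- One frame per elimination rule allowed in Definition 4.6.
data Frame
  = FDeltaOne Proof                             -- delta_1(_, u)
  | FApp Proof                                  -- _ u
  | FDeltaTens (String, String, Proof)          -- delta_(x)(_, xy.v)
  | FDeltaZero                                  -- delta_0(_)
  | FDeltaAmp1 (String, Proof)                  -- delta_&^1(_, x.r)
  | FDeltaAmp2 (String, Proof)                  -- delta_&^2(_, x.r)
  | FDeltaPlus (String, Proof) (String, Proof)  -- delta_(+)(_, x.r, y.s)

type Ctx = [Frame]   -- K; the empty list is the hole _

-- K{t}: wrap the frames around t, innermost frame first.
plug :: Ctx -> Proof -> Proof
plug k t = foldl wrap t k
  where
    wrap u (FDeltaOne v)    = DeltaOne u v
    wrap u (FApp v)         = App u v
    wrap u (FDeltaTens b)   = DeltaTens u b
    wrap u FDeltaZero       = DeltaZero u
    wrap u (FDeltaAmp1 b)   = DeltaAmp1 u b
    wrap u (FDeltaAmp2 b)   = DeltaAmp2 u b
    wrap u (FDeltaPlus b c) = DeltaPlus u b c

-- mu(K), reading the hole as a variable (so that mu(_) = 0); Lemma 4.7
-- then reads: mu (plug k t) == muCtx k + mu t.
muCtx :: Ctx -> Int
muCtx k = mu (plug k (Var "_"))

-- Lemma 4.9 on this representation: for k /= [], take K2 = [head k]
-- (the rule applied directly to the hole) and K1 = tail k, so that
-- plug k t = plug (tail k) (plug [head k] t).
```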

In the case of elimination contexts, Lemma 4.3 can be strengthened.

Lemma 4.7. $\mu (K\{t\}) = \mu (K) + \mu (t)$

Proof. By induction on $K$.

  • If $K = \_$ , then $\mu (K) = 0$ and $K\{t\} = t$ . We have $\mu (K\{t\}) = \mu (t) = \mu (K) + \mu (t)$ .

  • If $K = \delta _{{\mathfrak{1}}}(K_1,u)$ then $K\{t\} = \delta _{{\mathfrak{1}}}(K_1\{t\},u)$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\}) + \mu (u) = 1 + \mu (K_1) + \mu (t) + \mu (u) = \mu (K) + \mu (t)$ .

  • If $K = K_1\,u$ then $K\{t\} = K_1\{t\}\,u$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\}) + \mu (u) = 1 + \mu (K_1) + \mu (t) + \mu (u) = \mu (K) + \mu (t)$ .

  • If $K = \delta _{\otimes }(K_1,xy.v)$ , then $K\{t\} = \delta _{\otimes }(K_1\{t\},xy.v)$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\}) + \mu (v) = 1 + \mu (K_1) + \mu (t) + \mu (v) = \mu (K) + \mu (t)$ .

  • If $K = \delta _{{\mathfrak{0}}}(K_1)$ , then $K\{t\} = \delta _{{\mathfrak{0}}}(K_1\{t\})$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\})= 1 + \mu (K_1) + \mu (t) = \mu (K) + \mu (t)$ .

  • If $K = \delta _{\&}^1(K_1,x.r)$ , then $K\{t\} = \delta _{\&}^1(K_1\{t\},x.r)$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\}) + \mu (r) = 1 + \mu (K_1) + \mu (t) + \mu (r) = \mu (K) + \mu (t)$ . The same holds if $K = \delta _{\&}^2(K_1,y.s)$ .

  • If $K = \delta _{\oplus }(K_1,x.r,y.s)$ , then $K\{t\} = \delta _{\oplus }(K_1\{t\},x.r,y.s)$ . We have, by induction hypothesis, $\mu (K\{t\}) = 1 + \mu (K_1\{t\}) + \max (\mu (r), \mu (s)) = 1 + \mu (K_1) + \mu (t) + \max (\mu (r), \mu (s)) = \mu (K) + \mu (t)$ .

Note that in Example 4.4, $t = \delta _{{\mathfrak{0}}}(y)$ is not an elimination context as $\_$ does not occur in it. Note also that the proof of Lemma 4.7 uses the fact that the function $\mu$ is a mix between a size function and a depth function. The corresponding lemma holds neither for the size function nor for the depth function.

Lemma 4.8 (Decomposition of a proof). If $t$ is an irreducible proof such that $x:C \vdash t:A$ , then there exist an elimination context $K$ , a proof $u$ , and a proposition $B$ , such that $\_:B \vdash K:A$ , $x:C \vdash u:B$ , $u$ is either the variable $x$ , an introduction, a sum, or a product, and $t = K\{u\}$ .

Proof. By induction on the structure of $t$ .

  • If $t$ is the variable $x$ , an introduction, a sum, or a product, we take $K = \_$ , $u = t$ , and $B = A$ .

  • If $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ , then $t_1$ is not a closed proof as otherwise it would be a closed irreducible proof of $\mathfrak{1}$ , hence, by Theorem 2.30, it would be an introduction and $t$ would not be irreducible. Thus, by the inversion property, $x:C \vdash t_1:{\mathfrak{1}}$ and $\vdash t_2:A$ . By induction hypothesis, there exist $K_1$ , $u_1$ and $B_1$ such that $\_:B_1 \vdash K_1:{\mathfrak{1}}$ , $x:C \vdash u_1:B_1$ , and $t_1 = K_1\{u_1\}$ . We take $u = u_1$ , $K = \delta _{{\mathfrak{1}}}(K_1,t_2)$ , and $B = B_1$ . We have $\_:B \vdash K:A$ , $x:C \vdash u:B$ , and $K\{u\} = \delta _{{\mathfrak{1}}}(K_1\{u_1\},t_2) = t$ .

  • If $t = t_1\,t_2$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \delta _{\otimes }(t_1,yz.t_2)$, then $t_1$ is not a closed proof as otherwise it would be a closed irreducible proof of a multiplicative conjunction $\otimes$, hence, by Theorem 2.30, it would be an introduction, a sum, or a product, and $t$ would not be irreducible. Thus, by the inversion property, $x:C \vdash t_1:D_1 \otimes D_2$ and $y:D_1, z:D_2 \vdash t_2:A$. By induction hypothesis, there exist $K_1$, $u_1$, and $B_1$ such that $\_:B_1 \vdash K_1:D_1 \otimes D_2$, $x:C \vdash u_1:B_1$, and $t_1 = K_1\{u_1\}$. We take $u = u_1$, $K = \delta _{\otimes }(K_1,y z.t_2)$, and $B = B_1$. We have $\_:B \vdash K:A$, $x:C \vdash u:B$, and $K\{u\} = \delta _{\otimes }(K_1\{u_1\},y z.t_2) = t$.

  • If $t = \delta _{{\mathfrak{0}}}(t_1)$, then, by Theorem 2.30, $t_1$ is not a closed proof as there is no closed irreducible proof of $\mathfrak{0}$. Thus, by the inversion property, $x:C \vdash t_1:{\mathfrak{0}}$. By induction hypothesis, there exist $K_1$, $u_1$, and $B_1$ such that $\_:B_1 \vdash K_1:{\mathfrak{0}}$, $x:C \vdash u_1:B_1$, and $t_1 = K_1\{u_1\}$. We take $u = u_1$, $K = \delta _{{\mathfrak{0}}}(K_1)$, and $B = B_1$. We have $\_:B \vdash K:A$, $x:C \vdash u:B$, and $K\{u\} = \delta _{{\mathfrak{0}}}(K_1\{u_1\}) = t$.

  • If $t = \delta _{\&}^1(t_1,y.t_2)$ or $t = \delta _{\&}^2(t_1,y.t_2)$ , we apply the same method as for the case $t = \delta _{{\mathfrak{1}}}(t_1,t_2)$ .

  • If $t = \delta _{\oplus }(t_1,y.t_2,z.t_3)$ , we apply the same method as for the case $t = \delta _{\otimes }(t_1,yz.t_2)$ .

A final lemma shows that we can always decompose an elimination context $K$ different from $\_$ into a smaller elimination context $K_1$ and a last elimination rule $K_2$ . This is similar to the fact that we can always decompose a nonempty list into a smaller list and its last element.

Lemma 4.9 (Decomposition of an elimination context). If $K$ is an elimination context such that $\_:A \vdash K:B$ and $K \neq \_$ , then $K$ has the form $K_1\{K_2\}$ where $K_1$ is an elimination context and $K_2$ is an elimination context formed with a single elimination rule, that is, the elimination rule of the top symbol of $A$ .

Proof. As $K$ is not $\_$, it has the form $K = L_1\{L_2\}$, where $L_1$ is an elimination context formed with a single elimination rule and $L_2$ is an elimination context. If $L_2 = \_$, we take $K_1 = \_$, $K_2 = L_1$ and, as the proof is well-formed, $K_2$ must be an elimination of the top symbol of $A$. Otherwise, by induction hypothesis, $L_2$ has the form $L_2 = K'_{\!\!1}\{K'_{\!\!2}\}$, and $K'_{\!\!2}$ is an elimination of the top symbol of $A$. Hence, $K = L_1\{K'_{\!\!1}\{K'_{\!\!2}\}\}$. We take $K_1 = L_1\{K'_{\!\!1}\}$, $K_2 = K'_{\!\!2}$.

4.4 Linearity

We now have the tools to prove the linearity theorem expressing that if $A$ is a proposition, $B$ is a proposition of $\mathcal V$ , $t$ is a proof such that $x:A \vdash t:B$ , and $u_1$ and $u_2$ are two closed proofs of $A$ , then

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \equiv t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a \bullet u_1\} \equiv a \bullet t\{u_1\}\end{equation*}

We proceed by induction on the measure $\mu (t)$ of the proof $t$, but the case analysis is nontrivial. Indeed, when $t$ is an elimination, for example, when $t = t_1\,t_2$, the variable $x$ must occur in $t_1$, and we would like to apply the induction hypothesis to this proof. But we cannot, because $t_1$ is a proof of an implication, which is not in $\mathcal{V}$. This leads us to first decompose the proof $t$ into a proof of the form $K\{t'\}$ where $K$ is an elimination context and $t'$ is either the variable $x$, an introduction, a sum, or a product, and to analyze the different possibilities for $t'$. The cases where $t'$ is an introduction, a sum, or a product are easy, but the case where it is the variable $x$, that is, where $t = K\{x\}$, is more complex. Indeed, in this case, we need to prove

\begin{equation*}K\{u_1 \boldsymbol {+} u_2\} \equiv K\{u_1\} \boldsymbol {+} K\{u_2\} \qquad \textrm {and}\qquad K\{a \bullet u_1\} \equiv a \bullet K\{u_1\}\end{equation*}

and this leads to a second case analysis where we consider the last elimination rule of $K$ and how it interacts with $u_1$ and $u_2$ .

For example, when $K = K_1 \{\delta _{\&}^1(\_,y.r)\}$ , then $u_1$ and $u_2$ are closed proofs of an additive conjunction $\&$ , thus they reduce to two pairs $\langle u_{11}, u_{12} \rangle$ and $\langle u_{21}, u_{22} \rangle$ , and $K\{u_1 \boldsymbol{+} u_2\}$ reduces to $K_1 \{r\}\{u_{11} \boldsymbol{+} u_{21}\}$ . So, we need to apply the induction hypothesis to the irreducible form of $K_1 \{r\}$ . To prove that this proof is smaller than $t$ , we need Lemmas 4.5 (hence Lemma 4.3) and 4.7.

In fact, this general case has several exceptions: the cases of the elimination of the multiplicative conjunction $\otimes$ and of the additive disjunction $\oplus$ are simplified because the sum commutes with the elimination rules, and not with the introduction rules, of these connectives. The case of the elimination of the connective $\mathfrak{0}$ is simplified because $\mathfrak{0}$ has no closed proof. The case of the elimination of the connective $\mathfrak{1}$ is simplified because no substitution occurs in $r$ in this case. The case of the elimination of the implication is simplified because this rule is just the modus ponens and not the generalized elimination rule of this connective. Thus, the only remaining cases are those of the elimination rules of the additive conjunction $\&$.

Theorem 4.10 (Linearity). If $A$ is a proposition, $B$ is a proposition of $\mathcal V$, $t$ is a proof such that $x:A \vdash t:B$, and $u_1$ and $u_2$ are two closed proofs of $A$, then

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \equiv t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a \bullet u_1\} \equiv a \bullet t\{u_1\}\end{equation*}

Proof. Without loss of generality, we can assume that $t$ is irreducible. We proceed by induction on $\mu (t)$ .

Using Lemma 4.8, the term $t$ can be decomposed as $K\{t'\}$ where $t'$ is either the variable $x$ , an introduction, a sum, or a product.

  • If $t'$ is an introduction, then, as $t$ is irreducible, $K = \_$. As $t'$ is a proof of $B \in{\mathcal V}$, it is either $a.\star$ or $\langle t_1, t_2 \rangle$. However, since $a.\star$ is not a proof in the context $x:A$, it is $\langle t_1, t_2 \rangle$. Using the induction hypothesis with $t_1$ and with $t_2$ ($\mu (t_1) \lt \mu (t')$, $\mu (t_2) \lt \mu (t')$), we get

    \begin{align*} t\{u_1 \boldsymbol{+} u_2\} \equiv \langle t_1\{u_1\} \boldsymbol{+} t_1\{u_2\}, t_2\{u_1\} \boldsymbol{+} t_2\{u_2\} \rangle \longleftarrow t\{u_1\} \boldsymbol{+} t\{u_2\} \end{align*}
    And
    \begin{align*} t\{a \bullet u_1\} \equiv \langle a \bullet t_1\{u_1\}, a \bullet t_2\{u_1\} \rangle & \longleftarrow a \bullet t\{u_1\} \end{align*}
  • If $t' = t_1 \boldsymbol{+} t_2$ , then using the induction hypothesis with $t_1$ , $t_2$ , and $K$ ( $\mu (t_1) \lt \mu (t)$ , $\mu (t_2) \lt \mu (t)$ , and $\mu (K) \lt \mu (t)$ ) and Lemma 3.4 (1., 2., and 7.), we get

    \begin{align*} t\{u_1 \boldsymbol{+} u_2\} & \equiv K\{(t_1\{u_1\} \boldsymbol{+} t_1\{u_2\}) \boldsymbol{+} (t_2\{u_1\} \boldsymbol{+} t_2\{u_2\})\}\\ &\equiv K\{(t_1\{u_1\} \boldsymbol{+} t_2\{u_1\}) \boldsymbol{+} (t_1\{u_2\} \boldsymbol{+} t_2\{u_2\})\} \equiv t\{u_1\} \boldsymbol{+} t\{u_2\} \end{align*}
    And
    \begin{align*} t\{a \bullet u_1\} & \equiv K\{a \bullet t_1\{u_1\} \boldsymbol{+} a \bullet t_2\{u_1\}\} \equiv K\{a \bullet (t_1\{u_1\} \boldsymbol{+} t_2\{u_1\})\} \equiv a \bullet t\{u_1\} \end{align*}
  • If $t' = b \bullet t_1$, then, using the induction hypothesis with $t_1$ and $K$ ($\mu (t_1) \lt \mu (t)$, $\mu (K) \lt \mu (t)$) and Lemma 3.4 (7. and 5.), we get

    \begin{align*} t\{u_1 \boldsymbol{+} u_2\} \equiv K\{b \bullet (t_1 \{u_1\} \boldsymbol{+} t_1\{u_2\})\} & \equiv K\{b \bullet t_1 \{u_1\} \boldsymbol{+} b \bullet t_1\{u_2\}\} \equiv t\{u_1\} \boldsymbol{+} t\{u_2\} \end{align*}
    And
    \begin{align*} t\{a \bullet u_1\} & \equiv K\{b \bullet a \bullet t_1 \{u_1\}\} \equiv K\{a \bullet b \bullet t_1 \{u_1\}\} \equiv a \bullet t\{u_1\} \end{align*}
  • If $t'$ is the variable $x$ , we need to prove

    \begin{equation*}K\{u_1 \boldsymbol {+} u_2\} \equiv K\{u_1\} \boldsymbol {+} K\{u_2\} \qquad \textrm {and}\qquad K\{a \bullet u_1\} \equiv a \bullet K\{u_1\}\end{equation*}
    By Lemma 4.9, $K$ has the form $K_1\{K_2\}$ , and $K_2$ is an elimination of the top symbol of $A$ . We consider the various cases for $K_2$ .
    1. If $K = K_1\{\delta _{{\mathfrak{1}}}(\_,r)\}$, then $u_1$ and $u_2$ are closed proofs of $\mathfrak{1}$, thus $u_1 \longrightarrow ^* b.\star$ and $u_2\longrightarrow ^* c.\star$. Using the induction hypothesis with the proof $K_1$ ($\mu (K_1) \lt \mu (K) = \mu (t)$) and Lemma 3.4 (8. and 5.), we get

      \begin{align*} K\{u_1 \boldsymbol{+} u_2\} &\longrightarrow ^* K_1 \{\delta _{{\mathfrak{1}}}({b.\star } \boldsymbol{+} c.\star,r)\} \longrightarrow ^* K_1 \{(b + c) \bullet r\} \equiv (b + c) \bullet K_1 \{r\}\\ &\equiv b \bullet K_1 \{r\} \boldsymbol{+} c \bullet K_1 \{r\} \equiv K_1 \{b \bullet r\} \boldsymbol{+} K_1 \{c \bullet r\}\\ & \mathrel{{}^*{\longleftarrow }} K_1 \{\delta _{{\mathfrak{1}}}(b.\star,r)\} \boldsymbol{+} K_1 \{\delta _{{\mathfrak{1}}}(c.\star,r)\} \mathrel{{}^*{\longleftarrow }} K\{u_1\} \boldsymbol{+} K\{u_2\} \end{align*}
      And
      \begin{align*} K\{a \bullet u_1\} &\longrightarrow ^* K_1 \{\delta _{{\mathfrak{1}}}(a \bullet b.\star,r)\} \longrightarrow ^* K_1 \{(a \times b) \bullet r\} \equiv (a \times b) \bullet K_1 \{r\}\\ & \equiv a \bullet b \bullet K_1 \{r\} \equiv a \bullet K_1 \{b \bullet r\} \mathrel{{}^*{\longleftarrow }} a \bullet K_1 \{\delta _{{\mathfrak{1}}}(b.\star,r)\} \mathrel{{}^*{\longleftarrow }} a \bullet K\{u_1\} \end{align*}
    2. If $K = K_1 \{\_\,s\}$ , then $u_1$ and $u_2$ are closed proofs of an implication, thus $u_1 \longrightarrow ^* \lambda y. u'_{\!\!1}$ and $u_2 \longrightarrow ^* \lambda y. u'_{\!\!2}$ . Using the induction hypothesis with the proof $K_1$ ( $\mu (K_1 ) \lt \mu (K) = \mu (t)$ ), we get

      \begin{align*} K\{u_1 \boldsymbol{+} u_2\} &\longrightarrow ^* K_1 \{(\lambda y. u'_{\!\!1} \boldsymbol{+} \lambda y. u'_{\!\!2})\,s\} \longrightarrow ^* K_1 \{u'_{\!\!1}\{s\} \boldsymbol{+} u'_{\!\!2}\{s\}\}\\ &\equiv K_1 \{u'_{\!\!1}\{s\}\} \boldsymbol{+} K_1 \{u'_{\!\!2}\{s\}\} \mathrel{{}^*{\longleftarrow }} K_1 \{(\lambda y. u'_{\!\!1})\,s\} \boldsymbol{+} K_1 \{(\lambda y. u'_{\!\!2})\,s\}\\ & \mathrel{{}^*{\longleftarrow }} K\{u_1\} \boldsymbol{+} K\{u_2\} \end{align*}
      And
      \begin{align*} K\{a \bullet u_1\} &\longrightarrow ^* K_1 \{(a \bullet \lambda y. u'_{\!\!1})\,s\} \longrightarrow ^* K_1 \{a \bullet u'_{\!\!1}\{s\}\}\\ & \equiv a \bullet K_1 \{u'_{\!\!1}\{s\}\} \longleftarrow a \bullet K_1 \{(\lambda y. u'_{\!\!1})\,s\} \mathrel{{}^*{\longleftarrow }} a \bullet K\{u_1\} \end{align*}
    3. If $K = K_1 \{\delta _{\otimes }(\_,y z.r)\}$ , then, using the induction hypothesis with the proof $K_1$ ( $\mu (K_1 ) \lt \mu (K) = \mu (t)$ ), we get

      \begin{align*} K\{u_1 \boldsymbol{+} u_2\} \longrightarrow K_1 \{\delta _{\otimes }(u_1,y z.r) \boldsymbol{+} \delta _{\otimes }(u_2,y z.r)\} &\equiv K\{u_1\} \boldsymbol{+} K\{u_2\} \end{align*}
      And
      \begin{align*} K\{a \bullet u_1\} \longrightarrow K_1 \{a \bullet \delta _{\otimes }(u_1,y z.r)\} &\equiv a \bullet K\{u_1\} \end{align*}
    4. The case $K = K_1 \{\delta _{{\mathfrak{0}}}(\_)\}$ is not possible as $u_1$ would be a closed proof of $\mathfrak{0}$ and there is no such proof.

    5. If $K = K_1 \{\delta _{\&}^1(\_,y.r)\}$, then $u_1$ and $u_2$ are closed proofs of an additive conjunction $\&$, thus $u_1 \longrightarrow ^* \langle u_{11}, u_{12} \rangle$ and $u_2 \longrightarrow ^* \langle u_{21}, u_{22} \rangle$. Let $r'$ be the irreducible form of $K_1 \{r\}$. Using the induction hypothesis with the proof $r'$ (because, with Lemmas 4.5 and 4.7, we have $\mu (r') \leq \mu (K_1 \{r\}) = \mu (K_1 ) + \mu (r) \lt \mu (K_1 ) + \mu (r) + 1 = \mu (K) = \mu (t)$), we get

      \begin{align*} K\{u_1 \boldsymbol{+} u_2\} &\longrightarrow ^* K_1 \{\delta _{\&}^1(\langle u_{11}, u_{12} \rangle \boldsymbol{+} \langle u_{21}, u_{22} \rangle, y.r)\} \longrightarrow ^* K_1 \{r\{u_{11} \boldsymbol{+} u_{21}\}\}\\ & \longrightarrow ^* r'\{u_{11} \boldsymbol{+} u_{21}\} \equiv r'\{u_{11}\} \boldsymbol{+} r'\{u_{21}\} \mathrel{{}^*{\longleftarrow }} K_1 \{r\{u_{11}\}\} \boldsymbol{+} K_1 \{r\{u_{21}\}\}\\ & \mathrel{{}^*{\longleftarrow }} K_1 \{\delta _{\&}^1(\langle u_{11}, u_{12} \rangle,y.r)\}\boldsymbol{+} K_1 \{\delta _{\&}^1(\langle u_{21}, u_{22} \rangle,y.r)\} \mathrel{{}^*{\longleftarrow }} K\{u_1\} \boldsymbol{+} K\{u_2\} \end{align*}
      And
      \begin{align*} K\{a \bullet u_1\} &\longrightarrow ^* K_1 \{\delta _{\&}^1(a \bullet \langle u_{11}, u_{12} \rangle,y.r)\} \longrightarrow ^* K_1 \{r\{a \bullet u_{11}\}\} \longrightarrow ^* r'\{a \bullet u_{11}\}\\ & \equiv a \bullet r'\{u_{11}\} \mathrel{{}^*{\longleftarrow }} a \bullet K_1 \{r\{u_{11}\}\} \longleftarrow a \bullet K_1 \{\delta _{\&}^1(\langle u_{11}, u_{12} \rangle,y.r)\}\\ & \mathrel{{}^*{\longleftarrow }} a \bullet K\{u_1\} \end{align*}
    6. If $K = K_1 \{\delta _{\&}^2(\_,y.r)\}$ , the proof is similar.

    7. If $K = K_1 \{\delta _{\oplus }(\_,y.r, z.s)\}$ , then, using the induction hypothesis with the proof $K_1$ ( $\mu (K_1 ) \lt \mu (K) = \mu (t)$ ), we get

      \begin{align*} K\{u_1 \boldsymbol{+} u_2\} \longrightarrow K_1 \{\delta _{\oplus }( u_1,y.r, z.s) \boldsymbol{+} \delta _{\oplus }( u_2,y.r, z.s)\} &\equiv K\{u_1\} \boldsymbol{+} K\{u_2\} \end{align*}
      And
      \begin{align*} K\{a \bullet u_1\} \longrightarrow K_1 \{a \bullet \delta _{\oplus }(u_1,y.r, z.s)\} & \equiv a \bullet K\{u_1\} \end{align*}

We can now generalize the linearity result, as explained in Section 4.1, by using the observational equivalence $\sim$ (cf. Definition 4.1).

Corollary 4.11. If $A$ and $B$ are any propositions, $t$ a proof such that $x:A \vdash t:B$ and $u_1$ and $u_2$ two closed proofs of $A$ , then

\begin{equation*}t\{u_1 \boldsymbol {+} u_2\} \sim t\{u_1\} \boldsymbol {+} t\{u_2\} \qquad \qquad \textrm {and}\qquad \qquad t\{a \bullet u_1\} \sim a \bullet t\{u_1\}\end{equation*}

Proof. Let $C \in{\mathcal V}$ and $c$ be a proof such that $\_:B \vdash c:C$ . Then applying Theorem 4.10 to the proof $c\{t\}$ we get

\begin{equation*}c\{t\{u_1 \boldsymbol {+} u_2\}\} \equiv c\{t\{u_1\}\} \boldsymbol {+} c\{t\{u_2\}\} \qquad \qquad \textrm {and}\qquad \qquad c\{t\{a \bullet u_1\}\} \equiv a \bullet c\{t\{u_1\}\}\end{equation*}

and applying it again to the proof $c$ we get

\begin{equation*}c\{t\{u_1\} \boldsymbol {+} t\{u_2\}\} \equiv c\{t\{u_1\}\} \boldsymbol {+} c\{t\{u_2\}\} \qquad \qquad \textrm {and}\qquad \qquad c\{a \bullet t\{u_1\}\} \equiv a \bullet c\{t\{u_1\}\}\end{equation*}

Thus

\begin{equation*}c\{t\{u_1 \boldsymbol {+} u_2\}\} \equiv c\{t\{u_1\} \boldsymbol {+} t\{u_2\}\} \qquad \qquad \textrm {and}\qquad \qquad c\{t\{a \bullet u_1\}\} \equiv c\{a \bullet t\{u_1\}\}\end{equation*}

that is

\begin{align*} t\{u_1 \boldsymbol{+} u_2\} \sim t\{u_1\} \boldsymbol{+} t\{u_2\} &\qquad \qquad \textrm{and}\qquad \qquad t\{a \bullet u_1\} \sim a \bullet t\{u_1\} \end{align*}

The main result, as announced in Section 4.1, showing that proofs of $A\multimap B$ are linear functions, is a direct consequence of Theorem 4.10 and Corollary 4.11.

Corollary 4.12. Let $A$ and $B$ be propositions, let $t$ be a closed proof of $A \multimap B$, let $u_1$ and $u_2$ be closed proofs of $A$, and let $a$ be a scalar.

Then, if $B \in{\mathcal V}$ , we have

\begin{equation*}t\,(u_1 \boldsymbol {+} u_2) \equiv (t\,u_1) \boldsymbol {+} (t\,u_2) \qquad \qquad \textrm {and}\qquad \qquad t\,(a\bullet u_1) \equiv a\bullet (t\,u_1)\end{equation*}

and in the general case, we have

\begin{equation*}t\,(u_1 \boldsymbol {+} u_2) \sim (t\,u_1) \boldsymbol {+} (t\,u_2) \qquad \qquad \textrm {and}\qquad \qquad t\,(a\bullet u_1) \sim a\bullet (t\,u_1)\end{equation*}

Proof. As $t$ is a closed proof of $A \multimap B$ , using Theorem 2.30, it reduces to an irreducible proof of the form $\lambda x. t'$ . Let $u'_{\!\!1}$ be the irreducible form of $u_1$ , and $u'_{\!\!2}$ that of $u_2$ .

If $B \in{\mathcal V}$ , using Theorem 4.10, we have

\begin{align*} t\,(u_1 \boldsymbol{+} u_2) \longrightarrow ^* t'\{u'_{\!\!1} \boldsymbol{+} u'_{\!\!2}\} &\equiv t'\{u'_{\!\!1}\} \boldsymbol{+} t'\{u'_{\!\!2}\} \mathrel{{}^*{\longleftarrow }} (t\,u_1) \boldsymbol{+} (t\,u_2) \\ t\,(a\bullet u_1) \longrightarrow ^* t'\{a \bullet u'_{\!\!1}\} &\equiv a \bullet t'\{u'_{\!\!1}\} \mathrel{{}^*{\longleftarrow }} a\bullet (t\,u_1) \end{align*}

In the general case, using Corollary 4.11, we have

\begin{align*} t\,(u_1 \boldsymbol{+} u_2) \longrightarrow ^* t'\{u'_{\!\!1} \boldsymbol{+} u'_{\!\!2}\} &\sim t'\{u'_{\!\!1}\} \boldsymbol{+} t'\{u'_{\!\!2}\} \mathrel{{}^*{\longleftarrow }} (t\,u_1) \boldsymbol{+} (t\,u_2) \\ t\,(a\bullet u_1) \longrightarrow ^* t'\{a \bullet u'_{\!\!1}\} &\sim a \bullet t'\{u'_{\!\!1}\} \mathrel{{}^*{\longleftarrow }} a\bullet (t\,u_1) \end{align*}

Finally, the next corollary is the converse of Theorem 3.10.

Corollary 4.13. Let $A, B \in{\mathcal V}$, such that $d(A) = m$ and $d(B) = n$, and $t$ be a closed proof of $A \multimap B$. Then the function $F$ from ${\mathcal S}^m$ to ${\mathcal S}^n$, defined as $F(\textbf{u}) = \underline{t\,\overline{\textbf{u}}^A}$, is linear.

Proof. Using Corollary 4.12 and Lemmas 3.7 and 3.8, we have

\begin{align*} F(\textbf{u} + \textbf{v}) = \underline{t\,\overline{\textbf{u} + \textbf{v}}^A} = \underline{t\,(\overline{\textbf{u}}^A \boldsymbol{+} \overline{\textbf{v}}^A)} &= \underline{t\,\overline{\textbf{u}}^A \boldsymbol{+} t\,\overline{\textbf{v}}^A} = \underline{t\,\overline{\textbf{u}}^A} + \underline{t\,\overline{\textbf{v}}^A} = F(\textbf{u}) + F(\textbf{v}) \\ F(a\,\textbf{u}) = \underline{t\,\overline{a\textbf{u}}^A} = \underline{t\,(a \bullet \overline{\textbf{u}}^A)} &= \underline{a\bullet t\,\overline{\textbf{u}}^A} = a\,\underline{t\,\overline{\textbf{u}}^A} = a\,F(\textbf{u}) \end{align*}

Remark 4.14. Theorem 4.10 and its corollaries hold for linear proofs, but not for nonlinear ones. Linearity is used, in an essential way, in two places. First, in the first case of the proof of Theorem 4.10, we remark that $a.\star$ is not a proof in the context $x:A$. Indeed, if $t$ could be $a.\star$, then linearity would be violated, as $1.\star \{4.\star \boldsymbol{+} 5.\star \} = 1.\star$, while $1.\star \{4.\star \} \boldsymbol{+} 1.\star \{5.\star \} \equiv 2.\star$. Second, in the proof of Lemma 4.8, we remark that when $t_1\,t_2$ is a proof in the context $x:A$, then $x$ must occur in $t_1$, and hence it does not occur in $t_2$, which is therefore closed. This way, the proof $t$ eventually has the form $K\{u\}$, which would not be the case if $x$ could occur in $t_2$ as well.

4.5 No-cloning

In the proof language of propositional intuitionistic logic extended with interstitial rules and scalars, but retaining the structural rules, the cloning function from ${\mathcal S}^2$ to ${\mathcal S}^4$, mapping $\left (\begin{smallmatrix} a\\b\end{smallmatrix}\right )$ to $\left (\begin{smallmatrix} a^2\\ab\\ab\\b^2\end{smallmatrix}\right )$, that is, the tensor product of the vector $\left (\begin{smallmatrix} a\\b\end{smallmatrix}\right )$ with itself, can be expressed (Díaz-Caro and Dowek 2023). But the proof given there is not a proof of a proposition of the ${\mathcal L}^{\mathcal S}$-calculus.

Moreover, by Corollary 4.13, no proof of $({\mathfrak{1}} \&{\mathfrak{1}}) \multimap (({\mathfrak{1}} \&{\mathfrak{1}}) \,\&\, ({\mathfrak{1}} \&{\mathfrak{1}}))$ in the ${\mathcal L}^{\mathcal S}$-calculus can express this function, because it is not linear.
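To make the failure of linearity explicit, consider the map $F(u) = u \otimes u$ on ${\mathcal S}^2$; the following routine calculation, stated for the vector map itself independently of any proof language, shows that it is, in general, neither additive nor homogeneous:

\begin{align*} F(u + v) &= (u + v) \otimes (u + v) = u \otimes u + u \otimes v + v \otimes u + v \otimes v \neq F(u) + F(v)\\ F(a\,u) &= (a\,u) \otimes (a\,u) = a^2\,(u \otimes u) = a^2\,F(u) \neq a\,F(u) \quad \textrm{whenever } a^2 \neq a \end{align*}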

5. The $\odot ^{\mathbb C}$ -Calculus and Its Application to Quantum Computing

There are two issues in the design of a quantum programming language. The first, addressed in this paper, is to take into account the linearity of the unitary operators and, for instance, avoid cloning. The second, addressed in Díaz-Caro and Dowek (2023), is to express the information-erasure, nonreversibility, and nondeterminism of the measurement.

5.1 The connective $\odot$

In Díaz-Caro and Dowek (2023), we have introduced, besides interstitial rules and scalars, a new connective $\odot$ (read “sup” for “superposition”) and given the type ${\mathcal Q}_1 ={\mathfrak{1}} \odot{\mathfrak{1}}$ to quantum bits, that is, superpositions of bits.

Since expressing the superposition $\alpha .{|{0}\rangle } + \beta .{|{1}\rangle }$ requires both $|{0}\rangle$ and $|{1}\rangle$, the connective $\odot$ has the introduction rule of the conjunction. And since the measurement in the basis $|{0}\rangle$, $|{1}\rangle$ yields either $|{0}\rangle$ or $|{1}\rangle$, the connective $\odot$ has the elimination rule of the disjunction. But, to express quantum algorithms, we also need to transform qubits using unitary operators and, to express these operators, we require further elimination rules for the connective $\odot$, similar to those of the conjunction.

Thus, the connective $\odot$ has an introduction rule $\odot$ -i similar to that of the conjunction, one elimination rule $\odot$ -e similar to that of the disjunction, used to express the information-erasing, nonreversible, and nondeterministic quantum measurement operators, and two elimination rules $\odot$ -e1 and $\odot$ -e2 similar to those of the conjunction, used to express the information-preserving, reversible, and deterministic unitary operators.

The $\odot ^{\mathbb C}$ -calculus can express quantum algorithms, including those using measurement, but as the use of variables is not restricted, it can also express nonlinear functions, such as cloning operators.

We can thus mix the two ideas and introduce the ${\mathcal L} \odot ^{\mathcal S}$-calculus, which is both an extension of the ${\mathcal L}^{\mathcal S}$-calculus with a $\odot$ connective and a linear restriction of the $\odot ^{\mathcal S}$-calculus. The ${\mathcal L}\odot ^{\mathcal S}$-calculus is obtained by adding the symbols $[.,.]$, $\delta _{\odot }^1$, $\delta _{\odot }^2$, and $\delta _{\odot }$, the deduction rules of Fig. 3, and the reduction rules of Fig. 4, to the ${\mathcal L}^{\mathcal S}$-calculus.

Figure 3. The deduction rules of the $\odot ^{\mathcal S}$-calculus.

Figure 4. The reduction rules of the $\odot ^{\mathcal S}$-calculus.

We use the symbols $[.,.]$, $\delta _{\odot }^1$, and $\delta _{\odot }^2$ to express vectors and matrices, just as in Section 3, except that the conjunction $\&$ is replaced with the connective $\odot$.

As the symbol $\delta _{\odot }$ makes it possible to express measurement operators, which are not linear, we cannot expect an analog of Corollary 4.12 for the full ${\mathcal L} \odot ^{\mathcal S}$-calculus; more generally, we cannot expect a calculus to both enjoy such a linearity property and express the measurement operators. Thus, the best we can expect is a linearity property for the restriction of the ${\mathcal L} \odot ^{\mathcal S}$-calculus excluding the $\delta _{\odot }$ symbol. But this result is a trivial consequence of Corollary 4.12: if the $\odot$-e rule is excluded, the connective $\odot$ is just a copy of the additive conjunction $\&$. So, we shall not give a full proof of this theorem.

In the same way, the subject reduction proof of the ${\mathcal L} \odot ^{\mathcal S}$-calculus is similar to the proof of Theorem 2.2, and the strong termination proof of the ${\mathcal L} \odot ^{\mathcal S}$-calculus is similar to the proof of Corollary 2.29, with a few extra lemmas proving the adequacy of the introduction and elimination symbols of the $\odot$ connective, similar to those in the strong termination proof of Díaz-Caro and Dowek (2023); so we shall not repeat these proofs. In contrast, the confluence property is lost, because the reduction rules of the ${\mathcal L} \odot ^{\mathcal S}$-calculus are nondeterministic.

Thus, we focus in this section on an informal discussion of how the ${\mathcal L} \odot ^{\mathbb C}$-calculus can be used as a quantum programming language.

5.2 The ${\mathcal L}\odot ^{\mathbb C}$-calculus as a quantum programming language

We first express the vectors and matrices as in Section 3, except that we use the connective $\odot$ instead of $\&$. In particular, the $n$-qubits, for $n \geq 1$, are expressed, in the basis ${|{0 \ldots 00}\rangle },{|{0 \ldots 01}\rangle }, \ldots,{|{1 \ldots 11}\rangle }$, as elements of ${\mathbb C}^{2^n}$, that is, as proofs of the vector proposition ${\mathcal Q}_n$ defined by induction on $n$ as follows: ${\mathcal Q}_0 ={\mathfrak{1}}$ and ${\mathcal Q}_{n+1} ={\mathcal Q}_n \odot{\mathcal Q}_n$. For example, the proposition ${\mathcal Q}_2$ is $({\mathfrak{1}} \odot{\mathfrak{1}}) \odot ({\mathfrak{1}} \odot{\mathfrak{1}})$, and the proof $[[a.\star,b.\star ],[c.\star,d.\star ]]$ represents the vector $a .{|{00}\rangle } + b .{|{01}\rangle } + c .{|{10}\rangle } + d .{|{11}\rangle }$. For instance, the vector $\frac{1}{\sqrt{2}}{|{00}\rangle } + \frac{1}{\sqrt{2}}{|{11}\rangle }$ is represented by the proof $[[\frac{1}{\sqrt{2}}.\star,0.\star ],[0.\star,\frac{1}{\sqrt{2}}.\star ]]$.
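To fix intuitions, the closed irreducible proofs of ${\mathcal Q}_n$ can be read as balanced binary trees of scalars. The following Haskell sketch encodes amplitude vectors of length $2^n$ as such proofs; the names `Qubit`, `Star`, `Pair`, and `encode` are ours, with `Star a` standing for $a.\star$ and `Pair u v` for $[u,v]$:

```haskell
import Data.Complex

-- Closed irreducible proofs of Q_n, read as balanced binary trees of
-- scalars: 'Star a' stands for a.* and 'Pair u v' for [u,v].
data Qubit = Star (Complex Double) | Pair Qubit Qubit
  deriving Show

-- Encode an amplitude vector as a proof of Q_n
-- (assumes the list has length 2^n for some n >= 0).
encode :: [Complex Double] -> Qubit
encode [a] = Star a
encode xs  = Pair (encode l) (encode r)
  where (l, r) = splitAt (length xs `div` 2) xs

-- The state (1/sqrt 2)(|00> + |11>) from the text:
bell :: Qubit
bell = encode [1 / sqrt 2, 0, 0, 1 / sqrt 2]
```

Evaluating `bell` yields `Pair (Pair (Star ...) (Star 0)) (Pair (Star 0) (Star ...))`, that is, exactly the nested-pair proof $[[\frac{1}{\sqrt{2}}.\star,0.\star ],[0.\star,\frac{1}{\sqrt{2}}.\star ]]$ given above.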

It has been shown (Díaz-Caro and Dowek 2023) that the $\odot ^{\mathbb C}$-calculus, with a reduction strategy restricting the reduction of $\delta _{\odot }([t,u],x.v,y.w)$ to the cases where $t$ and $u$ are closed irreducible proofs, can be used to express quantum algorithms. We now show that the same holds for the ${\mathcal L}\odot ^{\mathbb C}$-calculus.

As we have already seen how to express linear maps in the ${\mathcal L}\odot ^{\mathbb C}$ -calculus, we now turn to the expression of the measurement operators.

Definition 5.1 (Norm of a vector). If $t$ is a closed irreducible proof of ${\mathcal Q}_n$ , we define the square of the norm $\|t\|^2$ of $t$ by induction on $n$ .

  • If $n = 0$ , then $t = a.\star$ and we take $\|t\|^2 = |a|^2$ .

  • If $n = n'+1$ , then $t = [u_1,u_2]$ and we take $\|t\|^2 = \|u_1\|^2 + \|u_2\|^2$ .
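Definition 5.1 translates directly into a recursive function on the encoded proofs; a minimal sketch, reusing the hypothetical `Qubit` type from the previous block:

```haskell
import Data.Complex

data Qubit = Star (Complex Double) | Pair Qubit Qubit  -- as in the previous sketch

-- Squared norm of a closed irreducible proof of Q_n (Definition 5.1).
normSq :: Qubit -> Double
normSq (Star a)   = realPart (a * conjugate a)  -- |a|^2
normSq (Pair u v) = normSq u + normSq v

-- e.g. normSq (Pair (Star (1/sqrt 2)) (Star (1/sqrt 2))) == 1.0,
-- up to floating-point rounding.
```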

We take the convention that any closed irreducible proof $u$ of ${\mathcal Q}_{n}$ , expressing a nonzero vector $\underline{u} \in{\mathbb C}^{2^n}$ , is an alternative expression of the $n$ -qubit $\frac{\underline{u}}{\|\underline{u}\|}$ . For example, the qubit $\frac{1}{\sqrt{2}} .{|{0}\rangle } + \frac{1}{\sqrt{2}} .{|{1}\rangle }$ is expressed as the proof $[\frac{1}{\sqrt{2}}.\star,\frac{1}{\sqrt{2}}.\star ]$ , but also as the proof $[1.\star,1.\star ]$ .

Definition 5.2 (Probabilistic reduction). Probabilities are assigned to the nondeterministic reductions of closed proofs of the form $\delta _{\odot }(u,x.v,y.w)$ as follows. A proof of the form $\delta _{\odot }([u_1,u_2],x.v,y.w)$ where $u_1$ and $u_2$ are closed irreducible proofs of ${\mathcal Q}_n$ reduces to $(u_1/x)v$ with probability $\frac{\|u_1\|^2}{\|u_1\|^2 + \|u_2\|^2}$ and to $(u_2/y)w$ with probability $\frac{\|u_2\|^2}{\|u_1\|^2 + \|u_2\|^2}$ , when $\|u_1\|^2$ and $\|u_2\|^2$ are not both $0$ . When $\|u_1\|^2 = \|u_2\|^2 = 0$ , or $u_1$ and $u_2$ are proofs of propositions of a different form, we assign any probability, for example, $\tfrac{1}{2}$ , to both reductions.
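The probability assignment of Definition 5.2 is plain arithmetic on squared norms; a sketch (`branchProbs` is our name, and the 1/2, 1/2 convention in the degenerate case is the one chosen above):

```haskell
-- Probabilities of the two reducts of delta_sup([u1,u2], x.v, y.w),
-- given the squared norms n1 = ||u1||^2 and n2 = ||u2||^2 (Definition 5.2).
branchProbs :: Double -> Double -> (Double, Double)
branchProbs n1 n2
  | s == 0    = (1/2, 1/2)       -- conventional case where both norms are 0
  | otherwise = (n1 / s, n2 / s)
  where s = n1 + n2
```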

Definition 5.3 (Measurement operator). If $n$ is a nonzero natural number, we define the measurement operator $\pi _n$ , measuring the first qubit of an $n$ -qubit, as the proof

\begin{equation*}\pi _n = \lambda x. \delta _{\odot }(x, y. [y,0_{{\mathcal Q}_{n-1}}],z. [0_{{\mathcal Q}_{n-1}},z])\end{equation*}

of the proposition ${\mathcal Q}_{n} \multimap{\mathcal Q}_{n}$, where the proof $0_{{\mathcal Q}_{n-1}}$ is given in Definition 3.2.

Remark 5.4. If $t$ is a closed irreducible proof of ${\mathcal Q}_{n}$ of the form $[u_1,u_2]$, such that $\|t\|^2 = \|u_1\|^2 + \|u_2\|^2 \neq 0$, expressing the state of an $n$-qubit, then the proof $\pi _n\,t$ of the proposition ${\mathcal Q}_{n}$ reduces, with probabilities $\tfrac{\|u_1\|^2}{\|u_1\|^2 + \|u_2\|^2}$ and $\tfrac{\|u_2\|^2}{\|u_1\|^2 + \|u_2\|^2}$, to $[u_1,0_{{\mathcal Q}_{n-1}}]$ and to $[0_{{\mathcal Q}_{n-1}},u_2]$, which are the states of the $n$-qubit after the partial measurement of the first qubit.
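Rather than sampling, we can compute the full outcome distribution of $\pi _n$ described in Remark 5.4; a sketch combining the pieces above (the assumed ingredients are our `Qubit` type, `normSq`, and a `zeroQ` function standing for $0_{{\mathcal Q}_{n}}$):

```haskell
import Data.Complex

data Qubit = Star (Complex Double) | Pair Qubit Qubit
  deriving Show  -- as in the earlier sketches

normSq :: Qubit -> Double
normSq (Star a)   = realPart (a * conjugate a)
normSq (Pair u v) = normSq u + normSq v

-- The zero proof 0_{Q_n} of Definition 3.2: all amplitudes are 0.
zeroQ :: Int -> Qubit
zeroQ 0 = Star 0
zeroQ n = Pair (zeroQ (n - 1)) (zeroQ (n - 1))

-- Outcome distribution of pi_n applied to [u1,u2] (Remark 5.4):
-- each result is paired with its probability.
piOutcomes :: Int -> Qubit -> [(Qubit, Double)]
piOutcomes n (Pair u1 u2) =
  [ (Pair u1 (zeroQ (n - 1)), p1)
  , (Pair (zeroQ (n - 1)) u2, p2) ]
  where s        = normSq u1 + normSq u2
        (p1, p2) = if s == 0 then (1/2, 1/2)
                             else (normSq u1 / s, normSq u2 / s)
piOutcomes _ q = [(q, 1)]  -- n = 0: nothing to measure
```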

The measurement operator $\pi _n$ returns the state of the $n$-qubit after the partial measurement of the first qubit. We now show that it is also possible to return the classical result of the measurement, that is, a Boolean.

Definition 5.5 (Booleans). Let $\textbf{0}$ and $\textbf{1}$ be the closed proofs of the proposition ${\mathcal B} ={\mathfrak{1}} \oplus{\mathfrak{1}}$ defined by $\textbf{0}=\textit{inl}(1.\star )$ and $\textbf{1}=\textit{inr}(1.\star )$.

As we do not have a weakening rule, we cannot define this measurement operator as

\begin{equation*}\lambda x. \delta _{\odot }(x, y.\textbf { 0},z.\textbf { 1})\end{equation*}

that would map all proofs of the form $[u_1,u_2]$ to $\textbf{0}$ or $\textbf{1}$ with the same probabilities as above: the variables $y$ and $z$ do not occur in the branches $\textbf{0}$ and $\textbf{1}$, so typing this proof would require weakening. So, we continue to consider proofs modulo renormalization, that is, any proof of the form $a \bullet \textbf{0}$ also represents the Boolean $\textbf{0}$, and any proof of the form $b \bullet \textbf{1}$ also represents the Boolean $\textbf{1}$.

Definition 5.6 (Classical measurement operator). If $n$ is a nonzero natural number, we define the measurement operator $\pi '_{\!\!n}$ , as the proof

\begin{equation*} \pi '_{\!\!n} = \lambda x.\delta _{\odot }(x,y.{\delta ^{{\mathcal Q}_{n-1}}(y,\textbf { 0})},z.{\delta ^{{\mathcal Q}_{n-1}}(z,\textbf { 1})}) \end{equation*}

where $\delta ^{{\mathcal Q}_{n}}$ is defined as

\begin{equation*} \delta ^{{\mathcal Q}_{n}}(x,\textbf { b}) = \left \{ \begin {array}{ll} \delta _{{\mathfrak {1}}}(x,\textbf { b}) & \textrm {if }n=0\\ \delta _{\odot }(x,y.{\delta ^{{\mathcal Q}_{n-1}}(y,\textbf { b})},z.{\delta ^{{\mathcal Q}_{n-1}}(z,\textbf { b})}) & \textrm {if }n\gt 0 \end {array} \right . \end{equation*}
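Operationally, $\delta ^{{\mathcal Q}_{n}}(t,\textbf{b})$ walks down the state $t$, choosing a branch with the probabilities of Definition 5.2, and returns $\textbf{b}$ scaled by the amplitude found at the chosen leaf. The distribution of these scalars can be computed as follows (a sketch, with `Qubit` and `normSq` as in the earlier blocks):

```haskell
import Data.Complex

data Qubit = Star (Complex Double) | Pair Qubit Qubit
  deriving Show  -- as in the earlier sketches

normSq :: Qubit -> Double
normSq (Star a)   = realPart (a * conjugate a)
normSq (Pair u v) = normSq u + normSq v

-- Distribution of the scalar that delta^{Q_n}(t, b) attaches to b:
-- each leaf amplitude a of t is reached with probability |a|^2 / ||t||^2
-- (with the 1/2, 1/2 convention on zero-norm branches).
leafDist :: Qubit -> [(Complex Double, Double)]
leafDist (Star a)   = [(a, 1)]
leafDist (Pair u v) =
  [ (a, p1 * p) | (a, p) <- leafDist u ] ++
  [ (a, p2 * p) | (a, p) <- leafDist v ]
  where s        = normSq u + normSq v
        (p1, p2) = if s == 0 then (1/2, 1/2)
                             else (normSq u / s, normSq v / s)

-- e.g. leafDist (Pair (Star 0.6) (Star 0.8))
--   == [(0.6 :+ 0.0, 0.36), (0.8 :+ 0.0, 0.64)], up to rounding.
```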

Remark 5.7. If $t$ is a closed irreducible proof of ${\mathcal Q}_{n}$ of the form $[u_1,u_2]$ , such that $\|t\|^2 = \|u_1\|^2 + \|u_2\|^2 \neq 0$ , expressing the state of an $n$ -qubit, then the proof $\pi '_{\!\!n}\,t$ of the proposition $\mathcal B$ reduces, with the same probabilities as above, to $a \bullet \textbf{ 0}$ or $b \bullet \textbf{ 1}$ . The scalars $a$ and $b$ may vary due to the probabilistic nature of the operator $\delta _{\odot }$ , but they are $0$ only with probability $0$ .

Example 5.8. The operator

\begin{equation*}\pi'_{\!\!1} = \lambda x. \delta _{\odot }(x,y.{\delta _{{\mathfrak {1}}}(y,\textbf{0})},z.{\delta _{{\mathfrak {1}}}(z, \textbf{1})}) \end{equation*}

applied to the proof $[a.\star,b.\star ]$ yields $a\bullet \textbf{0}$ or $b\bullet \textbf{1}$ with probability $\frac{|a|^2}{|a|^2+|b|^2}$ or $\frac{|b|^2}{|a|^2+|b|^2}$ respectively.

The operator

\begin{equation*}\pi'_{\!\!2} = \lambda x. \delta _{\odot }(x,y.{\delta ^{{\mathcal Q}_1}(y,\textbf{0})},z.{\delta ^{{\mathcal Q}_1}(z, \textbf{1})})\end{equation*}

applied to $[[a.\star,b.\star ],[c.\star,d.\star ]]$ reduces to $\delta ^{{\mathcal Q}_1}([a.\star,b.\star ],\textbf{0})$ or to $\delta ^{{\mathcal Q}_1}([c.\star,d.\star ],\textbf{1})$ with probabilities $\frac{|a|^2 + |b|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$ and $\frac{|c|^2 + |d|^2}{|a|^2 + |b|^2 + |c|^2 + |d|^2}$ .

Then the first proof

\begin{equation*}\delta ^{{\mathcal Q}_1}([a.\star,b.\star ],\textbf{0})\end{equation*}

always reduces to $\textbf{0}$ modulo some scalar multiplication, precisely to $a \bullet \textbf{0}$ with probability $\frac{|a|^2}{|a|^2 + |b|^2}$ and to $b \bullet \textbf{0}$ with probability $\frac{|b|^2}{|a|^2 + |b|^2}$. In the same way, the second always reduces to $\textbf{1}$ modulo some scalar multiplication.

5.3 Deutsch’s algorithm

In Díaz-Caro and Dowek (2023), we have given a proof that expresses Deutsch's algorithm. We adapt it here to the linear case.

As above, let $\textbf{ 0} = \textit{inl}(1.\star )$ and $\textbf{ 1} = \textit{inr}(1.\star )$ be closed irreducible proofs of ${\mathcal B} ={\mathfrak{1}} \oplus{\mathfrak{1}}$ .

For each proposition $A$ and each pair of closed proofs $u$ and $v$ of $A$, we have a test operator, that is, a proof of ${\mathcal B} \multimap A$

\begin{equation*}\textit {if}_{u,v} = \lambda x. \delta _{\oplus }(x,w_1.\delta _{{\mathfrak {1}}}(w_1,u),w_2.\delta _{{\mathfrak {1}}}(w_2,v))\end{equation*}

Then $\textit{if}_{u,v}\,\textbf{ 0} \longrightarrow 1 \bullet u$ and $\textit{if}_{u,v}\,\textbf{ 1} \longrightarrow 1 \bullet v$ . Deutsch’s algorithm is the proof of $({\mathcal B} \multimap{\mathcal B}) \multimap{\mathcal B}$

\begin{equation*}\textit {Deutsch} = \lambda f.\pi'_{\!\!2} ((\overline {H\otimes I})\,(U\,f\,\overline {{|{+-}\rangle }}))\end{equation*}

where $\overline{H \otimes I}$ is the proof of ${\mathcal Q}_2 \multimap Q_2$ corresponding to the matrix

\begin{equation*} \frac 1{\sqrt 2}\left ( \begin {smallmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end {smallmatrix}\right ) \end{equation*}

as in Theorem 3.10, except that the conjunction $\&$ is replaced with the connective $\odot$. $U$ is the proof of $({\mathcal B} \multimap{\mathcal B}) \multimap{\mathcal Q}_2 \multimap{\mathcal Q}_2$

\begin{align*} U = \lambda f. \lambda t. &\delta _{\odot }^1 ( t,x. ( \delta _{\odot }^1(x,z_0.M_0\,z_0) \boldsymbol{+} \delta _{\odot }^2(x,z_1.M_1\,z_1) ) ) \\ \boldsymbol{+}\,& \delta _{\odot }^2 ( t, y. ( \delta _{\odot }^1(y,z_2.\,M_2\,z_2) \boldsymbol{+} \delta _{\odot }^2(y,z_3.\,M_3\,z_3) ) ) \end{align*}

where $M_0$ , $M_1$ , $M_2$ , and $M_3$ are the proofs of ${\mathfrak{1}} \multimap{\mathcal Q}_2$

\begin{equation*}M_0 = \lambda s. \delta _{{\mathfrak {1}}}(s, \textit {if}_{[[1.\star,0.\star ],[0.\star,0.\star ]], [[0.\star,1.\star ],[0.\star,0.\star ]]} \,(f\,\textbf { 0}))\end{equation*}
\begin{equation*}M_1 = \lambda s. \delta _{{\mathfrak {1}}}(s, \textit {if}_{[[0.\star,1.\star ],[0.\star,0.\star ]], [[1.\star,0.\star ],[0.\star,0.\star ]]} \,(f\,\textbf { 0}))\end{equation*}
\begin{equation*}M_2 = \lambda s. \delta _{{\mathfrak {1}}}(s, \textit {if}_{[[0.\star,0.\star ],[1.\star,0.\star ]], [[0.\star,0.\star ],[0.\star,1.\star ]]} \,(f\,\textbf { 1}))\end{equation*}
\begin{equation*}M_3 = \lambda s. \delta _{{\mathfrak {1}}}(s, \textit {if}_{[[0.\star,0.\star ],[0.\star,1.\star ]], [[0.\star,0.\star ],[1.\star,0.\star ]]} \,(f\,\textbf { 1}))\end{equation*}

and $\overline{{|{+-}\rangle }}$ is the proof of ${\mathcal Q}_2$

\begin{equation*}\overline {{|{+-}\rangle }} = [[\frac {1}{2}.\star,\frac {-1}{2}.\star ], [\frac {1}{2}.\star,\frac {-1}{2}.\star ]]\end{equation*}

Let $f$ be a closed proof of ${\mathcal B} \multimap{\mathcal B}$. If $f$ expresses a constant function, we have $\textit{Deutsch}\ f \longrightarrow ^* a \bullet \textbf{ 0}$, for some scalar $a$, while if $f$ expresses a nonconstant function, $\textit{Deutsch}\ f \longrightarrow ^* a \bullet \textbf{ 1}$ for some scalar $a$.
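As a sanity check, this behavior can be reproduced numerically outside the calculus. The following Haskell sketch (all names are ours; it is ordinary linear algebra over ${\mathbb C}^4$ in the basis $|00\rangle, |01\rangle, |10\rangle, |11\rangle$, not the calculus itself) applies $U_f$ and $\overline{H \otimes I}$ to $\overline{|{+-}\rangle}$ and reads off the probabilities of measuring the first qubit as $0$ or $1$:

```haskell
import Data.Complex

type Vec = [Complex Double]    -- basis order |00>, |01>, |10>, |11>
type Mat = [[Complex Double]]

appMat :: Mat -> Vec -> Vec
appMat m v = [ sum (zipWith (*) row v) | row <- m ]

-- The matrix of H (x) I given in the text.
hI :: Mat
hI = map (map (/ sqrt 2))
     [ [1, 0,  1,  0]
     , [0, 1,  0,  1]
     , [1, 0, -1,  0]
     , [0, 1,  0, -1] ]

-- U f maps |x,y> to |x, y XOR f x>: swap each block's two amplitudes
-- when f flips the second qubit.
applyUf :: (Bool -> Bool) -> Vec -> Vec
applyUf f [a00, a01, a10, a11] = [b00, b01, b10, b11]
  where (b00, b01) = if f False then (a01, a00) else (a00, a01)
        (b10, b11) = if f True  then (a11, a10) else (a10, a11)
applyUf _ v = v

-- Probabilities of measuring the first qubit as 0 or 1.
firstQubit :: Vec -> (Double, Double)
firstQubit [c00, c01, c10, c11] = (sq c00 + sq c01, sq c10 + sq c11)
  where sq z = realPart (z * conjugate z)
firstQubit _ = (0, 0)

deutsch :: (Bool -> Bool) -> (Double, Double)
deutsch f = firstQubit (appMat hI (applyUf f plusMinus))
  where plusMinus = map (/ 2) [1, -1, 1, -1]   -- the state |+->

main :: IO ()
main = mapM_ (print . deutsch) [const False, const True, id, not]
```

Running `main` prints probabilities equal to $(1,0)$ for the two constant functions and $(0,1)$ for the two balanced ones, up to floating-point rounding, matching the reductions of $\textit{Deutsch}\ f$ above.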

5.4 Toward unitarity

For future work, we may want to restrict the logic further so that functions are not only linear but also unitary. Unitarity, the property ensuring that the norm and the orthogonality of vectors are preserved, is a requirement for quantum gates. In the current version, these unitarity constraints are left as properties that must be proved for each program, rather than enforced by the type system.

Some methods to enforce unitarity in quantum controlled lambda calculi have been given in Altenkirch and Grattage (2005), Díaz-Caro et al. (2019b), and Díaz-Caro and Malherbe (2022a). QML (Altenkirch and Grattage 2005) gives a restricted notion of orthogonality between terms and constructs its superpositions only over orthogonal terms. Lambda- $S_1$ (Díaz-Caro et al. 2019b; Díaz-Caro and Malherbe 2022a) is the unitary restriction of Lambda- $S$ (Díaz-Caro et al. 2019a), using an extended notion of orthogonality. This kind of restriction could be added to the interstitial rules to achieve the same result.

6. Conclusion

The link between linear logic and linear algebra has been known for a long time, in the context of models of linear logic. We have shown in this paper that this link also exists at the syntactic level, provided we consider several proofs of $\mathfrak{1}$, one for each scalar, and add two interstitial rules, together with proof reduction rules that commute these interstitial rules with the logical rules, in order to reduce the commuting cuts.

We also understand better how propositional logic must be extended or restricted so that its proof language becomes a quantum programming language. A possible answer is in four parts: we need to extend it with interstitial rules, with scalars, and with the connective $\odot$, and we need to restrict it by making it linear. We obtain in this way the ${\mathcal L} \odot ^{\mathbb C}$-calculus, which addresses both the question of linearity, for instance avoiding cloning, and that of the information-erasure, nonreversibility, and nondeterminism of the measurement.

Future work also includes relating the algebraic notion of tensor product to the linear logic notion of tensor on vector propositions.

Acknowledgment

The authors want to thank Thomas Ehrhard, Jean-Baptiste Joinet, Jean-Pierre Jouannaud, Dale Miller, Alberto Naibo, Simon Perdrix, Alex Tsokurov, and Lionel Vaux for useful discussions.

Funding statement

Partially funded by PICT projects 2019-1272 and 2021-I-A-00090, PIP project 1220200100368CO, CSIC-UdelaR project 22520220100073UD, and the French-Argentinian IRP SINFIN.

References

Altenkirch, T. and Grattage, J. (2005). A functional quantum programming language. In: Proceedings of LICS 2005, IEEE, 249–258.
Arrighi, P. and Díaz-Caro, A. (2012). A System F accounting for scalars. Logical Methods in Computer Science 8 (1:11).
Arrighi, P., Díaz-Caro, A. and Valiron, B. (2017). The vectorial lambda-calculus. Information and Computation 254 (1) 105–139.
Arrighi, P. and Dowek, G. (2017). Lineal: a linear-algebraic lambda-calculus. Logical Methods in Computer Science 13 (1:8) 1–33.
Blute, R. (1996). Hopf algebras and linear logic. Mathematical Structures in Computer Science 6 (2) 189–217.
Chardonnet, K. (2023). Towards a Curry-Howard Correspondence for Quantum Computation. PhD thesis, Université Paris-Saclay.
Coecke, B. and Kissinger, A. (2017). Picturing Quantum Processes: A First Course in Quantum Theory and Diagrammatic Reasoning. Cambridge, UK: Cambridge University Press.
Díaz-Caro, A. and Dowek, G. (2023). A new connective in natural deduction, and its application to quantum computing. Theoretical Computer Science 957 113840.
Díaz-Caro, A., Dowek, G. and Rinaldi, J. (2019a). Two linearities for quantum computing in the lambda calculus. BioSystems 186 104012. Postproceedings of TPNC 2017.
Díaz-Caro, A., Guillermo, M., Miquel, A. and Valiron, B. (2019b). Realizability in the unitary sphere. In: Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2019), 1–13.
Díaz-Caro, A. and Malherbe, O. (2022a). Quantum control in the unitary sphere: lambda- ${\mathcal S}_1$ and its categorical model. Logical Methods in Computer Science 18 (3) 32.
Díaz-Caro, A. and Malherbe, O. (2022b). Semimodules and the (syntactically-)linear lambda calculus. Draft at arXiv:2205.02142.
Díaz-Caro, A. and Petit, B. (2012). Linearity in the non-deterministic call-by-value setting. In: Ong, L. and de Queiroz, R. (eds.) Proceedings of WoLLIC 2012, LNCS, vol. 7456, 216–231.
Ehrhard, T. (2002). On Köthe sequence spaces and linear logic. Mathematical Structures in Computer Science 12 (5) 579–623.
Girard, J.-Y. (1972). Interprétation fonctionnelle et élimination des coupures dans l'arithmétique d'ordre supérieur. PhD thesis, Université de Paris 7.
Girard, J.-Y. (1987). Linear logic. Theoretical Computer Science 50 (1) 1–102.
Girard, J.-Y. (1999). Coherent Banach spaces: a continuous denotational semantics. Theoretical Computer Science 227 (1-2) 275–297.
Mayr, R. and Nipkow, T. (1998). Higher-order rewrite systems and their confluence. Theoretical Computer Science 192 (1) 3–29.
Selinger, P. and Valiron, B. (2006). A lambda calculus for quantum computation with classical control. Mathematical Structures in Computer Science 16 (3) 527–552.
Vaux, L. (2009). The algebraic lambda calculus. Mathematical Structures in Computer Science 19 (5) 1029–1059.
Zorzi, M. (2016). On quantum lambda calculi: a foundational perspective. Mathematical Structures in Computer Science 26 (7) 1107–1195.