
A concrete model for a typed linear algebraic lambda calculus

Published online by Cambridge University Press:  21 November 2023

Alejandro Díaz-Caro*
Affiliation:
CONICET-Universidad de Buenos Aires, Instituto de Ciencias de la Computación (ICC), Buenos Aires, Argentina Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Buenos Aires, Argentina
Octavio Malherbe
Affiliation:
IMERL, Facultad de Ingeniería, Universidad de la República, Montevideo, Uruguay
Corresponding author: Alejandro Díaz-Caro; Email: [email protected]

Abstract

We give an adequate, concrete, categorical model for Lambda-${\mathcal S}$, which is a typed version of a linear-algebraic lambda calculus, extended with measurements. Lambda-${\mathcal S}$ is an extension of first-order lambda calculus unifying two approaches to non-cloning in quantum lambda-calculi: forbidding the duplication of variables and considering all lambda-terms as algebraic linear functions. The type system of Lambda-${\mathcal S}$ has a superposition constructor S such that a type A is considered as the base of a vector space, while SA is its span. Our model considers S as the composition of two functors in an adjunction relation between the category of sets and the category of vector spaces over $\mathbb C$. The right adjoint is a forgetful functor U, which is hidden in the language, and plays a central role in the computational reasoning.

Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

The non-cloning property of quantum computing has been treated in different ways in quantum programming languages. One way is to forbid duplication of variables with linear types (Abramsky, 1993; Girard, 1987), and hence, a program taking a quantum argument will not duplicate it, e.g., Altenkirch and Grattage (2005); Green et al. (2013); Pagani et al. (2014); Selinger and Valiron (2006); Zorzi (2016). Another way is to consider all lambda-terms as expressing linear functions, in what is known as linear-algebraic lambda-calculi, e.g., Arrighi and Díaz-Caro (2012); Arrighi et al. (2017); Arrighi and Dowek (2017); Díaz-Caro and Petit (2012). The first approach forbids a term $\lambda x.(x\otimes x)$ (for some convenient definition of $\otimes$ ), while the second approach allows it but makes its application to a superposition distribute over the summands, mimicking the way linear operations act on vectors in a vector space. However, adding a measurement operator to a calculus following the linear-algebraic approach requires also adding linear types: indeed, if $\pi$ represents a measurement operator, applying $\pi$ to a superposition should not distribute over the summands, but instead reduce probabilistically to one of the possible outcomes. Therefore, there must be functions taking superpositions and functions distributing over them. However, the functions taking a superposition have to be marked in some way, so as to ensure that they will not use their arguments more than once (i.e., to ensure linearity in the linear-logic sense).

Lineal, the first linear-algebraic lambda-calculus, is an untyped calculus introduced by Arrighi and Dowek (2017) to study the superposition of programs, with quantum computing as a goal. However, Lineal is not a quantum calculus in the sense that there is no construction allowing one to characterize which terms can be directly compiled into a quantum machine. Vectorial (Arrighi et al., 2017) has been the conclusion of a long path towards a typed Lineal (Arrighi and Díaz-Caro, 2012; Arrighi et al., 2017; Díaz-Caro and Petit, 2012). In Vectorial, the type system gives information on whether the final term can be considered or not as a quantum state (of norm 1). Nevertheless, it fails to establish whether typed programs can be considered quantum, in the sense of implementing unitary transformations and measurements – in any case, measurements are left out of the equation in these versions of typed Lineal.

The calculus Lambda- ${\mathcal S}$ is a fresh start, with a new type system not related to Vectorial. It is a first-order fragment of Lineal, extended with measurements. It was introduced by Díaz-Caro and Dowek (2017) and slightly modified later by Díaz-Caro et al. (2019a). Following this line, Díaz-Caro et al. (2019b) presented a calculus defined through realizability techniques, which validates this long line of research on Lineal as a quantum calculus, by proving that the terms typable with certain types coincide with implementations of unitary operators. In Díaz-Caro and Malherbe (2020), we gave a categorical model of Lambda- ${\mathcal S}$ without measurements. The object of the current paper is to set up the bases for a categorical model of Lambda- ${\mathcal S}$ in full (with measurements), by defining a concrete model with a categorical presentation, paving the way to an abstract construction in future research.

In linear logic, a type A without decoration represents a type of a term that cannot be duplicated, while ${!}A$ types duplicable terms. In Lambda- ${\mathcal S}$ instead, A are the types of the terms that cannot be superposed, while SA are the terms that can be superposed, and since superposition forbids duplication, A means that we can duplicate, while SA means that we cannot duplicate. So the S is not the same as the bang “ $!$ ,” but somehow the opposite, in the sense that we mark the fragile terms (those that cannot be duplicated). This can be explained by the fact that linear logic is focused on the possibility of duplication, while here we focus on the possibility of superposition, which implies the impossibility of duplication.

Díaz-Caro and Dowek (2017) gave a first denotational semantics for Lambda- ${\mathcal S}$ (in environment style) where the atomic type ${{\mathbb B}}$ is interpreted as the set of basis qubits, while $S{{\mathbb B}}$ is interpreted as the vector space they span; in general, a type A is interpreted as a basis, while SA is the vector space generated by such a basis. In this paper, we go beyond and give a categorical interpretation of Lambda- ${\mathcal S}$ where S is a functor of an adjunction between the category $\mathbf{Set}$ and the category $\mathbf{Vec}$ . Explicitly, when we evaluate S, we obtain formal finite linear combinations of elements of a set with complex numbers as coefficients. The other functor of the adjunction, U, allows us to forget the vectorial structure.

The main structural feature of our model is that it is expressive enough to describe the bridge between the quantum and the classical universes explicitly by controlling their interaction. This is achieved by providing a monoidal adjunction. In the literature, intuitionistic linear (as in linear-logic) models are obtained by a comonad determined by a monoidal adjunction $(S,m)\dashv(U,n)$ , i.e., the bang $!$ is interpreted by the comonad SU (see Benton, 1994). In a different way, a crucial ingredient of our model is to consider the monad US for the interpretation of S, determined by a similar monoidal adjunction. This implies that, on the one hand, we have tight control of the Cartesian structure of the model (i.e., duplication, etc.) and, on the other hand, the world of superpositions lives inside the classical world, i.e., it is determined externally by classical rules until we decide to explore it. This is given by the following composition of maps:

\[ US{{\mathbb B}} \times US{{\mathbb B}} \xrightarrow{n} U(S{{\mathbb B}} \otimes S{{\mathbb B}} )\xrightarrow{Um} US({{\mathbb B}}\times {{\mathbb B}})\]

that allows us to operate in a monoidal structure representing the quantum world and then to return to the Cartesian product.

This is different from linear logic, where the classical world lives inside the quantum world i.e. $({!}{{\mathbb B}})\otimes ({!}{{\mathbb B}})$ is a product inside a monoidal category.

Another source of inspiration for our model has been the work of Selinger (2007) and Abramsky and Coecke (2004), where they formalized the concept of scalars and inner product in a more abstract categorical setting, i.e., a category in which there is an abstract notion of a dagger functor. It is envisaged that this approach will provide the basis for an abstract model in future work.

The paper is structured as follows. In Section 2, we recall the definition of Lambda- ${\mathcal S}$ and give some examples, stating its main properties. Section 3 is divided into three subsections: first we define the categorical constructions needed to interpret the calculus, then we give the interpretation, and finally we prove such an interpretation to be adequate. We conclude in Section 4. An appendix with the full proofs follows the article.

2. The Calculus Lambda- ${\mathcal S}$

We give a slightly modified presentation of Lambda- ${\mathcal S}$ (Díaz-Caro et al., 2019a). In particular, instead of giving a probabilistic rewrite system, where $t\to_{{\mathtt{p_k}}}r_k$ means that t reduces with probability ${\mathtt{p_k}}$ to $r_k$ , we introduce the notation $t\longrightarrow \mathtt{\{p_1\}}r_1\parallel\dots\parallel \mathtt{\{p_n\}}r_n$ , where $\mathtt{\{p_1\}}r_1\parallel\dots\parallel \mathtt{\{p_n\}}r_n$ denotes a finite distribution. This way, the rewrite system is deterministic and the probabilities are internalized.

The syntax of terms and types is given in Figure 1. We write ${{\mathbb B}}^n$ for ${{\mathbb B}}\times\cdots\times{{\mathbb B}}$ n-times, with the convention that ${{\mathbb B}}^1={{\mathbb B}}$ , and may use an abbreviated notation for $\mathtt{\{p_1\}}t_1\parallel\cdots\parallel\mathtt{\{p_n\}}t_n$ . We use capital Latin letters ( $A,B,C,\dots$ ) for general types and the capital Greek letters $\Psi$ , $\Phi$ , $\Xi$ , and $\Upsilon$ for qubit types. ${\mathcal Q}$ is the set of qubit types, and ${\mathcal T}$ is the set of all the types ( ${\mathcal Q}\subsetneq{\mathcal T}$ ). We write ${\mathcal B}=\{{{\mathbb B}}^n\mid n\in\mathbb N\}\cup\{\Psi\Rightarrow A\mid \Psi\in{\mathcal Q},A\in{\mathcal T}\}$ , that is, the set of nonsuperposed types. In addition, ${\mathsf{Vars}}$ is the set of variables, ${\mathsf B}$ is the set of basis terms, ${\mathsf V}$ the set of values, $\Lambda$ the set of terms, and ${\mathsf D}$ the set of distributions on terms. We have ${\mathsf{Vars}}\subsetneq{\mathsf B}\subsetneq{\mathsf V}\subsetneq\Lambda\subsetneq{\mathsf D}$ , where the last inclusion is obtained by considering the constant function that associates probability 1 to any term. As customary, we may write x instead of $x^\Psi$ when the type is clear from the context. Notice that this language is in Church-style.

Figure 1: Syntax of types and terms of Lambda- ${\mathcal S}$ .

The terms are considered modulo associativity and commutativity of the syntactic symbol $+$ . On the other hand, the symbol $\parallel$ is used to represent a true distribution over terms, not as a syntactic symbol, and so it is not only associative and commutative: we also have that $\mathtt{\{p\}}t\parallel\mathtt{\{q\}}t$ is the same as $\mathtt{\{p+q\}}t$ , and $\mathtt{\{p\}}t\parallel\mathtt{\{0\}}r =\mathtt{\{p\}}t$ .
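As an illustration (not part of the calculus), the distribution equations above can be sketched in Python, representing a distribution $\mathtt{\{p_1\}}t_1\parallel\cdots\parallel\mathtt{\{p_n\}}t_n$ as a map from terms to probabilities; the function name `dist` is ours:

```python
def dist(*pairs):
    """Build a distribution from (probability, term) pairs.

    Parallel composition of branches with the same term adds their
    probabilities ({p}t ∥ {q}t = {p+q}t), and zero-probability branches
    are dropped ({p}t ∥ {0}r = {p}t).
    """
    d = {}
    for p, t in pairs:
        d[t] = d.get(t, 0.0) + p
    return {t: p for t, p in d.items() if p > 0}

# {1/2}|0> ∥ {1/2}|0> is the same distribution as {1}|0>
assert dist((0.5, "|0>"), (0.5, "|0>")) == {"|0>": 1.0}
# {1/2}|0> ∥ {0}|1> is the same distribution as {1/2}|0>
assert dist((0.5, "|0>"), (0.0, "|1>")) == {"|0>": 0.5}
```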

There is one atomic type ${{\mathbb B}}$ , for the basis qubits $|0\rangle$ and $|1\rangle$ , and three constructors: $\times$ , for pairs, $\Rightarrow$ , for first-order functions, and S for superpositions.

The syntax of terms contains:

  • The three terms for first-order lambda-calculus, namely, variables, abstractions, and applications.

  • Two basic terms $|0\rangle$ and $|1\rangle$ to represent qubits, and one test on them. We usually write ${t}?{r}\mathord{\cdot}{s}$ for $({}?{r}\mathord{\cdot}{s})t$ ; see Example 2.2 for a clarification of why we choose this presentation.

  • A product $\times$ to represent associative pairs (i.e., lists), with its destructors $\textit{head}$ and $\textit{tail}$ . We usually use ket notations for lists of basis qubits, and write $\prod_{i=1}^n t_i$ for $t_1\times\cdots\times t_n$ .

  • Constructors to write linear combinations of terms, namely $+$ (sum) and $.$ (scalar multiplication), and their destructor $\pi_j$ , measuring the first j qubits written as linear combinations of lists of qubits. Also, one null vector $\vec 0_{S{A}}$ for each type SA. We may write $-t$ for $-1{.}t$ . The symbol $+$ is taken to be associative and commutative (that is, our terms are expressed modulo AC (Arrighi and Dowek, 2017)); therefore, we may use the summation symbol $\sum$ , with the convention that $\sum_{i=1}^1t=t$ .

  • Two casting functions $\Uparrow_r$ and $\Uparrow_\ell$ allowing us to transform lists of superpositions into superpositions of lists (see Example 2.4).

The rewrite system depends on types. Indeed, $\lambda x{:}S\Psi.t$ follows a call-by-name strategy, while $\lambda x{:}{{\mathbb B}}.t$ , which can duplicate its argument, follows a call-by-base strategy (Assaf et al., 2014): not only must the argument be reduced first, but it will also distribute over linear combinations. Therefore, we give first the type system and then the rewrite system.

The typing relation is given in Figure 2. Recall that Lambda- ${\mathcal S}$ is a first-order calculus, so only qubit types are allowed to the left of arrows and in contexts. We write $S^mA$ for $SS\cdots SA$ , with $m\geq 1$ . Contexts, identified by the capital Greek letters $\Gamma$ , $\Delta$ , and $\Theta$ , are partial functions from ${\mathsf{Vars}}$ to ${\mathcal Q}$ . The contexts assigning only types of the form ${{\mathbb B}}^n$ are identified with the super-index ${{\mathbb B}}$ , e.g. $\Theta^{{\mathbb B}}$ . Whenever more than one context appears in a typing rule, their domains are considered pair-wise disjoint. Observe that all types are linear (as in linear-logic) except for basis types ${{\mathbb B}}^n$ , which can be weakened and contracted (expressed by the common contexts $\Theta^{{\mathbb B}}$ ). The particular form of rule $S_E$ allows us to type a measurement even if its argument has been typed with an arbitrary number k of S. We choose this presentation to avoid subtyping, which was present in the original presentation of Lambda- ${\mathcal S}$ (Díaz-Caro and Dowek, 2017).

Figure 2. Typing relation.

The rewrite relation is given in Figures 3 to 10. We write $t:A$ when there exists $\Gamma$ such that $\Gamma\vdash t:A$ , and $t\!\not\,: A$ if not.

Figure 3. Beta rules.

The two beta rules (Figure 3) are applied according to the type of the argument. If the abstraction expects an argument with a superposed type, then the reduction follows a call-by-name strategy (rule $({\mathsf{\beta_n}})$ ), while if the abstraction expects a basis type, the reduction is call-by-base (rule $({\mathsf{\beta_b}})$ ): it $\beta$ -reduces only when its argument is a basis term. However, typing rules also allow typing an abstraction expecting an argument with basis type, applied to a term with superposed type (see Example 2.1). In this case, the beta reduction cannot occur and, instead, the application must distribute first, using the rules from Figure 4: the linear distribution rules.

Figure 4. Linear distribution rules.

Example 2.1. The term $\lambda x{:}{{\mathbb B}}.x\times x$ does not represent a cloning machine, but a CNOT with an ancillary qubit. Indeed,

The type derivation is the following:

The rules from Figure 4 also say how superposed first-order functions reduce, which can be useful, for example, to describe an operator as the superposition of simpler operators; cf. Arrighi and Dowek (2017) for more interesting examples.
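The call-by-base behavior of Example 2.1 can be sketched as follows; the encoding of superpositions as lists of (coefficient, basis-term) pairs and the names `apply_call_by_base` and `pair_with_self` are our own illustrative assumptions, not the paper's formalism:

```python
def apply_call_by_base(f, superposition):
    """Linear distribution: the abstraction distributes over the linear
    combination and β-reduces on each basis term separately."""
    return [(coeff, f(b)) for coeff, b in superposition]

# λx:B. x×x applied to a *basis* term duplicates it; on a superposition it
# distributes first, so no cloning of the superposed state happens.
pair_with_self = lambda b: (b, b)

state = [(1, "|0>"), (1, "|1>")]  # the superposition |0> + |1>

# The result is |0>×|0> + |1>×|1> (a CNOT-like entangled pair),
# not the square of the input superposition.
result = apply_call_by_base(pair_with_self, state)
assert result == [(1, ("|0>", "|0>")), (1, ("|1>", "|1>"))]
```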

Figure 5 gives the two rules for the conditional construction. Together with the linear distribution rules (Figure 4), these rules implement the quantum-if (Altenkirch and Grattage, 2005), as shown in the following example.

Figure 5. Rules of the conditional construction.

Example 2.2. The term ${}?{r}\mathord{\cdot}{s}$ is meant to test whether the condition is $|1\rangle$ or $|0\rangle$ . However, defining it as a function allows us to use the linear distribution rules from Figure 4, implementing the quantum-if:

This construction allows us to encode any quantum gate.

Figure 6 gives the rules for lists, ( ${\mathsf{head}}$ ) and ( ${\mathsf{tail}}$ ), which can only act on basis qubits; otherwise, we would be able, for example, to extract a qubit from an entangled pair of qubits.

Figure 6. Rules for lists.

Figure 7 deals with the vector space structure, implementing a directed version of the vector space axioms. The direction is chosen in order to yield a canonical form (Arrighi and Dowek, 2017). The rules are self-explanatory. There is a subtlety, however, in the rule ( ${\mathsf{zero_\alpha}}$ ). A simpler rule, for example "If $t:A$ then $0.t\longrightarrow \vec 0_{S{A}}$ ," would break confluence with a critical pair. To solve the critical pair, Díaz-Caro et al. (2019a) added a new definition " $\min A$ ," which leaves the type A with a minimum amount of S in head position (one, if there is at least one, or zero, otherwise). This solution makes sense in such a presentation of Lambda- ${\mathcal S}$ , since the interpretation of the type SSA coincides with the interpretation of SA (both are the vector space generated by the span over A). However, in our categorical interpretation these two types are not interpreted in the same way, and so, our rule $({\mathsf{zero_\alpha}})$ sends $0.{\vec 0_{S{A}}}$ to ${\vec 0_{S{A}}}$ directly. Similarly, all the rules ending in ${\vec 0_{S{A}}}$ have been modified from their original presentation in the same way, namely: $({\mathsf{zero_\alpha}})$ , $({\mathsf{zero}})$ , $({\mathsf{lin^0_r}})$ , and $({\mathsf{lin^0_\ell}})$ .
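The directed vector-space rules can be read as a normalization procedure towards the canonical form; the sketch below is our own encoding (a linear combination as a list of (scalar, basis-term) pairs), not the paper's rewrite system:

```python
def canonical(comb):
    """Normalize a linear combination, mimicking the directed rules:
    like terms are collected (α.t + β.t → (α+β).t) and zero
    coefficients vanish (0.t → 0⃗), yielding a canonical dict form."""
    out = {}
    for alpha, t in comb:
        out[t] = out.get(t, 0) + alpha   # collect α.t + β.t into (α+β).t
    return {t: a for t, a in out.items() if a != 0}  # drop 0.t

# 2.|0> + 3.|0> + 1.|1> + (-1).|1>  normalizes to  5.|0>
assert canonical([(2, "|0>"), (3, "|0>"), (1, "|1>"), (-1, "|1>")]) == {"|0>": 5}
```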

Figure 7. Rules implementing the vector space axioms.

Example 2.3.

Recall that the symbol $+$ is associative and commutative.

Figure 8 gives the rules implementing the castings. The idea is that $\times$ does not distribute with respect to $+$ , unless a casting allows such a distribution. This way, the types $S{{\mathbb B}}\times {{\mathbb B}} $ and $S({{\mathbb B}}\times{{\mathbb B}})$ are different. Indeed, a separable pair whose first component is a superposition has the first type but not the second, while a superposition of pairs has the second type but not the first. The first type gives us the information that the state is separable, while the second type does not. We can choose to take the first state as a pair of qubits, forgetting the separability information, by casting its type, in the same way as in certain programming languages an integer can be cast to a float (thus forgetting the information that it was indeed an integer and not just any float).
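The effect of the cast $\Uparrow_r$ can be sketched under the same illustrative list encoding of superpositions used above (the name `cast_up_r` is ours): a pair whose first component is a superposition, $(\alpha.u+\beta.v)\times w$ , is rewritten into the superposition of pairs $\alpha.(u\times w)+\beta.(v\times w)$ , moving from type $S{{\mathbb B}}\times{{\mathbb B}}$ to $S({{\mathbb B}}\times{{\mathbb B}})$ :

```python
def cast_up_r(superposed_first, w):
    """⇑r: distribute the pair constructor over the left superposition,
    turning a pair-of-superposition into a superposition-of-pairs."""
    return [(coeff, (u, w)) for coeff, u in superposed_first]

# (|0> + |1>) × |1>  rewrites under ⇑r to  |0>×|1> + |1>×|1>
assert cast_up_r([(1, "|0>"), (1, "|1>")], "|1>") == \
    [(1, ("|0>", "|1>")), (1, ("|1>", "|1>"))]
```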

Figure 8. Rules for castings $\Uparrow_r$ and $\Uparrow_\ell$ .

Example 2.4. A pair whose first component is a superposition is the encoding of a qubit state. However, while the encoded qubit is equal to the corresponding superposition of pairs, the term will not rewrite to the encoding of the latter unless it is preceded by a casting $\Uparrow_r$ :

Notice that the original term has type $S{{\mathbb B}} \times{{\mathbb B}}$ , highlighting the fact that the second qubit is a basis qubit, i.e., duplicable, while the cast term has type $S({{\mathbb B}}\times{{\mathbb B}})$ , showing that the full term is a superposition from which no information can be extracted, and hence, non-duplicable.

Figure 9 gives the rule $({\mathsf{proj}})$ for the projective measurement with respect to the computational basis. It acts only on superpositions of terms in normal form; however, these terms do not necessarily represent a norm-1 vector, so the measurement must perform a division by the norm of the vector prior to measuring. In case the norm of the term is 0, an error is raised. In the original version of Lambda- ${\mathcal S}$ , such an error is left as a term that does not reduce. In this paper, however, we add a new rule $({\mathsf{proj}}{_{\vec 0}})$ for the projective measurement over the null vector, in order to simplify the model (otherwise, we would have been forced to add Moggi's exception monad (Moggi, 1988) to the model). Since the model we present in this paper is already complex, we prefer to add a rule sending the error to a fixed value and focus on the novel constructions.

Figure 9. Rules for the projection.

In rule ( ${\mathsf{proj}}$ ), $j\leq n$ , and we use the following notations:

This way, the result is the normalized kth projection of the term.

Example 2.5. Let us measure the first two qubits of a three-qubit superposition. So, in rule $({\mathsf{proj}})$ take $j=2$ and $n=3$ . Say the term to measure is $|000\rangle+2.|110\rangle+3.|001\rangle+|111\rangle$ . So, we have $m=4$ , and

All in all, the reduction is as follows:

Notice that we have

\[ \vdash\pi_2(|000\rangle+2.|110\rangle+3.|001\rangle+|111\rangle):{{\mathbb B}}^2\times S{{\mathbb B}} \]
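The rule $({\mathsf{proj}})$ can be mimicked numerically; the sketch below is our own reading of the rule (the function `measure` and the dict encoding of kets are assumptions), and it reproduces the probabilities of Example 2.5:

```python
import math

def measure(amplitudes, j):
    """Measure the first j qubits of a (not necessarily normalized)
    superposition, given as a dict from n-bit strings to amplitudes.
    Returns a distribution: outcome -> (probability, residual state),
    with the residual (n-j)-qubit state renormalized."""
    total = sum(abs(a) ** 2 for a in amplitudes.values())
    parts = {}
    for bits, a in amplitudes.items():
        k, rest = bits[:j], bits[j:]
        parts.setdefault(k, {})
        parts[k][rest] = parts[k].get(rest, 0) + a
    dist = {}
    for k, part in parts.items():
        norm2 = sum(abs(a) ** 2 for a in part.values())
        dist[k] = (norm2 / total,
                   {r: a / math.sqrt(norm2) for r, a in part.items()})
    return dist

# Example 2.5: |000> + 2.|110> + 3.|001> + |111>, measuring j = 2 qubits
d = measure({"000": 1, "110": 2, "001": 3, "111": 1}, 2)
assert abs(d["00"][0] - 2 / 3) < 1e-9   # outcome 00 with probability 2/3
assert abs(d["11"][0] - 1 / 3) < 1e-9   # outcome 11 with probability 1/3
```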

Finally, Figure 10 gives the contextual rules implementing the call-by-value and call-by-name weak strategies (weak in the sense that there is no reduction under lambda).

Figure 10. Contextual rules (notice that, in particular, there is no reduction under lambda).

Example 2.6. A Hadamard gate can be implemented with the conditional construction, mapping $|0\rangle$ to $\frac{1}{\sqrt 2}.(|0\rangle+|1\rangle)$ and $|1\rangle$ to $\frac{1}{\sqrt 2}.(|0\rangle-|1\rangle)$ .

Therefore, $H:{{\mathbb B}}\Rightarrow S{{\mathbb B}}$ , and applying H to each basis qubit reduces to the corresponding superposition.
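As a hedged illustration of Example 2.6, a Hadamard gate can be simulated by a table of its action on the basis kets, extended linearly as in the rules of Figure 4; the dict encoding and the name `apply_linear` are our own:

```python
import math

s = 1 / math.sqrt(2)
H = {"|0>": {"|0>": s, "|1>": s},    # H|0> = 1/√2.(|0> + |1>)
     "|1>": {"|0>": s, "|1>": -s}}   # H|1> = 1/√2.(|0> - |1>)

def apply_linear(gate, state):
    """Distribute the gate over the superposition (linear distribution)
    and collect like kets, dropping numerically zero amplitudes."""
    out = {}
    for ket, amp in state.items():
        for k2, a2 in gate[ket].items():
            out[k2] = out.get(k2, 0) + amp * a2
    return {k: a for k, a in out.items() if abs(a) > 1e-12}

# H is self-inverse: applying it twice returns the basis state
res = apply_linear(H, apply_linear(H, {"|0>": 1}))
assert set(res) == {"|0>"} and abs(res["|0>"] - 1) < 1e-9
```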

Correctness has been established in previous works for slightly different versions of Lambda- ${\mathcal S}$ , except for confluence, which has only been proved for Lineal (Arrighi and Dowek, 2017). Lineal can be seen as an untyped fragment of Lambda- ${\mathcal S}$ without several constructions (in particular, without $\pi_j$ ). The proof of confluence for Lambda- ${\mathcal S}$ is delayed to future work, using the development of probabilistic confluence by Díaz-Caro and Martínez (2018). The proofs of Subject Reduction and Strong Normalization are straightforward modifications of the proofs for the different presentations of Lambda- ${\mathcal S}$ .

Theorem 2.7 (Confluence of Lineal, Arrighi and Dowek (2017), Thm. 7.25). Lineal, an untyped fragment of Lambda- ${\mathcal S}$ , is confluent.

Theorem 2.8 (Subject reduction on closed terms, Díaz-Caro et al. (2019a), Thm. 5.12). For any closed terms t and u and type A, if $t\longrightarrow u$ and $\vdash t:A$ , then $\vdash u:A$ .

Theorem 2.9 (Strong normalization, Díaz-Caro et al. (2019a), Thm. 6.10). If $\vdash t:A$ , then t is strongly normalizing, that is, there is no infinite rewrite sequence starting from t.

Theorem 2.10 (Progress). If $\vdash t:A$ and t does not reduce, then t is a value.

Proof. By induction on t.

  • If t is a value, then we are done.

  • Let $t=rs$ , then $\vdash r:S(\Psi\Rightarrow C)$ . So, by the induction hypothesis, r is a value. Therefore, by its type, r is either a lambda term or a superposition of them, and so rs reduces, which is absurd.

  • Let $t=r+s$ , then by the induction hypothesis both r and s are values, and so $r+s$ is a value.

  • Let $t=\pi_jr$ , then by the induction hypothesis r is a value, and since t is typed, $\vdash r:S{{\mathbb B}}^n$ . Therefore, the only possible r are superpositions of kets, and so, t reduces, which is absurd.

  • Let $t=\alpha.r$ , then by the induction hypothesis r is a value, and so t is a value.

  • Let $t=r\times s$ , then by the induction hypothesis both r and s are values, and so t is a value.

  • Let $t=\textit{head}\ r$ , then by the induction hypothesis r is a value, and since t is typed, $\vdash r:{{\mathbb B}}^n$ . Therefore, the only possible r are products of kets, and so t reduces, which is absurd.

  • Let $t=\textit{tail}\ r$ . Analogous to previous case.

  • Let $t=\Uparrow_r r$ , then, by the induction hypothesis, r is a value. Since t is typed, $\vdash r:S(S\Psi\times\Phi)$ . Therefore, the cases for r are:

    - $r=x$ . Absurd, since r is closed.

    - $r=\lambda x:\Theta.r'$ . Absurd since $\vdash r:S(S\Psi\times\Phi)$ .

    - $r=|0\rangle$ . Absurd since $\vdash r:S(S\Psi\times\Phi)$ .

    - $r=|1\rangle$ . Absurd since $\vdash r:S(S\Psi\times\Phi)$ .

    - $r=v_1+v_2$ , then t reduces by rule $({\mathsf{dist^+_\Uparrow}})$ , which is absurd.

    - $r={\vec 0_{S{(S\Psi\times\Phi)}}}$ , then t reduces by rule $({\mathsf{dist^0_{\Uparrow_r}}})$ or $({\mathsf{neut^\Uparrow_{0r}}})$ , which is absurd.

    - $r=\alpha.v$ , then t reduces by rule $({\mathsf{dist^\alpha_\Uparrow}})$ , which is absurd.

    - $r={}?{s_1}\mathord{\cdot}{s_2}$ . Absurd since $\vdash r:S(S\Psi\times\Phi)$ .

    - $r=v_1\times\cdots\times v_n$ , with $v_1$ not a pair, then the possible $v_1$ are:

      * $v_1\in{\mathsf B}$ , then t reduces by rule $({\mathsf{neut^\Uparrow_r}})$ , which is absurd.

      * $v_1=v'_1+v'_2$ , then t reduces by rule $({\mathsf{dist^+_r}})$ , which is absurd.

      * $v_1={\vec 0_{S{(S\Psi\times\Phi)}}}$ , then t reduces by rule $({\mathsf{dist^0_r}})$ , which is absurd.

      * $v_1=\alpha.v$ , then t reduces by rule $({\mathsf{dist^\alpha_r}})$ , which is absurd.

  • Let $t=\Uparrow_\ell r$ . Analogous to the previous case.

3. Denotational Semantics

Even though the semantics of this article concerns particular categories, i.e., the category of sets and the category of vector spaces, from the start our approach uses theory and tools from category theory in an abstract way. The idea is that the concrete situation exposed in this article will pave the way to a more abstract formulation, and that is why we develop the constructions in as abstract and general a way as possible. A more general treatment, using a monoidal adjunction between a Cartesian closed category and a monoidal category with some extra conditions, remains a topic for future work. A first result in such direction has been published recently (Díaz-Caro and Malherbe, 2020), albeit for a simplified version of Lambda- ${\mathcal S}$ without measurements.

3.1 Categorical constructions

The concrete categorical model for Lambda- ${\mathcal S}$ will be given using the following constructions:

  • A monoidal adjunction

where

    - $\mathbf{Set}$ is the category of sets with 1 as a terminal object.

    - $\mathbf{Vec}$ is the category of vector spaces over $\mathbb C$ , in which $I=\mathbb C$ .

    - S is the functor such that for each set A, SA is the vector space whose vectors are the formal finite linear combinations of the elements of A with coefficients in $\mathbb C$ , and given a function $f:A\to B$ we define $Sf:SA\to SB$ by sending each element of A to its image under f and extending linearly.

    - U is the forgetful functor such that for each vector space V, UV is the underlying set of vectors in V, and for each linear map f, Uf is just f as a function, not taking into account its linearity.

    - m is a natural isomorphism defined by

    \[ m_{AB}: SA\otimes SB\to S(A\times B) \qquad\qquad (\sum_{a\in A}\alpha_a a)\otimes (\sum_{b\in B}\beta_b b) \mapsto \sum_{(a,b)\in A\times B}\alpha_a\beta_b(a,b) \]

    - n is a natural transformation defined by

    \[ n_{AB}: UV\times UW \to U(V\otimes W) \qquad\qquad (v,w) \mapsto v\otimes w \]

  • $\mathbf{Vec}^\dagger$ is a subcategory of $\mathbf{Vec}$ , where every morphism $f:V\to W$ has an associated morphism $f^\dagger:W\to V$ , called the dagger of f, such that for all $f:V\to W$ and $g:W\to U$ we have

    \[ \mathsf{Id}^\dagger_V =\mathsf{Id}_V\qquad\qquad (g\circ f)^\dagger =f^\dagger\circ g^\dagger\qquad\qquad f^{\dagger\dagger} = f \]

    Notice that $\mathbf{Vec}^\dagger$ is a subcategory of $\mathbf{FinVec}$ , the category of finite-dimensional vector spaces over $\mathbb C$ .

  • $\mathbf{Set}_D$ is the Kleisli category over $\mathbf{Set}$ of the following monoidal monad, called the distribution monad (Giry, 1982; Moggi, 1988), $(D,\hat\eta,\hat\mu,\hat m_{AB},\hat m_1)$ :

    \[ D:\mathbf{Set}\to\mathbf{Set} \qquad\qquad DA=\left\{\sum_{i=1}^n p_i\chi_{a_i}\mid \sum_{i=1}^n p_i=1, a_i\in A, n\in\mathbb N\right\} \]

    where $\chi_a$ is the characteristic function of a, and $\hat\eta$ , $\hat\mu$ , $\hat m_{AB}$ , and $\hat m_1$ are defined as follows:

    \begin{align*} \begin{array}{r@{\qquad\qquad}l} \hat\eta:A \to DA & a\mapsto 1\chi_a \\ \hat\mu:DDA \to DA & \sum_{i=1}^n p_i\chi_{(\sum_{j=1}^{m_i}q_{ij}\chi_{a_{ij}})} \mapsto\sum_{i=1}^n\sum_{j=1}^{m_i} p_iq_{ij}\chi_{a_{ij}} \\ \hat m_{AB}:DA\times DB\to D(A\times B) & \left(\sum_{i=1}^np_i\chi_{a_i} , \sum_{j=1}^mq_j\chi_{b_j}\right) \mapsto \sum_{i=1}^n\sum_{j=1}^m p_iq_j(\chi_{a_i},\chi_{b_j}) \\ \hat m_1:1 \to D1 & \ast \mapsto 1\chi_\ast \end{array} \end{align*}
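The action of S on maps and the monoidal isomorphism m admit a direct concrete sketch; representing an element of SA as a finitely supported dict of coefficients, and the names `S_on_map` and `m`, are our own encoding choices:

```python
def S_on_map(f, v):
    """Sf : SA → SB, sending Σ α_a·a to Σ α_a·f(a), collecting like images."""
    out = {}
    for a, coeff in v.items():
        out[f(a)] = out.get(f(a), 0) + coeff
    return out

def m(v, w):
    """m_AB : SA ⊗ SB → S(A×B),
    (Σ α_a·a) ⊗ (Σ β_b·b) ↦ Σ α_a·β_b·(a, b)."""
    return {(a, b): va * wb for a, va in v.items() for b, wb in w.items()}

neg = lambda b: 1 - b                       # a function on the set {0, 1}
assert S_on_map(neg, {0: 2, 1: 3}) == {1: 2, 0: 3}   # pushed forward linearly
assert m({0: 2}, {1: 5}) == {(0, 1): 10}             # coefficients multiply
```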

Remarks 3.1.

  • There exists an object ${{\mathbb B}}$ and maps $i_1$ , $i_2$ in $\mathbf{Set}$ such that for every $t:1\longrightarrow A$ and $r:1\longrightarrow A$ , there exists a unique map [t,r] making the following diagram commute:

This object ${{\mathbb B}}$ is the Boolean set and such a map will allow us to interpret the if construction (Definition 3.4).

  • For every $A\in|\mathbf{Set}|$ , $\mathbf{Vec}(I,SA)$ is an abelian group with the sum defined point-wise. Therefore, there exists a map $+:USA\times USA\rightarrow USA$ in $\mathbf{Set}$ , given by $(a,b)\mapsto a+b$ using the underlying sum from SA.

  • To have an adjunction means that each function $g:A\to UV$ extends to a unique linear transformation $f:SA\to V$ , given explicitly by $f\left(\sum_i \alpha_i x_i\right)=\sum_i\alpha_i g(x_i)$ ; that is, it sends formal linear combinations in SA to actual linear combinations in V (see Mac Lane, 1998, for details).

  • $\mathbf{Set}$ is a Cartesian closed category where $\eta^A$ is the unit and $\varepsilon^A$ is the counit of $-\times A\dashv[A,-]$ , from which we can define the curryfication ( $\mathsf{curry}$ ) and un-curryfication ( $\mathsf{uncurry}$ ) of any map.

  • The defined adjunction between $\mathbf{Set}$ and $\mathbf{Vec}$ gives rise to a monad $(T,\eta,\mu)$ in the category $\mathbf{Set}$ , where $T=US$ , $\eta:\mathsf{Id}\to T$ is the unit of the adjunction, and using the counit $\varepsilon$ , we obtain $\mu=U\varepsilon S:TT\to T$ , satisfying the unity and associativity laws (see Mac Lane, 1998).
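The universal property of the adjunction in the remarks above can be illustrated concretely; the encoding of vectors of V as dicts over a chosen basis, and the particular map g below, are hypothetical:

```python
def extend_linearly(g, formal_comb):
    """The unique linear extension f : SA → V of g : A → UV, applied to a
    formal combination Σ α_i·x_i (a dict x_i -> α_i):
    f(Σ α_i·x_i) = Σ α_i·g(x_i)."""
    out = {}
    for x, alpha in formal_comb.items():
        for basis, coord in g(x).items():
            out[basis] = out.get(basis, 0) + alpha * coord
    return out

# A hypothetical g : A → UV with A = ℤ and V spanned by {e0, e1}
g = lambda a: {"e0": a, "e1": 1}

# f(2.x₃ + 1.x₄) = 2.g(3) + 1.g(4) = (10)e0 + (3)e1
assert extend_linearly(g, {3: 2, 4: 1}) == {"e0": 10, "e1": 3}
```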

3.2 Interpretation

Definition 3.2. Types are interpreted in the category $\mathbf{Set}_D$ , as follows:

\[ \left[\!\left[ {{{\mathbb B}}}\right]\!\right] = {{\mathbb B}}\hspace{1cm} \left[\!\left[ {\Psi\Rightarrow A}\right]\!\right] = \left[\!\left[ {\Psi}\right]\!\right]\Rightarrow\left[\!\left[ {A}\right]\!\right]\hspace{1cm} \left[\!\left[ {SA}\right]\!\right] = US\left[\!\left[ {A}\right]\!\right]\hspace{1cm} \left[\!\left[ {\Psi\times \Phi}\right]\!\right] =\left[\!\left[ {\Psi}\right]\!\right]\times\left[\!\left[ {\Phi}\right]\!\right] \]

Remark 3.3. To avoid cumbersome notation, we omit the brackets $\left[\!\left[ {\cdot}\right]\!\right]$ when there is no ambiguity. For example, we write directly USA for $\left[\!\left[ {SA}\right]\!\right]=US\left[\!\left[ {A}\right]\!\right]$ and A for $\left[\!\left[ {A}\right]\!\right]$ .

Before giving the interpretation of typing derivation trees in the model, we need to define certain maps that will serve to implement some of the constructions in the language.

To implement the if construction, we define the following map.

Definition 3.4. Given $t,r \in[\Gamma, A]$ there exists a map ${{\mathbb B}}\xrightarrow{f_{t,r}}[\Gamma, A]$ in $\mathbf{Set}$ defined by $f_{t,r}= [\hat{t},\hat{r}]$ where $\hat{t}:1\rightarrow[\Gamma, A]$ and $\hat{r}:1\rightarrow[\Gamma, A]$ are given by the constant maps $\star\mapsto t$ and $\star\mapsto r$ , respectively. Concretely, this means that $i_1(\star)\mapsto t$ and $i_2(\star)\mapsto r$ .

Example 3.5. Consider $t=i_1$ and $r=i_2$ , with $t,r\in[1,{{\mathbb B}}]$ , where ${{\mathbb B}}=\{i_1(\star),i_2(\star)\}$ . The map ${{\mathbb B}}\xrightarrow{f_{t,r}}[1,{{\mathbb B}}]$ in $\mathbf{Set}$ is defined by $f_{t,r}=[\hat{i_1},\hat{i_2}]$ , where $\hat i_k:1\rightarrow[1,{{\mathbb B}}]$ , for $k=1,2$ . Therefore, we have the following commuting diagram:

Hence, we have:

Therefore, $f_{t,r}$ is the map and.

In order to implement the projection, we define a map $\pi_j$ (Definition 3.14), which is built from several maps that we describe below.

A projection $\pi_{jk}$ acts in the following way: first, it projects the first j components of its argument, an n-dimensional vector, onto the basis vector in the vector space of dimension j; then it renormalizes the result; and finally it factorizes the first j components. The projection $\pi_j$ then takes the probabilistic distribution over the $2^j$ projectors $\pi_{jk}$ , each probability being calculated from the normalized vector to be projected.

Example 3.6. Let us analyse the Example 2.5:

We can divide this into four projectors (since $j=2$ , we have $2^2$ projectors), which are taken in parallel (with the symbol $\parallel$ ). The four projectors are $\pi_{2,00}$ , $\pi_{2,01}$ , $\pi_{2,10}$ , and $\pi_{2,11}$ . In this case, the probabilities for the projectors $\pi_{2,01}$ and $\pi_{2,10}$ are $\mathtt 0$ , and hence these do not appear in the final term.

The projector $\pi_{2,00}$ acts as described before: first it projects the first 2 components of to the basis vector, obtaining. Then it renormalizes it, by dividing it by its norm, obtaining. Finally, it factorizes the vector, obtaining. Similarly, the projector $\pi_{2,11}$ gives. Finally, the probabilities to assemble the final term are calculated as $\mathtt{p_0}=\frac{|1|^2+|3|^2}{|1|^2+|2|^2+|3|^2+|1|^2}=\mathtt{\frac 23}$ and $\mathtt{p_1}=\frac{|2|^2+|1|^2}{|1|^2+|2|^2+|3|^2+|1|^2}=\mathtt{\frac 13}$ .
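The arithmetic of this kind of example can be replayed in code. The following sketch is our own illustration (the encoding of a state as a flat list of amplitudes indexed by the prefix of the basis string is an assumption, not the paper's): it measures the first j qubits of an n-qubit state, computing, for each outcome k with nonzero weight, its probability and the renormalized remainder, mirroring the description of $\pi_{jk}$ and $\pi_j$ above.

```python
import math

def measure_first(amps, j):
    """Project, renormalize, and collect probabilities for each of the
    2**j outcomes on the first j qubits of an n-qubit amplitude list."""
    n = int(math.log2(len(amps)))
    block = 2 ** (n - j)                        # amplitudes sharing prefix k
    norm2 = sum(abs(a) ** 2 for a in amps)      # squared norm (input may be unnormalized)
    outcomes = {}
    for k in range(2 ** j):
        chunk = amps[k * block:(k + 1) * block]
        w = sum(abs(a) ** 2 for a in chunk)     # unnormalized weight of outcome k
        if w == 0:
            continue                            # probability-0 outcomes are dropped
        p = w / norm2                           # probability of outcome k
        rest = [a / math.sqrt(w) for a in chunk]  # renormalized remaining state
        outcomes[k] = (p, rest)
    return outcomes

# Measuring the first qubit of (|00> + |11>)/sqrt(2):
out = measure_first([1, 0, 0, 1], 1)
assert abs(out[0][0] - 0.5) < 1e-12 and abs(out[1][0] - 0.5) < 1e-12
assert out[0][1] == [1.0, 0.0]                  # post-measurement state |0>
```

As in the example above, the probabilities of the surviving outcomes always sum to 1, even when the input vector is not normalized.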

Categorically, we can describe the operator $\pi_{jk}$ (Definition 3.11) by the composition of three arrows: a projector arrow to the basis vector (Definition 3.7), a normalizing arrow ${\mathsf{Norm}}$ (Definition 3.8), and a factorizing arrow $\varphi_j$ (Definition 3.9). Then, the projection $\pi_j$ (Definition 3.14) maps a vector to the probabilistic distribution between the $2^j$ basis vectors, using a distribution map (Definition 3.12).

In the following definitions, if is a vector of dimension n, we write to the constant map.

Definition 3.7. The projector arrow to the basis vector ${\mathsf{Proj}}_k$ is defined as follows:

Definition 3.8. The normalizing arrow ${\mathsf{Norm}}$ is defined as follows:

Definition 3.9. The factorizing arrow $\varphi_j$ is defined as any arrow making the following diagram commute:

Example 3.10. One can take $\varphi_j$ as the following map:

Definition 3.11. For each $k=0,\dots,2^j-1$ , the projection to the basis vector, $\pi_{jk}$ , is defined as any arrow making the following diagram commute:

where we are implicitly using the isomorphism $US{{\mathbb B}}^n \cong U(S{{\mathbb B}})^{\otimes n}$ , obtained by composing $n-1$ times the mediating arrow m and then applying the functor U.

The following distribution map is needed to assemble the final distribution of projections in Definition 3.14.

Definition 3.12. Let $\{p_i\}_{i=1}^n$ be a set with $p_i\in[0,1]$ such that $\sum_{i=1}^np_i=1$ . Then, we define ${d_{\{p_i\}_i}}$ as the arrow:

\[ d_{\{p_i\}_i}:A^n\to DA \qquad\qquad (a_1,\dots,a_n)\mapsto\sum_{i=1}^np_i\chi_{a_i} \]

Example 3.13. Consider $d_{\{\frac 12,\frac 13,\frac 16\}}:{{\mathbb B}}^3\to D{{\mathbb B}}$ defined by $d_{\{\frac 12,\frac 13,\frac 16\}}(b_1\times b_2\times b_3)=\frac 12\chi_{b_1}+\frac 13\chi_{b_2}+\frac 16\chi_{b_3}$ . Then, for example,.
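The distribution map of Definition 3.12 can be sketched in code. The encoding is our own assumption: a distribution $\sum_i p_i\chi_{a_i}$ is represented as a dictionary from elements to probabilities, so that repeated elements accumulate, as the Dirac deltas $\chi_{a_i}$ are summed formally.

```python
from collections import defaultdict
from fractions import Fraction

def d(probs, elems):
    """The map d_{p_i} : A^n -> D A, (a_1, ..., a_n) |-> sum_i p_i chi_{a_i}."""
    assert sum(probs) == 1                   # the p_i must form a distribution
    dist = defaultdict(Fraction)
    for p, a in zip(probs, elems):
        dist[a] += p                         # repeated elements accumulate
    return dict(dist)

# d_{1/2,1/3,1/6}(b1, b2, b1): the two occurrences of b1 accumulate.
ex = d([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)], ['b1', 'b2', 'b1'])
assert ex == {'b1': Fraction(2, 3), 'b2': Fraction(1, 3)}
```

Exact fractions are used so that the side condition $\sum_i p_i = 1$ can be checked without rounding.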

Definition 3.14. The projective arrow is as follows, where $p_k=\overline{{\mathsf{Norm}}(|\psi\rangle)}^\dagger\circ {\mathsf{P}}_k\circ\overline{{\mathsf{Norm}}(|\psi\rangle)}$ .

Example 3.15. Consider the set ${{\mathbb B}}^2$ and the vector space $S{{\mathbb B}}^2$ . We can describe the projection $\pi_1$ as the map $\pi_1:US{{\mathbb B}}^2\to D({{\mathbb B}}\times US{{\mathbb B}} )$ such that, where, if, then $p_0=\tfrac{|\alpha_1|^2+|\alpha_2|^2}{\sum_{i=1}^4|\alpha_i|^2}$ and $p_1=\tfrac{|\alpha_3|^2+|\alpha_4|^2}{\sum_{i=1}^4|\alpha_i|^2}$ .

The normalizing arrow is the arrow ${\mathsf{Norm}}:US{{\mathbb B}}^2\to US{{\mathbb B}}^2$ such that:

The factorization arrow is the arrow $\varphi_1:US{{\mathbb B}}^2\to{{\mathbb B}}\times US{{\mathbb B}} $ such that:

Finally, $\pi_{10}$ and $\pi_{11}$ are defined as maps in $US{{\mathbb B}}^2\to{{\mathbb B}}\times US{{\mathbb B}} $ such that

$\pi_{10}= \varphi_1\circ{\mathsf{Norm}}\circ U{\mathsf{P}}_0 $ and $ \pi_{11}= \varphi_1\circ{\mathsf{Norm}}\circ U{\mathsf{P}}_1 $ .

We write $(US)^mA$ for $US\dots USA$ , where $m>0$ . The arrow sum on $(US)^mA$ with $A\neq USB$ will use the underlying sum in the vector space SA. To define such a sum, we need the following map.

Definition 3.16. The map $g_k:((US)^{k+1}A)\times ((US)^{k+1}A)\to (US)^k(USA\times USA)$ is defined by:

\[ g_k= (US)^{k-1}Um\circ (US)^{k-1}n\circ (US)^{k-2}Um\circ (US)^{k-2}n\circ \dots\circ Um\circ n \]

Example 3.17. We can define the sum on $(US)^3A\times (US)^3A$ by using the sum on SA as:

$(US)^3A\times (US)^3A\xrightarrow{g_2}(US)^2(USA\times USA)\xrightarrow{(US)^2+}(US)^3A$ where $g_2=USUm\circ USn\circ Um\circ n$ .
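Under one plausible reading of the maps n and m (n pairs two combinations into the tensor product, and m mediates back, so that $Um\circ n$ pairs two combinations bilinearly into a combination of pairs), the sum on $(US)^2A$ can be sketched as follows. The encoding of nested combinations as nested dictionaries, with inner combinations frozen so they can serve as keys, is our own assumption.

```python
def add_sa(u, v):
    """The sum in S A: pointwise addition of coefficients."""
    out = dict(u)
    for x, c in v.items():
        out[x] = out.get(x, 0) + c
    return out

def add_usus(uu, vv):
    """Sum on (US)^2 A as U S (+) composed with g_1 = U m . n: the two
    outer combinations are paired bilinearly, and the sum in S A is
    then applied to each resulting pair of inner combinations."""
    out = {}
    for ku, cu in uu.items():
        for kv, cv in vv.items():
            k = frozenset(add_sa(dict(ku), dict(kv)).items())
            out[k] = out.get(k, 0) + cu * cv
    return out

# eta(u) "+" eta(v) yields eta(u + v):
uu = {frozenset({('a', 1.0)}): 1.0}
vv = {frozenset({('b', 2.0)}): 1.0}
assert add_usus(uu, vv) == {frozenset({('a', 1.0), ('b', 2.0)}): 1.0}
```

The sum on deeper nestings $(US)^mA$ iterates the same pattern, pushing the pair down level by level until the underlying sum in SA applies.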

Using all the previous definitions, we can finally give the interpretation of a type derivation tree in our model. If $\Gamma\vdash t:A$ with a derivation T, we write generically $\left[\!\left[ {T}\right]\!\right]$ as $\Gamma\xrightarrow{t} A$ . In the following definition, we write $S^mA$ for $S\dots SA$ , where $m>0$ and $A\neq SB$ .

Definition 3.18. If T is a type derivation tree, we define inductively $\left[\!\left[ {T}\right]\!\right]$ as an arrow in the category $\mathbf{Set}_D$ , as follows. To avoid cumbersome notation, we omit writing the monad D in most cases (we only write it in the case of the measurement, which is the only interesting case).

Proposition 3.19 (Independence of derivation). If $\Gamma\vdash t:A$ can be derived with two different derivations T and T’, then $\left[\!\left[ {T}\right]\!\right]=\left[\!\left[ {T'}\right]\!\right]$ .

Proof. Without taking into account rules $\Rightarrow_E$ , $\Rightarrow_{ES}$ , and $S_I$ , the typing system is syntax directed. Hence, we give a rewrite system on trees such that each time a rule $S_I$ can be applied before or after another rule, we choose a direction to rewrite the tree to one of these forms. Similarly, rules $\Rightarrow_E$ and $\Rightarrow_{ES}$ can be exchanged in a few specific cases, so we also choose a direction for these.

Then, we prove that every rule preserves the semantics of the tree. This rewrite system is clearly confluent and normalizing, hence for each tree T we can take the semantics of its normal form, and so every sequent will have one way to calculate its semantics: as the semantics of the normal tree. The full proof is given in the appendix.

Remark 3.20. Proposition 3.19 allows us to write the semantics of a sequent, independently of its derivation tree. Hence, from now on, we will use $\left[\!\left[ {\Gamma\vdash t:A}\right]\!\right]$ , without ambiguity.

3.3 Soundness and adequacy

We first prove the soundness of the interpretation with respect to the reduction relation (Theorem 3.22), then we prove the computational adequacy (Theorem 3.28). Finally, we prove adequacy (Theorem 3.29) as a consequence of both results.

3.3.1 Soundness

Soundness is proved only for closed terms, since the reduction is weak (cf. Figure 10). First, we need a substitution lemma.

Lemma 3.21 (Substitution). If $x:\Psi\vdash t:A$ and $\vdash r:\Psi$ , the following diagram commutes:

That is, $\left[\!\left[ {\vdash(r/x)t:A}\right]\!\right]=\left[\!\left[ {x:\Psi\vdash t:A}\right]\!\right]\circ\left[\!\left[ {\vdash r:\Psi}\right]\!\right]$ .

Proof. We prove, more generally, that if $\Gamma',x:\Psi,\Gamma\vdash t:A$ and $\vdash r:\Psi$ , the following diagram commutes:

That is, $\left[\!\left[ {\Gamma',\Gamma\vdash(r/x)t:A}\right]\!\right]=\left[\!\left[ {\Gamma',x:\Psi,\Gamma\vdash t:A}\right]\!\right]\circ(\mathsf{Id}\times\left[\!\left[ {\vdash r:\Psi}\right]\!\right]\times\mathsf{Id})$ . Then, by taking $\Gamma=\Gamma'=\emptyset$ , we get the result stated by the lemma.

We proceed by induction on the derivation of $\Gamma',x:\Psi,\Gamma\vdash t:A$ . The full proof is given in the appendix.

Theorem 3.22 (Soundness). If $\vdash t:A$ , and $t\longrightarrow r$ , then $\left[\!\left[ {\vdash t:A}\right]\!\right] = \left[\!\left[ {\vdash r:A}\right]\!\right]$ .

Proof. By induction on the rewrite relation, using the first derivable type for each term. The full proof is given in the appendix.

3.3.2 Computational adequacy

We adapt Tait’s proof for strong normalization to prove the computational adequacy of Lambda- ${\mathcal S}$ .

Definition 3.23. Let $\mathfrak A,\mathfrak B$ be sets of closed terms. We define the following operators on them:

  • Closure by antireduction: $\overline{\mathfrak A}=\{t\mid t\longrightarrow^*r, \textrm{ with }r\in \mathfrak A \textrm{ and }FV(t)=\emptyset\}$ .

  • Closure by distribution: .

  • Product: $\mathfrak A\times \mathfrak B=\{t\times u\mid t\in \mathfrak A\textrm{ and }u\in \mathfrak B\}$ .

  • Arrow: $\mathfrak A\Rightarrow \mathfrak B=\{t\mid\forall u\in\mathfrak A, tu\in \mathfrak B\}$ .

  • Span: $S\mathfrak A=\{\sum_i[\alpha_i.]r_i\mid r_i\in\mathfrak A\}$ .

The set of computational closed terms of type A (denoted $\mathfrak{C}_{A}$ ) is defined by

A substitution $\sigma$ is valid in a context $\Gamma$ (notation $\sigma\vDash\Gamma$ ) if for each $x:A\in\Gamma$ , $\sigma x\in\mathfrak{C}_{A}$ .

Lemma 3.24. If $\vdash t:A$ then $t\in\mathfrak{C}_{A}$ .

Proof. We prove, more generally, that if $\Gamma\vdash t:A$ and $\sigma\vDash\Gamma$ , then $\sigma t\in\mathfrak{C}_{A}$ . We proceed by induction on the derivation of $\Gamma\vdash t:A$ . The full proof is given in the appendix.

Definition 3.25 (Elimination context). An elimination context is a term of type ${{\mathbb B}}$ produced by the following grammar, where exactly one subterm has been replaced with a hole $[\!\cdot\!]$ .

\[ C:=[\cdot]\mid Ct \mid tC\mid \pi_jC\mid\textit{head}\ C\mid\textit{tail}\ C\mid\Uparrow_r C\mid\Uparrow_\ell C \]

We write C[t] for the term of type ${{\mathbb B}}$ obtained from replacing the hole of C by t.

Definition 3.26 (Operational equivalence). We write $t\approx_e r$ if, for every elimination context C, there exists s such that $C[t]\longrightarrow^* s$ and $C[r]\longrightarrow^* s$ .

We define the operational equivalence $\approx_{op}$ inductively by

  • If $t\approx_e r$ , then $t\approx_{op} r$ .

  • If $t\approx_{op} r$ then $\alpha.t\approx_{op}\alpha.r$ .

  • If $t_1\approx_{op} r_1$ and $t_2\approx_{op} r_2$ , then $t_1+t_2\approx_{op} r_1+r_2$ .

  • If $t_1\approx_{op} r_1$ and $t_2\approx_{op} r_2$ , then $t_1\times t_2\approx_{op} r_1\times r_2$ .

Note that operational equivalence differs from the standard notion of observational equivalence, since $t\approx_{op} r$ does not imply $\lambda x:\Psi.t\approx_{op}\lambda x:\Psi.r$ , as a consequence of not having reductions under lambda.

Lemma 3.27. If $C[t]\approx_{op} C[r]$ , then $t\approx_{op} r$ .

Proof. By the shape of C, the only possibility for $C[t]\approx_{op} C[r]$ is $C[t]\approx_e C[r]$ . Then, by definition, for every context D there exists a term s such that $D[C[t]]\longrightarrow^*s$ and $D[C[r]]\longrightarrow^*s$ . Considering the context $E =D[C]$ , we have $E[t]=D[C[t]]\longrightarrow^*s$ and $E[r]=D[C[r]]\longrightarrow^*s$ . Therefore, $t\approx_e r$ , and so $t\approx_{op} r$ .

Theorem 3.28 (Computational adequacy). If $\left[\!\left[ {\vdash t:A}\right]\!\right]=\left[\!\left[ {\vdash v:A}\right]\!\right]$ , then $t\approx_{op} v$ .

Proof. We proceed by induction on A.

  • $A={{\mathbb B}}$ . By Lemma 3.24, we have $t\in\mathfrak{C}_{A}$ , thus,, and, by the same lemma,. Hence, by Theorem 3.22, we have. So,. Therefore, $\mathtt{p_i}=\mathtt{q_i}$ , thus $t\longrightarrow^* v$ .

  • $A=C_1\times C_2$ . By Lemma 3.24, we have $t\in\mathfrak{C}_{A}$ , thus,, with $w_{ij}\in \mathfrak{C}_{C_j}$ , and, by the same lemma,, with $v_{ij}\in \mathfrak{C}_{C_j}$ . Hence, by Theorem 3.22, we have. So, $(1^2)^n\xrightarrow{ (w_{11}\times w_{12})\times\dots\times (w_{n1}\times w_{n2})}(C_1\times C_2)^n\xrightarrow{d_{\{{q_i}\}_i}}D(C_1\times C_2) =(1^2)^n\xrightarrow{(v_{11}\times v_{12})\times\dots\times (v_{m1}\times v_{m2})}(C_1\times C_2)^n\xrightarrow{d_{\{{p_i}\}_i}}D(C_1\times C_2)$ . Therefore, $\mathtt{p_i}=\mathtt{q_i}$ , $m=n$ , and $\left[\!\left[ {\vdash v_{ij}:C_j}\right]\!\right]=\left[\!\left[ {\vdash w_{ij}:C_j}\right]\!\right]$ . Therefore, by the induction hypothesis, $w_{ij}\approx_{op} v_{ij}$ , and so, $t\approx_{op} v$ .

  • $A=\Psi\Rightarrow C$ . The only possibility for v, a value of type $\Psi\Rightarrow C$ , is.

Hence, let $f=\left[\!\left[ {\vdash t:A}\right]\!\right]=\left[\!\left[ {\vdash v:A}\right]\!\right]=1^n\xrightarrow{(\eta^\Psi)^n}[\Psi,1\times\Psi]^n\xrightarrow{[\mathsf{Id},r_i]^n}[\Psi, C^n]\xrightarrow{d_{\{{p_i}\}_i}}D[\Psi, C]$ .

By Lemma 3.24, we have $t\in\mathfrak{C}_{A}$ . Hence, $t\longrightarrow^* t'$ , such that for all $s\in\mathfrak{C}_{\Psi}$ , $t's\in\mathfrak{C}_{C}$ .

Let $w\in\mathfrak{C}_{\Psi}$ be a value, and $g=\left[\!\left[ {\vdash w:\Psi}\right]\!\right]=1\xrightarrow{w}\Psi\xrightarrow{d_{\{1\}}}D\Psi$ .

Thus, $\left[\!\left[ {\vdash tw:C}\right]\!\right]=\left[\!\left[ {\vdash vw:C}\right]\!\right]= 1^{n+1}\xrightarrow{f\times g} D[\Psi,C]\times D\Psi\xrightarrow{m_D}D([\Psi,C]\times\Psi)\xrightarrow{D\varepsilon} DC$ .

By Theorems 2.9 and 2.10, there exists a value u such that $vw\longrightarrow^*u$ , and by Theorem 2.8, $\vdash u:C$ . So, by Theorem 3.22, $\left[\!\left[ {\vdash u:C}\right]\!\right]=\left[\!\left[ {\vdash vw:C}\right]\!\right]$ .

Therefore, by the induction hypothesis, $tw\approx_{op} u$ . Since $vw\longrightarrow^* u$ , we have $u\approx_{op} vw$ . Hence, $tw\approx_{op} vw$ , and so, by Lemma 3.27, $t\approx_{op} v$ .

  • $A=SC$ . By Lemma 3.24, we have $t\in\mathfrak{C}_{A}$ , thus,, with $w_{ij}\in \mathfrak{C}_{C}$ , and, by the same lemma,, with $v_{ik}\in \mathfrak{C}_{C}$ . Hence, by Theorem 3.22, we have. So, $d_{\{{q_i}\}_i}\circ US+\circ w_{11}\times\dots\times w_{nm} = d_{\{{p_i}\}_i}\circ US+ \circ v_{11}\times\dots\times v_{n'm'}$ . Therefore, $\mathtt{p_i}=\mathtt{q_i}$ , $m=m'$ , $n=n'$ and $\left[\!\left[ {\vdash w_{ij}:C}\right]\!\right]=\left[\!\left[ {\vdash v_{ij}:C}\right]\!\right]$ . Therefore, by the induction hypothesis, $w_{ij}\approx_{op} v_{ij}$ , and so, $t\approx_{op} v$ .

3.3.3 Adequacy

Adequacy is a consequence of Theorems 2.8 (subject reduction), 2.9 (strong normalization), 2.10 (progress), 3.22 (soundness), and 3.28 (computational adequacy).

Theorem 3.29 (Adequacy). If $\left[\!\left[ {\vdash t:A}\right]\!\right]=\left[\!\left[ {\vdash r:A}\right]\!\right]$ , then $t\approx_{op} r$ .

Proof. By Theorem 2.9, t is strongly normalizing, and by Theorem 2.10, it normalizes to a value. Hence, there exists v such that $t\longrightarrow^*v$ , and, by Theorem 2.8, we have $\vdash v:A$ . By Theorem 3.22, $\left[\!\left[ {\vdash v:A}\right]\!\right]=\left[\!\left[ {\vdash t:A}\right]\!\right]=\left[\!\left[ {\vdash r:A}\right]\!\right]$ . Then, by Theorem 3.28, $v\approx_{op} t$ and $v\approx_{op} r$ . Hence, $t\approx_{op} r$ .

4. Conclusion

We have revisited the concrete categorical semantics for Lambda- ${\mathcal S}$ presented in our LSFA’18 paper (Díaz-Caro and Malherbe, 2019) by slightly modifying the operational semantics of the calculus, obtaining an adequate model (Theorem 3.29).

Our semantics highlights the dynamics of the calculus: the algebraic rewriting (linear distribution, vector space axioms, and typing cast rules) reflects the standard behavior of vector spaces. The natural transformation n takes these arrows from the Cartesian category $\mathbf{Set}$ to the tensorial category $\mathbf{Vec}$ , where such behavior occurs naturally, and the natural transformation m takes them back to the Cartesian realm. This way, rules such as $({\mathsf{lin^+_r}})$ : $t(u+v)\longrightarrow tu+tv$ are simply interpreted as $Um\circ n$ producing $(u+v,t)\mapsto (u,t)+(v,t)$ in two steps: $(u+v,t) \mapsto(u+v)\otimes t=u\otimes t+v\otimes t \mapsto (u,t)+(v,t)$ , using the fact that, in $\mathbf{Vec}$ , $(u+v)\otimes t=u\otimes t+v\otimes t$ .
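This two-step reading can be mimicked concretely. The sketch below is illustrative only (formal combinations encoded as dictionaries is an assumption of ours): the bilinear pairing $Um\circ n$ applied to the pair $(u+v, t)$ yields $(u,t)+(v,t)$, exactly the distribution performed by $({\mathsf{lin^+_r}})$.

```python
def um_n(s, t):
    """U m . n : pair two formal combinations bilinearly, landing in a
    combination of pairs: (sum_i a_i x_i, sum_j b_j y_j) is sent to
    sum_{i,j} a_i b_j (x_i, y_j)."""
    out = {}
    for x, a in s.items():
        for y, b in t.items():
            out[(x, y)] = out.get((x, y), 0) + a * b
    return out

# (u + v, t) |-> (u, t) + (v, t): the sum in the argument distributes.
assert um_n({'u': 1.0, 'v': 1.0}, {'t': 1.0}) == {('u', 't'): 1.0, ('v', 't'): 1.0}
```

The intermediate tensor $(u+v)\otimes t=u\otimes t+v\otimes t$ never needs to be materialized separately: bilinearity of the pairing is what implements the distribution.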

We have constructed a concrete mathematical semantic model of Lambda- ${\mathcal S}$ based on a monoidal adjunction with some extra conditions. The construction depends crucially on inherent properties of the categories of sets and vector spaces. In future work, we will study the semantics from a more abstract point of view. Our approach will be based on recasting the concrete model at the more abstract categorical level of monoidal categories with some axiomatic properties that are now veiled in the concrete model. Some of these properties, such as considering an abstract dagger instead of an inner product, were introduced in the concrete model from the very beginning, but others are described in Remark 3.1 and Definitions 3.4, 3.8, 3.9, 3.11, 3.12, and 3.14. Another question we hope to address in future work is the exact categorical relationship between the notions of amplitude and probability in the context of the abstract semantics. While some research has been done on this topic (e.g., Abramsky and Coecke (Reference Abramsky and Coecke2004); Selinger (Reference Selinger2007)), it differs from our point of view in some important aspects: for example, it considers a notion of abstract normalization as primitive.

Competing interests

The authors declare none.

Appendix A. Detailed Proofs

Proposition 3.19 (Independence of derivation). If $\Gamma\vdash t:A$ can be derived with two different derivations T and T’, then $\left[\!\left[ {T}\right]\!\right]=\left[\!\left[ {T'}\right]\!\right]$ .

Proof. Without taking into account rules $\Rightarrow_E$ , $\Rightarrow_{ES}$ , and $S_I$ , the typing system is syntax directed. Hence, we give a rewrite system on trees such that each time a rule $S_I$ can be applied before or after another rule, we choose a direction to rewrite the tree to one of these forms. Similarly, rules $\Rightarrow_E$ and $\Rightarrow_{ES}$ can be exchanged in a few specific cases, so we also choose a direction for these.

Then, we prove that every rule preserves the semantics of the tree. This rewrite system is clearly confluent and normalizing, hence for each tree T we can take the semantics of its normal form, and so every sequent will have one way to calculate its semantics: as the semantics of the normal tree.

In order to define the rewrite system, we first analyse the typing rules containing only one premise and check whether these rules allow for a preceding and a following rule $S_I$ . If both are allowed, we choose a direction for the rewrite rule. Then we continue with rules with more than one premise and check under which conditions a commutation of rules is possible, also choosing a direction.

Rules with one premise:

  • Rule $\alpha_I$ :

    (1)
  • Rules $S_E$ , $\Rightarrow_I$ , $\times_{E_r}$ , $\times_{E_\ell}$ , $\Uparrow_r$ , and $\Uparrow_\ell$ : These rules end with specific types not admitting two S in the head position (i.e., ${{\mathbb B}}^j\times S{{\mathbb B}}^{n-j}$ , $\Psi\Rightarrow A$ , ${{\mathbb B}}$ , ${{\mathbb B}}^{n-1}$ , and $S(\Psi\times\Phi)$ ), hence removing or adding an S would not allow the rule to be applied; therefore, these rules cannot commute with a preceding or following $S_I$ .

Rules with more than one premise:

  • Rule $+_I$ :

    (2)
  • Rules $\Rightarrow_E$ and $\Rightarrow_{ES}$ :

    (3)
  • Rule $\parallel$ :

    (4)
  • Rules $\textit{If}$ and $\times_I$ : these rules end with specific types not admitting two S in the head position (i.e., ${{\mathbb B}}\Rightarrow A$ and $\Psi\times\Phi$ ), hence removing or adding an S would not allow the rule to be applied; therefore, these rules cannot commute with a preceding or following $S_I$ .

The confluence of this rewrite system is easily inferred from the fact that there are no critical pairs. The normalization follows from the fact that the trees are finite and all the rewrite rules push the $S_I$ to the root of the trees.

It only remains to check that each rule preserves the semantics.

  • Rule (1): The following diagram gives the semantics of both trees (we only treat, without loss of generality, the case where $A\neq S(A')$ ). This diagram commutes by the naturality of $\eta$ .

  • Rule (2): The following diagram gives the semantics of both trees (we only treat, without loss of generality, the case where $A\neq SA'$ ).

This diagram commutes since the maps are as follows:

$(t,r)\stackrel{\eta\times\eta}{\mapsto} (t,r)\stackrel{g_1}\mapsto (t,r)\stackrel{US+}\mapsto t+r$ and $(t,r)\stackrel{\mathsf{Id}}\mapsto (t,r)\stackrel{+}\mapsto t+r\stackrel\eta\mapsto t+r$

  • Rule (3): The following diagram gives the semantics of both trees. The lower diagram with the dotted arrow commutes by the naturality of $\eta$ , and the upper diagram commutes because $\eta$ is a monoidal natural transformation.

  • Rule (4): The following diagram gives the semantics of both trees.

The mappings are as follows:

$(a_1,\dots,a_n)\stackrel{\eta^n}\mapsto (a_1,\dots,a_n)\stackrel{d_{\{{p_i}\}_i}}\mapsto \sum_i\mathtt{p_i}\chi_{a_i}$ and $(a_1,\dots,a_n)\stackrel{d_{\{{p_i}\}_i}}\mapsto\sum_i\mathtt{p_i}\chi_{a_i}\stackrel{\eta}\mapsto\sum_i\mathtt{p_i}\chi_{a_i}$

Lemma A.1. If $\Gamma\vdash t:A$ , then $\Gamma,\Delta^{{\mathbb B}}\vdash t:A$ . Moreover, $\left[\!\left[ {\Gamma,\Delta^{{\mathbb B}}\vdash t:A}\right]\!\right]=\left[\!\left[ {\Gamma\vdash t:A}\right]\!\right]\circ(\mathsf{Id}\times{!})$ .

Proof. A derivation of $\Gamma\vdash t:A$ can be turned into a derivation $\Gamma,\Delta^{{\mathbb B}}\vdash t:A$ just by adding $\Delta^{{\mathbb B}}$ in its axioms’ contexts. Since $FV(t)\cap\Delta^{{\mathbb B}}=\emptyset$ , we have $\left[\!\left[ {\Gamma,\Delta^{{\mathbb B}}\vdash t:A}\right]\!\right]=\left[\!\left[ {\Gamma\vdash t:A}\right]\!\right]\circ(\mathsf{Id}\times{!})$ .

Lemma 3.21 (Substitution). If $x:\Psi\vdash t:A$ and $\vdash r:\Psi$ , the following diagram commutes:

That is, $\left[\!\left[ {\vdash(r/x)t:A}\right]\!\right]=\left[\!\left[ {x:\Psi\vdash t:A}\right]\!\right]\circ\left[\!\left[ {\vdash r:\Psi}\right]\!\right]$ .

Proof. We prove, more generally, that if $\Gamma',x:\Psi,\Gamma\vdash t:A$ and $\vdash r:\Psi$ , the following diagram commutes:

That is, $\left[\!\left[ {\Gamma',\Gamma\vdash(r/x)t:A}\right]\!\right]=\left[\!\left[ {\Gamma',x:\Psi,\Gamma\vdash t:A}\right]\!\right]\circ(\mathsf{Id}\times\left[\!\left[ {\vdash r:\Psi}\right]\!\right]\times\mathsf{Id})$ . Then, by taking $\Gamma=\Gamma'=\emptyset$ , we get the result stated by the lemma.

We proceed by induction on the derivation of $\Gamma',x:\Psi,\Gamma\vdash t:A$ . In this proof, we write $d=(\mathsf{Id}\times\sigma\times\mathsf{Id})\circ(\mathsf{Id}\times\delta)$ . Also, we take the rules $\alpha_I$ and $+_I$ with $m=1$ ; the generalization is straightforward.

  • By Lemma A.1, $\left[\!\left[ {\Delta^{{\mathbb B}}\vdash r:\Psi}\right]\!\right]=\left[\!\left[ {\vdash r:\Psi}\right]\!\right]\circ{!}$ . Hence,

This diagram commutes by the naturality of the projection.

This diagram commutes by the naturality of the projection.

  • The cases and are analogous to the previous case.

This diagram commutes by the induction hypothesis.

This diagram commutes by the induction hypothesis.

If $x\in FV(u)$ or $x\in FV(u)\cap FV(t)$ the cases are analogous.

This diagram commutes by the induction hypothesis.

This diagram commutes by the induction hypothesis.

  • where $(r/x)G = \mathsf{curry}(\mathsf{uncurry}(f_{(r/x)t,(r/x)s})\circ\mathsf{swap})$ and $G =\mathsf{curry}(\mathsf{uncurry}(f_{t,s})\circ\mathsf{swap})$

By the induction hypothesis, $(r\times\mathsf{Id})\circ t=(r/x)t$ and $(r\times\mathsf{Id})\circ s=(r/x)s$ , hence, $(r\times\mathsf{Id})\circ f_{t,s}=f_{(r/x)t,(r/x)s}$ and so $(r/x)G=(r\times\mathsf{Id})\circ G$ , which makes the diagram commute.

The dotted arrow divides the diagram in two. The upper part commutes by the IH and the functoriality of, while the lower part commutes by the naturality of $\eta^\Phi$ .

This diagram commutes by the induction hypothesis and the functoriality of the product.

Analogous to previous case.

This diagram commutes by the induction hypothesis and the functoriality of the product.

Analogous to previous case.

This diagram commutes by the induction hypothesis and coherence results.

Analogous to previous case.

This diagram commutes by the induction hypothesis.

This diagram commutes by the induction hypothesis.

This diagram commutes by the induction hypothesis.

Analogous to previous case.

This diagram commutes by the induction hypothesis.

Theorem 3.22 (Soundness). If $\vdash t:A$ , and $t\longrightarrow r$ , then $\left[\!\left[ {\vdash t:A}\right]\!\right] = \left[\!\left[ {\vdash r:A}\right]\!\right]$ .

Proof. By induction on the rewrite relation, using the first derivable type for each term. We take the rules $\alpha_I$ and $+_I$ with $m=1$ ; the generalization is straightforward.

  • ( ${\mathsf{comm}}$ ) $(t+r)=(r+t)$ . We have

Then

This diagram commutes by the commutativity of sum in SA as vector space.

  • ( ${\mathsf{assoc}}$ ) $((t+r)+s)=(t+(r+s))$ . We have

Then

This diagram commutes by the associativity of sum in SA as vector space.

  • ( $\beta_{\mathsf{b}}$ ) If b has type ${{\mathbb B}}^n$ and $b\in{\mathsf B}$ , then $(\lambda x{:}{{{\mathbb B}}^n}.t)b\longrightarrow (b/x)t$ . We have

Then

This diagram commutes using Lemma 3.21.

  • ( $\beta_{\mathsf{n}}$ ) If u has type $S\Psi$ , then $(\lambda x{:}{S\Psi}.t)u\longrightarrow (u/x)t$ . We have

Then

This diagram commutes using Lemma 3.21.

  • ( ${\mathsf{lin^+_r}}$ ) If t has type ${{\mathbb B}}^n\Rightarrow A$ , then $t(u+v)\longrightarrow tu+tv$ . We have

and

The mappings are as follows:

$\ast\mapsto(\ast,\ast,\ast)\mapsto(u,v,t)\mapsto(u,v,t)\mapsto (u+v,t)\mapsto (u+v)\otimes t = u\otimes t+v\otimes t\mapsto (u,t)+(v,t)\mapsto t(u)+t(v)$

$ \ast\mapsto(\ast,\ast,\ast,\ast)\mapsto(u,t,v,t)\mapsto(u,t,v,t)\mapsto (u\otimes t,v\otimes t)\mapsto (u,t,v,t)\mapsto (t(u),t(v))\mapsto (t(u),t(v))\mapsto t(u)+t(v) $

  • ( ${\mathsf{lin^\alpha_r}}$ ) If t has type ${{\mathbb B}}^n\Rightarrow A$ , then $t(\alpha.u)\longrightarrow \alpha.(tu)$ . We have

Then

The mappings are as follows:

$ (\ast,\ast)\mapsto(u,t)\mapsto(u\otimes 1,t)\mapsto(u\otimes\alpha,t)\mapsto(\alpha.u,t)\mapsto\alpha.u\otimes t=\alpha.(u\otimes t)\mapsto\alpha.(u,t)\mapsto\alpha.t(u) $

$ (\ast,\ast)\mapsto (u,t)\mapsto (u,t)\mapsto u\otimes t\mapsto (u,t)\mapsto t(u)\mapsto t(u)\otimes 1\mapsto t(u)\otimes\alpha\mapsto \alpha.t(u) $

  • ( ${\mathsf{lin^0_r}}$ ) If t has type ${{\mathbb B}}^n\Rightarrow A$ , then. We have

Then

The mappings are as follows:

$ \ast\mapsto(\ast,\ast)\mapsto(\vec 0,t)\mapsto(\mathbf 0,t)\mapsto\vec 0\otimes t=\vec 0\mapsto\vec 0\mapsto\vec 0 $

$ \ast\mapsto\vec 0 $

  • ( ${\mathsf{lin^+_\ell}}$ ) $(t+u)v\longrightarrow (tv+uv)$ . We have

and

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast,\ast)\mapsto (v,t,u)\mapsto (v,t,u)\mapsto (v,t+u)\mapsto v\otimes (t+u)=v\otimes t+v\otimes u\mapsto (v,t)+(v,u)\mapsto t(v)+u(v)$

$\ast\mapsto(\ast,\ast,\ast,\ast)\mapsto (v,t,v,u)\mapsto (v\otimes t,v\otimes u)\mapsto (v,t,v,u)\mapsto (t(v),u(v))\mapsto t(v)+u(v)$

  • ( ${\mathsf{lin^\alpha_\ell}}$ ) $(\alpha.t)u\longrightarrow \alpha.(tu)$ . We have

Then

The mappings are as follows:

$(\ast,\ast)\mapsto (u,t)\mapsto(u,t\otimes 1)\mapsto (u,t\otimes\alpha)\mapsto (u,\alpha.t)\mapsto u\otimes(\alpha.t)=\alpha.(u\otimes t)\mapsto\alpha.(u,t)\mapsto\alpha.t(u)$

$(\ast,\ast)\mapsto (u,t)\mapsto u\otimes t\mapsto (u,t)\mapsto t(u)\mapsto t(u)\otimes 1\mapsto t(u)\otimes\alpha\mapsto\alpha.t(u)$

  • ( ${\mathsf{lin^0_\ell}}$ ) ${\vec 0_{S{({{\mathbb B}}^n\Rightarrow A)}}} t\longrightarrow {\vec 0_{S{A}}} $ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto (t,\vec 0)\mapsto t\otimes\mathbf 0=\vec 0\mapsto\vec 0\mapsto\vec 0$

$\ast\mapsto\vec 0$

  • ( ${\mathsf{if_{1}}}$ ). We have

Then

Notice that $\mathsf{curry}(\mathsf{uncurry}(f_{t,r})\circ\mathsf{swap})$ transforms the arrow ${{\mathbb B}}\xrightarrow{f_{t,r}}[1, A]$ (which is the arrow,) into an arrow $1\xrightarrow{}[{{\mathbb B}}, A]$ , and hence,.

  • ( ${\mathsf{if_{0}}}$ ) Analogous to ( ${\mathsf{if_{1}}}$ ).

  • ( ${\mathsf{head}}$ ) If $h\neq u\times v$ , and $h\in{\mathsf B}$ , $\textit{head}\ h\times t\longrightarrow h$ . We have

Then

This diagram commutes since $\textit{head}$ is just the projection $\pi_{{\mathbb B}}$ .

  • ( ${\mathsf{tail}}$ ) If $h\neq u\times v$ , and $h\in{\mathsf B}$ , $\textit{tail}\ h\times t\longrightarrow t$ . We have

Then

This diagram commutes since $\textit{tail}$ is just the projection $\pi_{{{\mathbb B}}^{n-1}}$ .

  • ( ${\mathsf{neutral}}$ ) $({\vec 0_{S{A}}} +t)\longrightarrow t$ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto (\vec 0,t)\mapsto(\vec 0,t)\mapsto t$

$\ast\mapsto t$

  • ( ${\mathsf{unit}}$ ) $1.t\longrightarrow t$ . We have

Then

The mappings are as follows:

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes 1\mapsto 1.t=t$

$\ast\mapsto t$

  • ( ${\mathsf{zero_\alpha}}$ ) Cases:

    - If $t:A$ with $A\in{\mathcal B}$ , $0.t\longrightarrow {\vec 0_{S{A}}} $ . We have

Then

The mappings are as follows:

$ \ast\mapsto t\mapsto t\mapsto t\otimes 1\mapsto t\otimes 0=\vec 0\mapsto \vec 0 $

$\ast\mapsto\vec 0$

    - If $t:SA$ and $t\!\not\,: A$ , $0.t\longrightarrow {\vec 0_{S{A}}} $ . We have

Then

The mappings are as follows:

$ \ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes 0=\vec 0\mapsto \vec 0 $

$\ast\mapsto\vec 0$

  • ( ${\mathsf{zero}}$ ) $\alpha.{\vec 0_{S{A}}} \longrightarrow {\vec 0_{S{A}}} $ . We have

Then

The mappings are as follows:

$ \ast\mapsto\vec 0\mapsto\vec 0\otimes 1\mapsto\vec 0\otimes\alpha=\vec 0\mapsto\vec 0 $

$\ast\mapsto\vec 0$

  • ( ${\mathsf{prod}}$ ) $\alpha.(\beta.t)\longrightarrow (\alpha\beta).t$ . We have

Then

The mappings are as follows:

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes\beta\mapsto\beta.t\mapsto\beta.t\otimes 1\mapsto\beta.t\otimes\alpha\mapsto\alpha.(\beta.t)=(\alpha\beta).t$

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes (\alpha\beta)\mapsto (\alpha\beta).t$

  • ( ${\mathsf{\alpha dist}}$ ) $\alpha.(t+u)\longrightarrow \alpha.t+\alpha.u$ . We have

Then

The mappings are as follows:

$(t,u)\mapsto(t,u)\mapsto t+u\mapsto (t+u)\otimes 1\mapsto (t+u)\otimes\alpha\mapsto\alpha.t+\alpha.u$

$(t,u)\mapsto (t\otimes 1,u\otimes 1)\mapsto(t\otimes\alpha,u\otimes\alpha)\mapsto (\alpha.t,\alpha.u)\mapsto\alpha.t+\alpha.u$

  • ( ${\mathsf{fact}}$ ) $(\alpha.t+\beta.t)\longrightarrow (\alpha+\beta).t$ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto(t,t)\mapsto(t\otimes 1,t\otimes 1)\mapsto(t\otimes\alpha,t\otimes\beta)\mapsto(\alpha.t,\beta.t)\mapsto(\alpha.t,\beta.t)\mapsto(\alpha+\beta).t$

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes(\alpha+\beta)\mapsto(\alpha+\beta).t$

  • ( ${\mathsf{fact^1}}$ ) $(\alpha.t+t)\longrightarrow (\alpha+1).t$ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto(t,t)\mapsto(t\otimes 1,t)\mapsto(t\otimes\alpha,t)\mapsto(\alpha.t,t)\mapsto(\alpha.t,t)\mapsto(\alpha+1).t$

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes(\alpha+1)\mapsto(\alpha+1).t$

  • ( ${\mathsf{fact^2}}$ ) $(t+t)\longrightarrow 2.t$ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto(t,t)\mapsto(t,t)\mapsto 2.t$

$\ast\mapsto t\mapsto t\otimes 1\mapsto t\otimes 2\mapsto 2.t$

  • ( ${\mathsf{dist^+_r}}$ ) $\Uparrow_r ((r+s)\times u)\longrightarrow \Uparrow_r (r\times u)+\Uparrow_r (s\times u)$ .

We have

Then

The mappings are as follows:

\begin{align*} &\ast\mapsto(\ast,\ast,\ast)\mapsto(r,s,u)\mapsto(r,s,u)\mapsto (r+s,u)\mapsto (r+s,u)\mapsto (r+s,u)\\ &\qquad\mapsto (r+s)\otimes u=(r\otimes u)+(s\otimes u)\mapsto (r,u)+(s,u)\mapsto (r,u)+(s,u)\\ &\ast\mapsto(\ast,\ast,\ast,\ast)\mapsto (r,u,s,u)\mapsto(r,u,s,u)\mapsto(r,u,s,u)\\ &\qquad\mapsto (r\otimes u,s\otimes u)\mapsto (r,u,s,u)\mapsto (r,u,s,u)\mapsto(r,u,s,u)\mapsto (r,u)+(s,u) \end{align*}
  • ( ${\mathsf{dist^+_\ell}}$ ) $\Uparrow_\ell (u\times (r+s))\longrightarrow \Uparrow_\ell (u\times r)+\Uparrow_\ell (u\times s)$ . Analogous to case $(\mathsf{dist}_r^+)$ .

  • ( ${\mathsf{dist^\alpha_r}}$ ) $\Uparrow_r ((\alpha.r)\times u)\longrightarrow \alpha.\Uparrow_r (r\times u)$ . We have

Then

The mappings are as follows:

$(\ast,\ast)\mapsto(r,u)\mapsto(r\otimes 1,u)\mapsto(r\otimes\alpha,u)\mapsto(\alpha.r,u)\mapsto(\alpha.r,u)\mapsto(\alpha.r,u)\mapsto\alpha.r\otimes u\mapsto\alpha.(r,u)\mapsto\alpha.(r,u)$

$(\ast,\ast)\mapsto(r,u)\mapsto(r,u)\mapsto(r,u)\mapsto r\otimes u\mapsto (r,u)\mapsto(r,u)\mapsto(r,u)\otimes 1\mapsto(r,u)\otimes\alpha\mapsto\alpha.(r,u)$

  • ( ${\mathsf{dist^\alpha_\ell}}$ ) $\Uparrow_\ell (u\times(\alpha.r))\longrightarrow \alpha.\Uparrow_\ell (u\times r)$ . Analogous to case $({\mathsf{dist^\alpha_r}})$ .

  • ( ${\mathsf{dist^0_r}}$ ) If u has type $\Phi$ , $\Uparrow_r ({\vec 0_{S{\Psi}}}\times u)\longrightarrow{\vec 0_{S{(\Psi\times \Phi)}}} $ . We have

Then

The mappings are as follows:

$\ast\mapsto(\ast,\ast)\mapsto(\vec 0,u)\mapsto(\vec 0,u)\mapsto(\vec 0,u)\mapsto\vec 0\otimes u=\vec 0\mapsto\vec 0\mapsto\vec 0$

$\ast\mapsto\vec 0$

  • ( ${\mathsf{dist^0_\ell}}$ ) If u has type $\Psi$ , $\Uparrow_\ell (u\times{\vec 0_{S{\Phi}}})\longrightarrow{\vec 0_{S{(\Psi\times\Phi)}}} $ . Analogous to case $({\mathsf{dist^0_r}})$ .

  • ( ${\mathsf{dist^+_\Uparrow}}$ ) $\Uparrow (t+u)\longrightarrow (\Uparrow t+\Uparrow u)$ . We only give the details for $\Uparrow_r$; the case $\Uparrow_\ell$ is analogous.

Then

The mappings are as follows. For $i=1,\dots,m$ , let $a_i=\sum_{k_i}\gamma_{ik_i}.a_{ik_i}$ , $t=\sum_{i=1}^n\beta_i.(a_i,b_i)$ and $u=\sum_{i=n+1}^m\beta_i.(a_i,b_i)$ . To avoid a more cumbersome notation, we only consider the case where $\Psi$ and $\Phi$ do not have an S in head position, and we omit the steps not modifying the argument.

\begin{align*} (t,u)&\mapsto t+u\mapsto\sum_{i=1}^m\beta_i. a_i\otimes b_i= \sum_{i=1}^m\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i}\otimes b_i)\\ &\mapsto\sum_{i=1}^m\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i}, b_i)\mapsto\sum_{i=1}^m\sum_{k_i}\beta_i.\gamma_{ik_i}.(a_{ik_i}, b_i) \\ \\ (t,u) &\mapsto\left(\sum_{i=1}^n\beta_i.a_i\otimes b_i,\sum_{i=n+1}^m\beta_i.a_i\otimes b_i\right)\\ &\mapsto\left(\sum_{i=1}^n\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i},b_i),\sum_{i=n+1}^m\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i},b_i)\right)\\ &\mapsto\left(\sum_{i=1}^n\sum_{k_i}\beta_i.\gamma_{ik_i}.(a_{ik_i},b_i),\sum_{i=n+1}^m\sum_{k_i}\beta_i.\gamma_{ik_i}.(a_{ik_i},b_i)\right)\\ &\mapsto\sum_{i=1}^m\sum_{k_i}\beta_i.\gamma_{ik_i}.(a_{ik_i},b_i) \end{align*}
  • $\Uparrow (\alpha.t)\longrightarrow \alpha.\Uparrow t$ . We only give the details for $\Uparrow_r$; the case $\Uparrow_\ell$ is similar.

Then

The mappings are as follows. For $i=1,\dots,m$ , let $a_i=\sum_{k_i}\gamma_{ik_i}.a_{ik_i}$ and $t=\sum_i\beta_i.(a_i,b_i)$ . To avoid a more cumbersome notation, we only consider the case where $\Psi$ and $\Phi$ do not have an S in head position, and we omit the steps not modifying the argument.

\begin{align*} t &\mapsto t\otimes 1\mapsto t\otimes\alpha=(\sum_i\beta_i.(a_i,b_i))\otimes\alpha=\sum_i\alpha\beta_i.(a_i,b_i)\mapsto\sum_i\alpha\beta_i.(a_i\otimes b_i)\\ &=\sum_i\alpha\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i}\otimes b_i)\mapsto\sum_i\alpha\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i},b_i)\mapsto\sum_i\sum_{k_i}\alpha\beta_i\gamma_{ik_i}.(a_{ik_i},b_i)\\ \\ t &\mapsto\sum_i\beta_i.(a_i\otimes b_i)=\sum_i\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i}\otimes b_i)\mapsto\sum_i\beta_i.\sum_{k_i}\gamma_{ik_i}.(a_{ik_i},b_i)\\ &\mapsto\sum_i\sum_{k_i}\beta_i\gamma_{ik_i}.(a_{ik_i},b_i)\mapsto(\sum_i\sum_{k_i}\beta_i\gamma_{ik_i}.(a_{ik_i},b_i))\otimes 1\mapsto(\sum_i\sum_{k_i}\beta_i\gamma_{ik_i}.(a_{ik_i},b_i))\otimes \alpha \\ &=\alpha.\sum_i\sum_{k_i}\beta_i\gamma_{ik_i}.(a_{ik_i},b_i)\mapsto\sum_i\sum_{k_i}\alpha\beta_i\gamma_{ik_i}.(a_{ik_i},b_i) \end{align*}
  • ( ${\mathsf{dist^0_{\Uparrow_r}}}$ ) $\Uparrow_r{\vec 0_{S{(S(S\Psi)\times\Phi)}}} \longrightarrow\Uparrow_r{\vec 0_{S{(S\Psi\times\Phi)}}} $ . We have

Then

Both mappings start with $\ast\mapsto\vec 0$ , and then continue mapping, by linearity, to $\vec 0$ .

  • ( ${\mathsf{dist^0_{\Uparrow_\ell}}}$ ) $\Uparrow_\ell{\vec 0_{S{(\Phi\times SS\Psi)}}} \longrightarrow\Uparrow_\ell{\vec 0_{S{(\Phi\times S\Psi)}}} $ . Analogous to case $({\mathsf{dist^0_{\Uparrow_r}}})$ .

  • ( ${\mathsf{neut^\Uparrow_{0r}}}$ ) $\Uparrow_r{\vec 0_{S{(S{{\mathbb B}}^n \times\Phi)}}} \longrightarrow{\vec 0_{S{({{\mathbb B}}^n\times\Phi)}}}$ . We have

Then

Both mappings start with $\ast\mapsto\vec 0$ , and then continue mapping, by linearity, to $\vec 0$ .

  • ( ${\mathsf{neut^\Uparrow_{0\ell}}}$ ) $\Uparrow_\ell{\vec 0_{S{(\Phi\times S{{\mathbb B}}^n)}}} \longrightarrow{\vec 0_{S{(\Phi\times{{\mathbb B}}^n)}}}$ . Analogous to case ( ${\mathsf{neut^\Uparrow_{0r}}}$ ).

  • ( ${\mathsf{neut^\Uparrow_r}}$ ) If $u\in{\mathsf B}$ , $\Uparrow_r u\times v\longrightarrow u\times v$ . We have

Then

Both mappings are the identity, so we omit them. Notice that even if v is a linear combination, the $\eta$ on $\Phi$ will freeze its linearity by considering it as a basis vector in a new vector space $US\Phi$ having $\Phi$ as base.

  • ( ${\mathsf{neut^\Uparrow_\ell}}$ ) If $v\in{\mathsf B}$ , $\Uparrow_\ell u\times v\longrightarrow u\times v$ . Analogous to case $({\mathsf{neut^\Uparrow_r}})$ .

  • ( ${\mathsf{proj}}$ ),

where

We have

and

The following diagram, where

commutes.

Indeed

  • ( ${\mathsf{proj_{\vec 0}}}$ ). We have

Then

The mappings are as follows:

  • Contextual rules. Trivial by the composition law.
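Before turning to the lemma, the vector-space rules treated above can be sanity-checked computationally. The sketch below is ours, not part of Lambda- ${\mathcal S}$ or its model: it encodes formal linear combinations as Python dictionaries from basis terms to coefficients (all names are ad hoc), so that $({\mathsf{zero}})$ , $({\mathsf{prod}})$ , $({\mathsf{\alpha dist}})$ , $({\mathsf{fact}})$ and $({\mathsf{fact^2}})$ become identities of such dictionaries, and the cast $\Uparrow_r$ becomes the flattening of a combination of combinations, as in case $({\mathsf{dist^+_\Uparrow}})$ .

```python
from collections import defaultdict

def scal(alpha, t):
    """Scalar action on a formal combination {basis term: coefficient};
    dropping zero coefficients realises (zero): alpha.0 -> 0."""
    return {b: alpha * c for b, c in t.items() if alpha * c != 0}

def add(t, u):
    """Sum of combinations, collecting like terms: this is what
    validates (fact), (fact^1) and (fact^2)."""
    out = defaultdict(complex, t)
    for b, c in u.items():
        out[b] += c
    return {b: c for b, c in out.items() if c != 0}

def up_r(t):
    """(dist^+_Up)-style flattening: t is a list of triples (beta, a, b)
    standing for sum_i beta_i.(a_i x b_i), where each a_i is itself a
    combination sum_k gamma_ik.a_ik; the result is the flat combination
    sum_{i,k} (beta_i gamma_ik).(a_ik x b_i)."""
    out = defaultdict(complex)
    for beta, a, b in t:
        for a_basis, gamma in a.items():
            out[(a_basis, b)] += beta * gamma
    return {k: c for k, c in out.items() if c != 0}

t1, t2 = {'t': 1}, {'u': 1}
assert scal(2, scal(3, t1)) == scal(6, t1)                    # (prod)
assert scal(2, add(t1, t2)) == add(scal(2, t1), scal(2, t2))  # (alpha dist)
assert add(scal(2, t1), scal(3, t1)) == scal(5, t1)           # (fact)
assert add(t1, t1) == scal(2, t1)                             # (fact^2)
assert scal(2, {}) == {}                                      # (zero)

t = [(2, {'x': 1, 'y': 3}, 'b1')]   # 2.((1.x + 3.y) x b1)
u = [(1, {'x': 5}, 'b2')]           # 1.(5.x x b2)
assert up_r(t + u) == add(up_r(t), up_r(u))  # the two mapping chains agree
```

This is only a consistency check on the algebraic identities; it does not model the categorical structure (the functors S and U, or the adjunction) itself.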

Lemma 3.24. If $\vdash t:A$ then $t\in\mathfrak{C}_{A}$ .

Proof. We prove, more generally, that if $\Gamma\vdash t:A$ and $\sigma\vDash\Gamma$ , then $\sigma t\in\mathfrak{C}_{A}$ . We proceed by structural induction on the derivation of $\Gamma\vdash t:A$ . In order to avoid cumbersome notation, we do not take the closure by parallelism into account, except when needed. The extension of this proof to such a closure is straightforward.

  • Let $\Gamma^{{\mathbb B}},x:\Psi\vdash x:\Psi$ as a consequence of rule ${\mathsf{Ax}}$ . Since $\sigma\vDash\Gamma^{{\mathbb B}},x:\Psi$ , we have $\sigma x\in\mathfrak{C}_{\Psi}$ .

  • Rules ${\mathsf{Ax}}_{\vec 0}$ and the remaining axioms are trivial since, by definition, ${\vec 0_{S{A}}}\in\mathfrak{C}_{SA}$ .

  • Let $\Gamma\vdash \alpha.t:SA$ as a consequence of $\Gamma\vdash t:SA$ and rule $\alpha_I$ . By the IH, $\sigma t\in\mathfrak{C}_{SA}$ , hence, by definition, $\alpha.\sigma t=\sigma(\alpha.t)\in\mathfrak{C}_{SA}$ .

  • Let ${\Gamma,\Delta,\Xi^{{\mathbb B}}\vdash({t}+{u}):SA}$ as a consequence of $\Gamma,\Xi^{{\mathbb B}}\vdash t:SA$ , $\Delta,\Xi^{{\mathbb B}}\vdash u:SA$ , and rule $+_I$ . By the IH, $\sigma_1\sigma t,\sigma_2\sigma u\in\mathfrak{C}_{SA}$ , hence, by definition, $\sigma_1\sigma t+\sigma_2\sigma u=\sigma_1\sigma_2\sigma(t+u)\in\mathfrak{C}_{SA}$ .

  • Let $\Gamma\vdash\pi_j t:{{\mathbb B}}^j\times S{{\mathbb B}}^{n-j}$ as a consequence of $\Gamma\vdash t:S{{\mathbb B}}^n$ and rule $S_E$ . By the IH, $\sigma t\in\mathfrak{C}_{S{{\mathbb B}}^n }=\overline{S\mathfrak{C}_{{{\mathbb B}}^n}\cup\{{\vec 0_{S{{{\mathbb B}}^n}}} \}}$ . Then, $\sigma t\longrightarrow^*\sum_i\alpha_i b_i$ with $b_i\in\mathfrak{C}_{{{\mathbb B}}^n}$ , so $b_i=b_{i1}\times\dots\times b_{in}$ , with $b_{ij}=0$ or $b_{ij}=1$ . Therefore, $\sigma(\pi_j t)=\pi_j\sigma t\in\mathfrak{C}_{{{\mathbb B}}^j\times S{{\mathbb B}}^{n-j}}$ .

  • Let $\Gamma\vdash{}?{t}\mathord{\cdot}{r}:{{\mathbb B}}\Rightarrow A$ as a consequence of $\Gamma\vdash t:A$ , $\Gamma\vdash r:A$ and rule ${\mathsf{If}}$ . By the induction hypothesis, $\sigma t\in\mathfrak{C}_{A}$ and $\sigma r\in\mathfrak{C}_{A}$ . Hence, for any $s\in\mathfrak{C}_{{{\mathbb B}}}$ , ${s}?{\sigma t}\mathord{\cdot}{\sigma r}$ reduces either to $\sigma t$ or to $\sigma r$ , hence it is in $\mathfrak{C}_{A}$ , therefore, ${}?{\sigma t}\mathord{\cdot}{\sigma r}\in\mathfrak{C}_{{{\mathbb B}}\Rightarrow A}$ .

  • Let $\Gamma\vdash\lambda x{:}\Psi.t:\Psi\Rightarrow A$ as a consequence of $\Gamma,x:\Psi\vdash t:A$ and rule $\Rightarrow_I$ . Let $r\in\mathfrak{C}_{\Psi}$ . Then, $\sigma(\lambda x{:}\Psi.t)r=(\lambda x{:}\Psi.\sigma t)r\rightarrow (r/x)\sigma t$ . Since $(r/x)\sigma\vDash\Gamma,x{:}\Psi$ , we have, by the IH, that $(r/x)\sigma t\in\mathfrak{C}_{A}$ . Therefore, $\sigma(\lambda x{:}\Psi.t)\in\mathfrak{C}_{\Psi\Rightarrow A}$ .

  • Let $\Delta,\Gamma,\Xi^{{\mathbb B}}\vdash tu:A$ as a consequence of $\Delta,\Xi^{{\mathbb B}}\vdash u:\Psi$ , $\Gamma,\Xi^{{\mathbb B}}\vdash t:\Psi\Rightarrow A$ and rule $\Rightarrow_E$ . By the IH, $\sigma_1\sigma u\in\mathfrak{C}_{\Psi}$ and $\sigma_2\sigma t\in\mathfrak{C}_{\Psi\Rightarrow A}$ . Then, by definition, $\sigma_2\sigma t\,\sigma_1\sigma u=\sigma_1\sigma_2\sigma(tu)\in\mathfrak{C}_{A}$ .

  • Let $\Delta,\Gamma,\Xi^{{\mathbb B}}\vdash tu:SA$ as a consequence of $\Delta,\Xi^{{\mathbb B}}\vdash u:S\Psi$ , $\Gamma,\Xi^{{\mathbb B}}\vdash t:S(\Psi\Rightarrow A)$ and rule $\Rightarrow_{ES}$ . By the IH $\sigma_1\sigma t\in\mathfrak{C}_{S(\Psi\Rightarrow A)}=\overline{S\mathfrak{C}_{\Psi\Rightarrow A}\cup\{{\vec 0_{S{(\Psi\Rightarrow A)}}} \}}$ and $\sigma_2\sigma u\in\mathfrak{C}_{S\Psi}=\overline{S\mathfrak{C}_{\Psi}\cup\{{\vec 0_{S{\Psi}}} \}}$ . Cases:

    • * $\sigma_1\sigma t\longrightarrow^*{\vec 0_{S{(\Psi\Rightarrow A)}}}$ and $\sigma_2\sigma u\rightarrow{\vec 0_{S{\Psi}}} $ . Then $\sigma_1\sigma_2\sigma (tu)=\sigma_1\sigma t\sigma_2\sigma u\longrightarrow^*{\vec 0_{S{(\Psi\Rightarrow A)}}} {\vec 0_{S{\Psi}}} \rightarrow{\vec 0_{S{A}}} \in\mathfrak{C}_{SA}$ .

    • * $\sigma_1\sigma t\longrightarrow^*{\vec 0_{S{(\Psi\Rightarrow A)}}} $ and $\sigma_2\sigma u\rightarrow\sum_j\beta_ju_j$ , with $u_j\in\mathfrak{C}_{\Psi}$ . Then $\sigma_1\sigma_2\sigma (tu)=\sigma_1\sigma t\sigma_2\sigma u\longrightarrow^*{\vec 0_{S{(\Psi\Rightarrow A)}}} \sum_j\beta_ju_j\rightarrow{\vec 0_{S{A}}} \in\mathfrak{C}_{SA}$ .

    • * $\sigma_1\sigma t\longrightarrow^*\sum_i\alpha_it_i$ with $t_i\in\mathfrak{C}_{\Psi\Rightarrow A}$ and $\sigma_2\sigma u\rightarrow{\vec 0_{S{\Psi}}} $ . Then $\sigma_1\sigma_2\sigma (tu)=\sigma_1\sigma t\sigma_2\sigma u\longrightarrow^*\sum_i\alpha_i(t_i{\vec 0_{S{\Psi}}} )\longrightarrow^*{\vec 0_{S{A}}} \in\mathfrak{C}_{SA}$ .

    • * $\sigma_1\sigma t\longrightarrow^*\sum_i\alpha_it_i$ with $t_i\in\mathfrak{C}_{\Psi\Rightarrow A}$ and $\sigma_2\sigma u\rightarrow\sum_j\beta_ju_j$ , with $u_j\in\mathfrak{C}_{\Psi}$ . Then $\sigma_1\sigma_2\sigma (tu)=\sigma_1\sigma t\sigma_2\sigma u\longrightarrow^*\sum_{ij}\alpha_i\beta_jt_iu_j$ with $t_iu_j\in\mathfrak{C}_{A}$ , therefore, $\sigma_1\sigma_2\sigma (tu)\in\mathfrak{C}_{SA}$ .

  • Let $\Gamma\vdash t:SA$ as a consequence of $\Gamma\vdash t:A$ and rule $S_I$ . By the IH, $\sigma t\in\mathfrak{C}_{A}\subseteq S\mathfrak{C}_{A}\subseteq\mathfrak{C}_{SA}$ .

  • Let $\Gamma,\Delta,\Xi^{{\mathbb B}}\vdash t\times u:\Psi\times\Phi$ as a consequence of $\Gamma,\Xi^{{\mathbb B}}\vdash t:\Psi$ , $\Delta,\Xi^{{\mathbb B}}\vdash u:\Phi$ and rule $\times_I$ . By the IH, $\sigma_1\sigma t\in\mathfrak{C}_{\Psi}$ and $\sigma_2\sigma u\in\mathfrak{C}_{\Phi}$ , hence, $\sigma_1\sigma t\times\sigma_2\sigma u=\sigma_1\sigma_2\sigma (t\times u)\in\mathfrak{C}_{\Psi}\times\mathfrak{C}_{\Phi}\subseteq\mathfrak{C}_{\Psi\times\Phi}$ .

  • Let $\Gamma\vdash \textit{head}~t:{{\mathbb B}}$ as a consequence of $\Gamma\vdash t:{{\mathbb B}}^n$ and rule $\times_{Er}$ . By the IH, $\sigma t\in\mathfrak{C}_{{{\mathbb B}}^n}=\overline{\mathfrak{C}_{{{\mathbb B}}}\times\mathfrak{C}_{{{\mathbb B}}^{n-1}}}=\{u\mid u\longrightarrow^*u_1\times u_2\textrm{ with }u_1\in\mathfrak{C}_{{{\mathbb B}}}\textrm{ and }u_2\in\mathfrak{C}_{{{\mathbb B}}^{n-1}}\}$ . Hence, $\sigma(\textit{head}\ t)=\textit{head}\ \sigma t\longrightarrow^*\textit{head}(u_1\times u_2)\rightarrow u_1\in\mathfrak{C}_{{{\mathbb B}}}$ .

  • Let $\Gamma\vdash \textit{tail}~t:{{\mathbb B}}^{n-1}$ as a consequence of $\Gamma\vdash t:{{\mathbb B}}^n$ and rule $\times_{E\ell}$ . By the IH, $\sigma t\in\mathfrak{C}_{{{\mathbb B}}^n}=\overline{\mathfrak{C}_{{{\mathbb B}}}\times\mathfrak{C}_{{{\mathbb B}}^{n-1}}}=\{u\mid u\longrightarrow^*u_1\times u_2\textrm{ with }u_1\in\mathfrak{C}_{{{\mathbb B}}}\textrm{ and }u_2\in\mathfrak{C}_{{{\mathbb B}}^{n-1}}\}$ . Hence, $\sigma(\textit{tail}\ t)=\textit{tail}\ \sigma t\longrightarrow^*\textit{tail}(u_1\times u_2)\rightarrow u_2\in\mathfrak{C}_{{{\mathbb B}}^{n-1}}$ .

  • Let $\Gamma\vdash \Uparrow_rt:S(\Psi\times \Phi)$ as a consequence of $\Gamma\vdash t:S(S\Psi\times \Phi)$ and rule $\Uparrow_r$ . By the IH, we have that $\sigma t\in\mathfrak{C}_{S(S\Psi\times\Phi)}$ . Therefore, $\sigma t\in\overline{S\mathfrak{C}_{S\Psi\times\Phi}\cup\{{\vec 0_{S{(S\Psi\times\Phi)}}} \}}$ . Cases:

    • * $\sigma t\longrightarrow^*{\vec 0_{S{(S\Psi\times\Phi)}}} $ , then $\sigma \Uparrow_r t=\Uparrow_r\sigma t\longrightarrow^*\Uparrow_r{\vec 0_{S{(S\Psi\times\Phi)}}} \longrightarrow{\vec 0_{S{(\Psi\times\Phi)}}} \in\mathfrak{C}_{S(\Psi\times\Phi)}$ .

    • * Otherwise, $\sigma t\in S\mathfrak{C}_{S\Psi\times\Phi}$ , so $\sigma t\longrightarrow^*\sum_i\alpha_i (r_i\times u_i)$ with $u_i\in\mathfrak{C}_{\Phi}$ and $r_i\in\overline{S\mathfrak{C}_{\Psi}\cup\{{\vec 0_{S{\Psi}}} \}}$ . Cases: If $r_i\longrightarrow^*{\vec 0_{S{\Psi}}} $ , then $\Uparrow_r (r_i\times u_i)\longrightarrow^* \Uparrow_r({\vec 0_{S{\Psi}}} \times u_i)\longrightarrow {\vec 0_{S{(\Psi\times\Phi)}}} \in\mathfrak{C}_{S(\Psi\times\Phi)}$ . If $r_i\longrightarrow^*\sum_{j=1}^{n_i}\beta_{ij}r_{ij}$ , with $r_{ij}\in\mathfrak{C}_{\Psi}$ , then $\Uparrow_r (r_i\times u_i)\longrightarrow^*\sum_{j=1}^{n_i}\beta_{ij}\Uparrow_r(r_{ij}\times u_i)\in\mathfrak{C}_{S(\Psi\times\Phi)}$ . Hence, if all the $r_i$ reduce to ${\vec 0_{S{\Psi}}} $ , then $\sigma\Uparrow_rt=\Uparrow_r\sigma t\longrightarrow^*{\vec 0_{S{(\Psi\times\Phi)}}} $ . Otherwise, let I be the set of indices of the $r_i$ not reducing to ${\vec 0_{S{\Psi}}} $ ; therefore, $\sigma\Uparrow_rt=\Uparrow_r\sigma t\longrightarrow^*\Uparrow_r\sum_i\alpha_i(r_i\times u_i)\longrightarrow\sum_i\alpha_i\Uparrow_r(r_i\times u_i)\longrightarrow^*\sum_{i\in I}\alpha_i\sum_{j=1}^{n_i}\beta_{ij}\Uparrow_r(r_{ij}\times u_i)\longrightarrow^*\sum_{i\in I}\sum_{j=1}^{n_i}\alpha_i\beta_{ij}\Uparrow_r(r_{ij}\times u_i)\in\mathfrak{C}_{S(\Psi\times\Phi)}$ .

  • Let $\Gamma\vdash \Uparrow_\ell t:S(\Psi\times \Phi)$ as a consequence of $\Gamma\vdash t:S(\Psi\times S\Phi)$ and rule $\Uparrow_\ell$ . Analogous to previous case.

  • Let $\Gamma\vdash\mathtt{\{p_1\}}t_1 \parallel\dots\parallel \mathtt{\{p_n\}}t_n :A$ as a consequence of $\Gamma\vdash t_i:A$ and rule $\parallel$ . By the IH each $\sigma t_i\in\mathfrak{C}_{A}$ , hence, by definition $\sigma(\mathtt{\{p_1\}}t_1\parallel\dots\parallel\mathtt{\{p_n\}}t_n)=\mathtt{\{p_1\}}\sigma t_1\parallel\dots\parallel\mathtt{\{p_n\}}\sigma t_n\in\mathfrak{C}_{A}$ .
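To give concrete intuition for the measurement cases (rule $S_E$ above and the rules $({\mathsf{proj}})$ and $({\mathsf{proj_{\vec 0}}})$ ), the projection $\pi_j$ can be sketched numerically. The encoding below, states as dictionaries from n-bit strings to amplitudes and the function name project, is ours and purely illustrative: measuring the first j bits of a superposition yields, for each outcome prefix in ${{\mathbb B}}^j$ , a probability together with a renormalised residual state, matching the result type ${{\mathbb B}}^j\times S{{\mathbb B}}^{n-j}$ and the probabilistic branches combined by $\parallel$ .

```python
import math
from collections import defaultdict

def project(state, j):
    """Measure the first j bits of sum_i alpha_i.|b_i> (b_i an n-bit string).
    Returns {outcome prefix: (probability, renormalised residual state)}."""
    groups = defaultdict(dict)
    for bits, amp in state.items():
        tail = groups[bits[:j]]
        tail[bits[j:]] = tail.get(bits[j:], 0) + amp
    result = {}
    for prefix, tail in groups.items():
        p = sum(abs(a) ** 2 for a in tail.values())   # Born rule
        result[prefix] = (p, {b: a / math.sqrt(p) for b, a in tail.items()})
    return result

# (1/sqrt 2).(|00> + |11>): measuring the first qubit gives 0 or 1,
# each with probability 1/2, and the residual state collapses accordingly.
a = 1 / math.sqrt(2)
out = project({'00': a, '11': a}, 1)
assert abs(out['0'][0] - 0.5) < 1e-9 and abs(out['1'][0] - 0.5) < 1e-9
assert abs(out['0'][1]['0'] - 1.0) < 1e-9   # residual state is |0>
assert abs(out['1'][1]['1'] - 1.0) < 1e-9   # residual state is |1>
```

The sketch computes the whole outcome distribution at once, which corresponds to the $\parallel$ -combination of weighted branches produced by $({\mathsf{proj}})$ rather than to a single probabilistic run.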

Footnotes

Partially funded by PICT projects 2019-1272 and 2021-I-A-00090, PIP project 1220200100368CO, and CSIC-UdelaR project 22520220100073UD.

This paper is the long journal version of Díaz-Caro and Malherbe (2019). The main new result in the present paper is a revision of some rewrite rules that allows us to prove an adequacy theorem.

1 Where m and n are the mediating arrows given by the monoidality of the adjunction.

2 As a remark, notice that $\parallel$ can be seen as the $+$ symbol of the algebraic lambda calculus (Vaux, 2009), where the equality is confluent since scalars are positive, while the $+$ symbol in Lambda- ${\mathcal S}$ coincides with the $+$ from Lineal (Arrighi and Dowek, 2017) (see Assaf et al. (2014) for a more detailed discussion on different presentations of algebraic lambda calculi).

3 The subtlety about ${\vec 0_{S{A}}}$ explained for Figure 7 has led us to add some extra rules to Lambda- ${\mathcal S}$ , with respect to its original presentation, in Figure 8. Those are $({\mathsf{dist^0_{\Uparrow_r}}})$ , $({\mathsf{dist^0_{\Uparrow_\ell}}})$ , $({\mathsf{neut^\Uparrow_{0r}}})$ , and $({\mathsf{neut^\Uparrow_{0\ell}}})$ .

4 Although “concrete categorical” may seem paradoxical, since a model can be either concrete or categorical, we chose these terms to stress the fact that we use a categorical presentation of this concrete model.

References

Abramsky, S. (1993). Computational interpretations of linear logic. Theoretical Computer Science 111 (1) 3–57.
Abramsky, S. and Coecke, B. (2004). A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science (LICS). IEEE, 415–425.
Altenkirch, T. and Grattage, J. (2005). A functional quantum programming language. In Proceedings of the 20th Annual IEEE Symposium on Logic in Computer Science (LICS). IEEE, 249–258.
Arrighi, P. and Díaz-Caro, A. (2012). A System F accounting for scalars. Logical Methods in Computer Science 8 (1) 11.
Arrighi, P., Díaz-Caro, A. and Valiron, B. (2017). The vectorial lambda-calculus. Information and Computation 254 (1) 105–139.
Arrighi, P. and Dowek, G. (2017). Lineal: A linear-algebraic lambda-calculus. Logical Methods in Computer Science 13 (1) 8.
Assaf, A., Díaz-Caro, A., Perdrix, S., Tasson, C. and Valiron, B. (2014). Call-by-value, call-by-name and the vectorial behaviour of the algebraic λ-calculus. Logical Methods in Computer Science 10 (4) 8.
Benton, N. (1994). A mixed linear and non-linear logic: Proofs, terms and models. In Pacholski, L. and Tiuryn, J. (eds.) Computer Science Logic (CSL 1994). Lecture Notes in Computer Science, vol. 933. Berlin, Heidelberg: Springer, 121–135.
Díaz-Caro, A. and Dowek, G. (2017). Typing quantum superpositions and measurement. In Theory and Practice of Natural Computing (TPNC 2017). Lecture Notes in Computer Science, vol. 10687. Cham: Springer, 281–293.
Díaz-Caro, A., Dowek, G. and Rinaldi, J. (2019a). Two linearities for quantum computing in the lambda calculus. BioSystems 186 104012. Postproceedings of TPNC 2017.
Díaz-Caro, A., Guillermo, M., Miquel, A. and Valiron, B. (2019b). Realizability in the unitary sphere. In Proceedings of the 34th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2019), 1–13.
Díaz-Caro, A. and Malherbe, O. (2019). A concrete categorical semantics for Lambda-S. In Accattoli, B. and Olarte, C. (eds.) Proceedings of the 13th Workshop on Logical and Semantic Frameworks with Applications (LSFA’18). Electronic Notes in Theoretical Computer Science, vol. 344. Elsevier, 83–100.
Díaz-Caro, A. and Malherbe, O. (2020). A categorical construction for the computational definition of vector spaces. Applied Categorical Structures 28 (5) 807–844.
Díaz-Caro, A. and Martínez, G. (2018). Confluence in probabilistic rewriting. In Alves, S. and Wassermann, R. (eds.) Proceedings of the 12th International Workshop on Logical and Semantic Frameworks with Applications (LSFA 2017). Electronic Notes in Theoretical Computer Science, vol. 338. Elsevier, 115–131.
Díaz-Caro, A. and Petit, B. (2012). Linearity in the non-deterministic call-by-value setting. In Ong, L. and de Queiroz, R. (eds.) Logic, Language, Information and Computation. Lecture Notes in Computer Science, vol. 7456. Berlin, Heidelberg: Springer, 216–231.
Girard, J.-Y. (1987). Linear logic. Theoretical Computer Science 50 1–102.
Giry, M. (1982). A categorical approach to probability theory. In Categorical Aspects of Topology and Analysis. Lecture Notes in Mathematics, vol. 915. Springer, 68–85.
Green, A. S., Lumsdaine, P. L., Ross, N. J., Selinger, P. and Valiron, B. (2013). Quipper: A scalable quantum programming language. ACM SIGPLAN Notices (PLDI’13) 48 (6) 333–342.
Mac Lane, S. (1998). Categories for the Working Mathematician, 2nd edn. Springer.
Moggi, E. (1988). Computational lambda-calculus and monads. Technical Report ECS-LFCS-88-66, Laboratory for Foundations of Computer Science, University of Edinburgh.
Pagani, M., Selinger, P. and Valiron, B. (2014). Applying quantitative semantics to higher-order quantum computing. ACM SIGPLAN Notices (POPL’14) 49 (1) 647–658.
Selinger, P. (2007). Dagger compact closed categories and completely positive maps. In 3rd International Workshop on Quantum Programming Languages (QPL 2005). Electronic Notes in Theoretical Computer Science, vol. 170, 139–163.
Selinger, P. and Valiron, B. (2006). A lambda calculus for quantum computation with classical control. Mathematical Structures in Computer Science 16 (3) 527–552.
Vaux, L. (2009). The algebraic lambda calculus. Mathematical Structures in Computer Science 19 1029–1059.
Zorzi, M. (2016). On quantum lambda calculi: a foundational perspective. Mathematical Structures in Computer Science 26 (7) 1107–1195.
Figure 1. Syntax of types and terms of Lambda-${\mathcal S}$.

Figure 2. Typing relation.

Figure 3. Beta rules.

Figure 4. Linear distribution rules.

Figure 5. Rules of the conditional construction.

Figure 6. Rules for lists.

Figure 7. Rules implementing the vector space axioms.

Figure 8. Rules for castings $\Uparrow_r$ and $\Uparrow_\ell$.

Figure 9. Rules for the projection.

Figure 10. Contextual rules (notice that, in particular, there is no reduction under lambda).