
Generalized Knill–Laflamme theorem for families of isoclinic subspaces

Published online by Cambridge University Press:  17 March 2025

David W. Kribs*
Affiliation:
Department of Mathematics & Statistics, University of Guelph, Guelph, ON N1G 2W1, Canada e-mail: [email protected] [email protected]
Rajesh Pereira
Affiliation:
Department of Mathematics & Statistics, University of Guelph, Guelph, ON N1G 2W1, Canada e-mail: [email protected] [email protected]
Mukesh Taank
Affiliation:
Department of Mathematics & Statistics, University of Guelph, Guelph, ON N1G 2W1, Canada e-mail: [email protected] [email protected]

Abstract

Isoclinic subspaces have been studied for over a century. Quantum error correcting codes were recently shown to define a subclass of families of isoclinic subspaces. The Knill–Laflamme theorem is a seminal result in the theory of quantum error correction, a central topic in quantum information. We show there is a generalized version of the Knill–Laflamme result and conditions that applies to all families of isoclinic subspaces. In the case of quantum stabilizer codes, the expanded conditions are shown to capture logical operators. We apply the general conditions to give a new perspective on a classical subclass of isoclinic subspaces defined by the graphs of anti-commuting unitary operators. We show how the result applies to recently studied mutually unbiased quantum measurements (MUMs), and we give a new construction of such measurements motivated by the approach.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Canadian Mathematical Society

1 Introduction

The study of families of isoclinic subspaces naturally grew from the introduction of canonical angles between subspaces in Euclidean geometry, as initiated by Jordan [Reference Jordan21] a century and a half ago, which in turn was built on earlier work of Hamilton [Reference Hamilton17]. Subsequently, the notion has arisen and been studied in a variety of settings in the context of matrix theory and operator theory; as a selection of examples, we note the (relatively) more recent structural results and formulations of equivalent conditions and applications found in [Reference Balla and Sudakov1, Reference Björck and Golub6, Reference Garling14, Reference Hoggar19, Reference Wong39, Reference Wong41, Reference Zhang44] (see also references therein and forward references).

On the other hand we have the subject of quantum error correction, which has much more recent origins going back to the early days of modern quantum information theory in the 1990s [Reference Bennett, DiVincenzo, Smolin and Wootters2, Reference Gottesman16, Reference Knill and Laflamme23, Reference Knill, Laflamme and Viola24, Reference Kribs26, Reference Nielsen and Chuang34–Reference Steane36]. With original (and continuing) motivations coming from the goal to build quantum computers, quantum error correction seems to now touch on all areas of quantum information more generally. The Knill–Laflamme theorem [Reference Knill and Laflamme23] is a seminal result in the theory of quantum error correction, and is arguably one of the most important results in all of quantum information theory. It built on important early examples of quantum error correcting codes and played a significant role in giving the subject a firm mathematical and theoretical foundation from which to grow, as a forward reference search on [Reference Knill and Laflamme23] will confirm. The theorem gives explicit algebraic conditions to test if a quantum code is correctable for a given set of error operators. It has had many applications in the subject of quantum error correction and beyond; we mention Gottesman’s stabilizer formalism [Reference Gottesman16], which shows how to construct codes for Pauli error models, as an important such instance. In the context of this discussion, we also note it was recently observed [Reference Kribs, Mammarella and Pereira28] that quantum error correcting codes define a subclass of isoclinic subspace families.

It is natural to ask, therefore, whether there is a generalization of the Knill–Laflamme conditions and theorem that describes all families of isoclinic subspaces. In this article, we present a positive answer to this question. Specifically, we identify and establish the existence of generalized Knill–Laflamme conditions for families of isoclinic subspaces, which in operator form read as follows:

$$ \begin{align*} P A_i^* A_j P = \lambda_{ij} U_{ij} P = \lambda_{ij} P U_{ij} , \end{align*} $$

where $\{A_i\}_i$ are operators on a Hilbert space, $A_i^*$ are the (Hilbert space) adjoints, P is an orthogonal projection on the space, $\lambda _{ij}$ are complex scalars, and $U_{ij}$ are unitary operators that commute with the projection (or equivalently, partial isometries with P as their initial and final projections).

The Knill–Laflamme conditions are captured in the special case in which each unitary $U_{ij}$ is the identity operator. We discuss that subclass, giving a brief review and an explanation of how the broadened conditions capture logical operators for stabilizer codes. We revisit a classical family of isoclinic subspaces that are built from graphs of anti-commuting unitary operators [Reference Wong39–Reference Wong and Mok42], finding a new perspective motivated by the generalized conditions. We show how the result applies to recently studied mutually unbiased quantum measurements (MUMs) [Reference Farkas, Kaniewski and Nayak13, Reference Tavakoli, Farkas, Rosset, Bancal and Kaniewski37], where we apply the conditions to find an alternate proof of their canonical forms, and we give a new construction of such measurements motivated by the approach.

This article is organized as follows. Section 2 includes the basic details of isoclinic subspaces and an important equivalent condition we will make use of. Section 3 includes the derivation of the generalized Knill–Laflamme conditions. In Section 4, we consider the three subclasses of isoclinic subspace families noted above in light of these results. We finish in Section 5 with some concluding remarks.

2 Preliminaries

The origins of investigations into families of isoclinic subspaces go back at least a century and a half, where we find the classical notion of canonical angles between pairs of subspaces (sometimes referred to as principal angles) as formulated by Jordan [Reference Jordan21].

Suppose that $\mathcal V$ and $\mathcal W$ are finite dimensional subspaces of a Hilbert space $\mathcal {H}$ and let $k = \min \{\dim (\mathcal V), \dim (\mathcal W)\}.$ Then the canonical angles $\{ \theta _1, \ldots , \theta _k \}$ between $\mathcal V$ and $\mathcal W$ are defined as follows: the first canonical angle is the unique number $\theta _1 \in [0,\frac {\pi }{2}]$ such that

$$\begin{align*}\cos(\theta_1) = \max \{\lvert \langle x, y \rangle \rvert : x \in\mathcal V, y\in\mathcal W, \lVert x \rVert = \lVert y \rVert = 1\}. \end{align*}$$

Let $x_1$ and $y_1$ be unit vectors in $\mathcal V$ and $\mathcal W$ for which the previous maximum is attained. Then we define the second canonical angle as the unique number $\theta _2 \in [0,\frac {\pi }{2}]$ such that

$$\begin{align*}\cos(\theta_2) = \max \{\lvert \langle x, y \rangle \rvert : x \in\mathcal V\cap \{x_1\}^\perp, y \in\mathcal W \cap \{ y_1\}^\perp, \lVert x \rVert = \lVert y \rVert = 1 \}. \end{align*}$$

For each $2 \leq l \leq k$, similarly choose unit vectors $x_2, \ldots , x_{l-1}$ and $y_2, \ldots , y_{l-1}$ in $\mathcal V$ and $\mathcal W$, respectively, in each case where the previous maximum is attained. Then $\theta _l$ is taken to be the unique number such that $\cos (\theta _l)$ is equal to the maximum of $\lvert \langle x, y \rangle \rvert $ over unit vectors $x \in \mathcal V \cap \{ x_1,\ldots , x_{l-1} \}^\perp $ and $y \in \mathcal W \cap \{ y_1,\ldots , y_{l-1} \}^\perp $. More recently (though still over fifty years ago), Björck and Golub [Reference Björck and Golub6] found a computationally friendly way to find the canonical angles, showing that they can be characterized in terms of the singular values of the product of two matrices that encode the respective subspaces.
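To illustrate the Björck–Golub approach, here is a minimal numerical sketch (our addition, not part of the original article; the helper name `canonical_angles` and the particular pair of subspaces are our own choices). It computes the canonical angles as arccosines of the singular values of $Q_{\mathcal V}^* Q_{\mathcal W}$, where the columns of $Q_{\mathcal V}$ and $Q_{\mathcal W}$ are orthonormal bases for the two subspaces.

```python
import numpy as np

def canonical_angles(V, W):
    """Canonical (principal) angles between the column spaces of V and W,
    via the singular values of Q_V* Q_W (Bjorck-Golub)."""
    QV, _ = np.linalg.qr(V)   # orthonormal basis for span(V)
    QW, _ = np.linalg.qr(W)   # orthonormal basis for span(W)
    sigma = np.linalg.svd(QV.conj().T @ QW, compute_uv=False)
    return np.arccos(np.clip(sigma, 0.0, 1.0))   # clip guards against round-off

# Two 2-dimensional subspaces of R^4: span{e1, e2} and the graph of the identity map.
V = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
W = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)
print(canonical_angles(V, W))   # both angles equal pi/4, so the subspaces are isoclinic
```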

As the name suggests, subspaces of the same dimension are isoclinic when all their canonical angles are the same.

Definition 2.1 Let $\mathcal V$ and $\mathcal W$ be two k-dimensional subspaces of a Hilbert space $\mathcal {H}$ , where $1\le k < \dim (\mathcal {H})$ . Then $\mathcal V$ and $\mathcal W$ are said to be isoclinic if all k canonical angles between $\mathcal V$ and $\mathcal W$ are equal. If that angle is $\theta $ , then the subspaces are said to be isoclinic at angle $\theta $ . A family of k-dimensional subspaces of a Hilbert space is said to be isoclinic if all pairs of distinct subspaces from the family are isoclinic.

Any family of mutually orthogonal subspaces of a common dimension is isoclinic at angle $\frac {\pi }{2}$ , and this is often a distinguished special case within certain subclasses (we shall see such instances in the examples below). However, there are many more possibilities, as the following characterization indicates.

Theorem 2.2 Let $\mathcal V$ and $\mathcal W$ be two k-dimensional subspaces of a Hilbert space $\mathcal {H}$ , with $k \geq 1$ . Let $P_{\mathcal {V}}$ and $P_{\mathcal {W}}$ denote the orthogonal projections of $\mathcal H$ onto the subspaces $\mathcal {V}$ and $\mathcal {W}$ , respectively. Then the following statements are equivalent:

  1. (i) $\mathcal V$ and $\mathcal W$ are isoclinic subspaces.

  2. (ii) There exists $\lambda \geq 0$ such that

    (2.1) $$ \begin{align} P_{\mathcal V}P_{\mathcal W}P_{\mathcal V} = \lambda P_{\mathcal V} \quad \quad \mathrm{and} \quad \quad P_{\mathcal W}P_{\mathcal V}P_{\mathcal W} = \lambda P_{\mathcal W}. \end{align} $$

    Here, $\lambda = \cos ^2(\theta )$ where $\mathcal V$ , $\mathcal W$ are isoclinic at angle $\theta $ .

This equivalence was noted without proof in [Reference Hoggar19] and linked with other conditions around the same time [Reference Hoggar19, Reference Wong41]. This seems to be somewhat typical with the notion of isoclinic subspaces, in that it has arisen in so many places that its descriptions have become almost “folklore” type results. (See [Reference Kribs, Mammarella and Pereira28] for a recent combined proof of various equivalent conditions.)
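As a quick numerical sanity check of the equivalence (again a sketch we add, not from the article), one can verify Eq. (2.1) directly for the same pair of two-dimensional subspaces used in the sketch above; the scalar recovered is $\lambda = \cos^2(\pi/4) = 1/2$.

```python
import numpy as np

def proj(V):
    """Orthogonal projection onto the column space of V."""
    Q, _ = np.linalg.qr(V)
    return Q @ Q.conj().T

V = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)   # span{e1, e2}
W = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=float)   # graph of the identity
PV, PW = proj(V), proj(W)

# Eq. (2.1): both sandwiched products are the same scalar multiple of the projections.
print(np.allclose(PV @ PW @ PV, 0.5 * PV))   # True
print(np.allclose(PW @ PV @ PW, 0.5 * PW))   # True
```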

3 Knill–Laflamme type conditions for isoclinic subspaces

In this section, we will derive conditions for families of isoclinic subspaces that generalize the Knill–Laflamme conditions of quantum error correction. We break up the result into a pair of results that both apply more broadly.

The Hilbert spaces $\mathcal H$ considered in the following results can be either finite or infinite dimensional, though the specific subclasses we discuss in the next section are all finite dimensional. Given a subspace $\mathcal C$ , we will denote the projection of $\mathcal H$ onto the subspace by $P_{\mathcal C}$ .

Let us recall some basic properties of partial isometries, which are essential operators in this discussion. An operator V on $\mathcal H$ is a partial isometry if and only if $P = V^*V$ , or equivalently $Q = V V^*$ , is an (orthogonal) projection. In this case, P, respectively Q, is called the initial, respectively final, projection for V, and the following identities are satisfied:

$$\begin{align*}V= VP = VV^* V = QV = QVP. \end{align*}$$

Note that V is a partial isometry if and only if $V^*$ is a partial isometry, with the roles of the initial and final projections reversed. Of course, V is unitary when both of these projections are the identity operator. More general partial isometries are fundamental operators in matrix and operator theory; in particular, we will make use of the polar decomposition of a generic operator A on $\mathcal H$ , which gives a partial isometry V such that $A = V |A|$ where $|A| = \sqrt {A^* A}$ . This partial isometry is not unique; for instance, it can be taken to be unitary by extending its action to the whole Hilbert space, defining it unitarily from the orthogonal complement of the initial projection space onto a space orthogonal to the final projection space, which does not change the polar decomposition equation. We will make use of some other properties of the polar decomposition operators in the proofs below.
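The following small sketch (ours, added for illustration; the helper name `polar_partial_isometry` is our own) extracts the partial isometry of a polar decomposition from an SVD, which is one standard way to compute it in finite dimensions, and checks the identities above.

```python
import numpy as np

def polar_partial_isometry(A, tol=1e-12):
    """Return (V, P, Q): the partial isometry V with A = V|A|,
    initial projection P = V*V and final projection Q = VV*."""
    U, s, Wh = np.linalg.svd(A)
    r = int(np.sum(s > tol))                  # rank of A
    V = U[:, :r] @ Wh[:r, :]                  # partial isometry part of the polar decomposition
    P = Wh[:r, :].conj().T @ Wh[:r, :]        # initial projection (support of A)
    Q = U[:, :r] @ U[:, :r].conj().T          # final projection (range of A)
    return V, P, Q

A = np.array([[0.0, 2.0], [0.0, 0.0]])        # a rank-one example
V, P, Q = polar_partial_isometry(A)
absA = V.conj().T @ A                         # |A| = V* A on the support of A
print(np.allclose(A, V @ absA))               # A = V |A|
print(np.allclose(V, V @ P), np.allclose(V, Q @ V), np.allclose(V, V @ V.conj().T @ V))
```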

We first identify generalized Knill–Laflamme conditions that imply the isoclinic subspace projection type equations (Eq. (2.1)) when they are satisfied.

Theorem 3.1 Suppose that $\{A_i\}_i$ are operators on a Hilbert space $\mathcal H$ and $\mathcal C\subseteq \mathcal H$ is a subspace such that for all $i,j$ ,

(3.1) $$ \begin{align} P_{\mathcal C} A_i^* A_j P_{\mathcal C} = \lambda_{ij} U_{ij} P_{\mathcal C} = \lambda_{ij} P_{\mathcal C} U_{ij} , \end{align} $$

where $\lambda _{ij}\in \mathbb {C}$ and $U_{ij}$ are unitary operators that commute with $P_{\mathcal C}$ . Then if $\{ P_i \}_i$ are the range projections of the operators $A_i P_{\mathcal C}$ , we have

(3.2) $$ \begin{align} P_i P_j P_i = | \gamma_{ij} |^2 P_i \quad \quad \forall\, i,j, \end{align} $$

where

$$ \begin{align*} \gamma_{ij} = \left\{ \begin{array}{cl} \frac{\lambda_{ij}}{\sqrt{\lambda_{ii}\lambda_{jj}}} & \mathrm{if}\,\, \lambda_{ii} \neq 0 \,\, \mathrm{and} \,\, \lambda_{jj} \neq 0 \\ 0 & \mathrm{otherwise.} \end{array} \right. \end{align*} $$

Further, the projections $P_i$ and $P_j$ have the same rank as $P_{\mathcal C}$ whenever $\gamma _{ij}\neq 0$ .

Proof First note that if $\lambda _{ii}=0$ for some i, then it follows from Eq. (3.1) that $A_i P_{\mathcal C} = 0 = P_{\mathcal C} A_i^*$ , and hence that $\lambda _{ij}=0$ for all j. Hence for such i and all j, Eq. (3.2) is trivially satisfied with $\gamma _{ij} = 0$ .

So for the rest of the proof we assume every $\lambda _{ii}\neq 0$ . By Eq. (3.1), we have that $\lambda _{ii} U_{ii} P_{\mathcal C}$ is a positive operator, and, by dividing both sides of the equation by the modulus $|\lambda _{ii}|$ , we see that its restriction to $\mathcal C$ is a positive unitary and hence the identity on $\mathcal C$ ; thus this operator must in fact be a positive scalar multiple of $P_{\mathcal C}$ . Hence, without loss of generality, we can assume $\lambda _{ii}> 0$ and $U_{ii}=I$ is the identity operator. We can then take the polar decomposition

$$\begin{align*}A_i P_{\mathcal C} = V_i \sqrt{P_{\mathcal C} A_i^* A_i P_{\mathcal C}} = \sqrt{\lambda_{ii}} V_i P_{\mathcal C} , \end{align*}$$

where $V_i$ is a partial isometry with initial projection $P_{\mathcal C}$ and final projection $P_i$ . In particular, these operators satisfy the following identities:

$$\begin{align*}V_i^* V_i = P_{\mathcal C}, \quad\quad V_i V_i^* = P_i, \quad\quad V_i P_{\mathcal C} = V_i, \quad\quad V_i P_{\mathcal C} V_i^* = P_i. \end{align*}$$

Thus we have for all such $i,j$ ,

$$\begin{align*}\lambda_{ij} U_{ij} P_{\mathcal C} = P_{\mathcal C} A_i^* A_j P_{\mathcal C} = (\sqrt{\lambda_{ii}} P_{\mathcal C} V_i^*) (\sqrt{\lambda_{jj}} V_j P_{\mathcal C}) = \sqrt{\lambda_{ii}\lambda_{jj}} P_{\mathcal C} V_i^* V_j P_{\mathcal C} , \end{align*}$$

from which it follows that

$$\begin{align*}P_{\mathcal C} V_i^* V_j P_{\mathcal C} = \big( \lambda_{ij}\big(\sqrt{\lambda_{ii}\lambda_{jj}}\big)^{-1} \big) U_{ij} P_{\mathcal C}. \end{align*}$$

If we let this last operator be equal to $B_{ij}$ and put $\gamma _{ij} = \lambda _{ij}(\sqrt {\lambda _{ii}\lambda _{jj}})^{-1}$ , then using the fact that $U_{ij}$ is unitary and commutes with $P_{\mathcal C}$ we have

$$\begin{align*}B_{ij} B_{ij}^* = |\gamma_{ij}|^2 (U_{ij} P_{\mathcal C}) (U_{ij} P_{\mathcal C})^* = |\gamma_{ij}|^2 P_{\mathcal C}. \end{align*}$$

On the other hand, we also have

$$ \begin{align*} B_{ij} B_{ij}^* = |\gamma_{ij}|^2 P_{\mathcal C} &= (P_{\mathcal C} V_i^* V_j P_{\mathcal C}) (P_{\mathcal C} V_j^* V_i P_{\mathcal C}) \\ &= P_{\mathcal C} V_i^* V_j V_j^* V_i P_{\mathcal C} \\ &= P_{\mathcal C} V_i^* P_j V_i P_{\mathcal C}, \end{align*} $$

where here we have used the algebraic relations between the operators $V_i$ , $P_i$ , and $P_{\mathcal C}$ . Now multiply this equation on the left by $V_i$ and on the right by $V_i^*$ to obtain the isoclinic equation:

$$\begin{align*}|\gamma_{ij}|^2 P_i = |\gamma_{ij}|^2 V_i P_{\mathcal C} V_i^* = (V_i P_{\mathcal C} V_i^* ) P_j (V_i P_{\mathcal C} V_i^*) = P_i P_j P_i. \end{align*}$$

Finally, note from the argument above that a scalar $\lambda _{ij}$ in Eq. (3.1) is non-zero if and only if both $\lambda _{ii}$ and $\lambda _{jj}$ are non-zero, and that in such cases the projections $P_i$ and $P_j$ have the same rank as $P_{\mathcal C}$ with the partial isometries $V_i$ and $V_j$ intertwining the corresponding subspaces with $\mathcal C$ .

Note that satisfaction of Eq. (3.2) alone does not allow us to conclude in Theorem 3.1 that the family of subspaces is isoclinic. While one can show that projections satisfying these equations with non-zero scalars $\gamma _{ij}$ must have the same rank, one can also simply have orthogonal subspaces (of any dimensions) that trivially satisfy the relations with $\gamma _{ij} =0$ . (It could also be added that the orthogonal subspace case is the most mathematically uninteresting in the context of the general isoclinic subspace relations, but it must be allowed for.)

The next result moves us in the converse direction of the previous result. We recall that a partial isometry is nothing more than a unitary operator restricted to a subspace, and hence the following result could equally be phrased with hypotheses given by a family of unitary operators and a subspace, with the projections given by the ranges of the restricted unitaries.

Lemma 3.2 Suppose that $\{V_i\}_i$ are partial isometries on a Hilbert space $\mathcal H$ with common initial space $\mathcal C\subseteq \mathcal H$ , and that their final projections $P_i = V_i V_i^*$ satisfy the following equations for some scalars $\lambda _{ij} \geq 0$ :

(3.3) $$ \begin{align} P_i P_j P_i = \lambda_{ij} P_i \quad \quad \forall\, i,j. \end{align} $$

Then there are unitary operators $U_{ij}$ that commute with $P_{\mathcal C}$ such that for all $i,j$ ,

(3.4) $$ \begin{align} P_{\mathcal C} V_i^* V_j P_{\mathcal C} = \gamma_{ij} U_{ij} P_{\mathcal C} = \gamma_{ij} P_{\mathcal C} U_{ij}, \end{align} $$

where $\gamma _{ij} = \sqrt {\lambda _{ij} }$ .

Proof First note that $\lambda _{ij} = \lambda _{ji}$ for all $i,j$ , since the projections $P_i$ , $P_j$ have the same rank (as images of partial isometries with the same initial projection) and Eq. (3.3) gives us via the trace,

$$\begin{align*}\lambda_{ij} \mathrm{Tr}(P_i) = \mathrm{Tr}(P_i P_j P_i) = \mathrm{Tr} (P_i P_j) = \mathrm{Tr}(P_jP_iP_j) = \lambda_{ji} \mathrm{Tr}(P_j). \end{align*}$$

We have $P_{\mathcal C} = V_i^* V_i$ for all i and $P_i = V_i V_i^*$ . Hence expanding Eq. (3.3) yields

$$\begin{align*}(V_i V_i^*)( V_j V_j^*) (V_i V_i^*) = \lambda_{ij} V_i V_i^*. \end{align*}$$

We can then multiply this equation on the left by $V_i^*$ and on the right by $V_i$ , and insert $P_{\mathcal C}^2$ in the middle of the left-hand side (using $V_jP_{\mathcal C}=V_j$ and its adjoint equation), to obtain,

$$\begin{align*}(P_{\mathcal C} V_j^* V_i P_{\mathcal C})^* (P_{\mathcal C} V_j^* V_i P_{\mathcal C} ) = \lambda_{ij} P_{\mathcal C} P_{\mathcal C} = \lambda_{ij} P_{\mathcal C}. \end{align*}$$

In the case that $\lambda _{ij}=0 = \lambda _{ji}$ for some pair $i,j$ , it follows that $P_{\mathcal C} V_j^* V_i P_{\mathcal C} = 0 = 0P_{\mathcal C}$ (and the same is true for the adjoint operator with $i,j$ roles reversed), which gives Eq. (3.4) trivially with $U_{ij} = I$ .

So suppose that $\lambda _{ij} = \lambda _{ji}> 0$ and put $\gamma _{ij} = \sqrt {\lambda _{ij}}$ . The above equation thus gives us $P_{\mathcal C} = B_{ij}^* B_{ij}$ where we define

$$\begin{align*}B_{ij} = \gamma_{ij}^{-1} P_{\mathcal C} V_j^* V_i P_{\mathcal C}. \end{align*}$$

Reversing the roles of i and j and using $\gamma _{ij} = \gamma _{ji}$ , we also have $P_{\mathcal C} = B_{ij} B_{ij}^*$ . It follows that $B_{ij}$ is a partial isometry with $P_{\mathcal C}$ as both its initial and final projection, from which we have $B_{ij} = B_{ij} P_{\mathcal C} = P_{\mathcal C} B_{ij}$ and $B_{ij}^* = P_{\mathcal C} B_{ij}^* = B_{ij}^* P_{\mathcal C}$ .

In particular, this implies that the von Neumann algebra $\mathrm {W}^*(B_{ij})$ generated by $B_{ij}$ (which, in the finite-dimensional case, is the set of polynomials in $B_{ij}$ and $B_{ij}^*$ ) is contained in the commutant $\{ P_{\mathcal C} \}^\prime $ of the projection $P_{\mathcal C}$ . When we examine the polar decomposition

$$\begin{align*}B_{ij} = U_{ij} \sqrt{B_{ij}^* B_{ij}} = U_{ij} P_{\mathcal C}, \end{align*}$$

we obtain a unitary $U_{ij} \in \mathrm {W}^*(B_{ij})$ (which is always the case for a unitary in the polar decomposition of an operator [Reference Davidson11]) and hence $U_{ij}\in \{ P_{\mathcal C} \}^\prime $ .

We have thus shown that for all $i,j$ with $\lambda _{ij}\neq 0$ , there is a unitary $U_{ij}$ that commutes with $P_{\mathcal C}$ such that $P_{\mathcal C} V_j^* V_i P_{\mathcal C} = \sqrt {\lambda _{ij}} U_{ij} P_{\mathcal C}$ , and this completes the proof.

Combining the previous two results in the cases of most relevance to families of isoclinic subspaces gives us the following result.

Theorem 3.3 Suppose that $\{A_i\}_i$ are operators on a Hilbert space $\mathcal H$ that are scalar multiples of partial isometries with common initial subspace $\mathcal C\subseteq \mathcal H$ . Then the range spaces of $A_i P_{\mathcal C}$ form an isoclinic family of subspaces if and only if there are scalars $\lambda _{ij}\in \mathbb {C}$ and unitary operators $U_{ij}$ that commute with $P_{\mathcal C}$ such that

$$ \begin{align*} P_{\mathcal C} A_i^* A_j P_{\mathcal C} = \lambda_{ij} U_{ij} P_{\mathcal C} = \lambda_{ij} P_{\mathcal C} U_{ij} \quad \quad \forall\, i,j. \end{align*} $$

Proof By Theorem 3.1, if Eq. (3.1) are satisfied, then so are the projection identities given in Eq. (3.2). Whenever the scalar in one of the projection equations is non-zero, the trace argument used in the proof above shows that the corresponding projections have the same rank. The fact that the operators $A_i$ are multiples of partial isometries with the same initial space comes into this direction of the argument just to ensure that, even when the scalars are zero, the range projections have the same rank. Since the range projections have the same rank and satisfy Eq. (3.2), any pair of these range projections satisfy both Eq. (2.1) and the hypotheses of Theorem 2.2. Therefore, it follows from Theorem 2.2 that the range spaces form an isoclinic family.

The other direction of the proof, when the range spaces of the $A_i P_{\mathcal C}$ are assumed to be isoclinic, is captured by the special case of Lemma 3.2 (with Theorem 2.2 applied to obtain the projection equations) in which all the projections have the same rank (i.e., even when the scalars in the equation are zero). The scalars that define the $A_i$ as multiples of partial isometries can be absorbed into the scalars $\gamma _{ij}$ of Eq. (3.4).

Remark 3.4 While this result includes hypotheses on the operators considered, in particular a common support space, we note that it can be applied to every family of isoclinic subspaces in the following way. Given a family $\{ \mathcal V_i \}_{i\geq 1}$ of k-dimensional isoclinic subspaces with associated rank-k projections $P_i$ , we can pick any k-dimensional subspace $\mathcal C = \mathcal V_0$ of $\mathcal H$ with projection $P_{\mathcal C}$ , and define intertwining partial isometries $V_i$ with final projections $P_i = V_i V_i^*$ and all with the initial projection $P_{\mathcal C} = V_i^* V_i$ . One can then apply Theorem 3.3 with $A_i = V_i$ , to obtain Eq. (3.1) and unitary operators $U_{ij}$ . As it turns out, in many cases of interest, including for the subclasses considered in the next section, there is a natural “base” or “anchor” subspace for the isoclinic family as in this discussion.
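To illustrate the recipe in this remark, here is a small numerical sketch (our addition, not part of the article): it takes the isoclinic family revisited in Section 4.2 (the graph subspaces of the anti-commuting Pauli unitaries X and Z inside $\mathbb{C}^2 \oplus \mathbb{C}^2$, together with $\mathcal C_\infty$), chooses one of the graphs as the anchor subspace, builds intertwining partial isometries from polar decompositions, and recovers the form of Eq. (3.4); all helper names are ours.

```python
import numpy as np

def partial_isometry(M, tol=1e-12):
    """Partial isometry in the polar decomposition M = V|M| (computed via SVD)."""
    U, s, Wh = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    return U[:, :r] @ Wh[:r, :]

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def graph(A):
    """Projection of H (+) H onto the graph subspace {(x, Ax)}."""
    return 0.5 * np.block([[I2, A.conj().T], [A, I2]])

PC = graph(X)                                                    # chosen anchor subspace
others = [graph(Z), np.block([[0 * I2, 0 * I2], [0 * I2, I2]])]  # graph of Z and C_infinity

# Intertwining partial isometries V_i with V_i V_i* = P_i and V_i* V_i = P_C.
Vs = [partial_isometry(P @ PC) for P in others]
for V, P in zip(Vs, others):
    assert np.allclose(V @ V.conj().T, P) and np.allclose(V.conj().T @ V, PC)

# Eq. (3.4): B = P_C V_1* V_2 P_C is a scalar multiple of a partial isometry
# with P_C as both its initial and final projection.
B = PC @ Vs[0].conj().T @ Vs[1] @ PC
gamma_sq = np.trace(B @ B.conj().T).real / np.trace(PC).real
assert np.allclose(B @ B.conj().T, gamma_sq * PC) and np.allclose(B.conj().T @ B, gamma_sq * PC)
print("recovered the commuting-unitary form at the chosen anchor, |gamma|^2 =", round(gamma_sq, 3))
```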

4 Applications and examples

4.1 Quantum error correcting codes

The subclass of isoclinic subspace families defined through quantum error correction comes from the actions of sets of error operators $\{ E_i \}$ on a code subspace $\mathcal C \subseteq \mathcal H$ , where the errors can be corrected after their action. Formally, the operators are generally assumed to define a completely positive and trace-preserving (i.e., $\sum _i E_i^* E_i = I$ ) map given by $\mathcal E(\rho ) = \sum _i E_i \rho E_i^*$ . The code $\mathcal C$ is then correctable for $\mathcal E$ if there is a completely positive trace-preserving map $\mathcal R$ on $\mathcal H$ such that $(\mathcal R \circ \mathcal E)(\rho ) = \rho $ for all operators $\rho $ on $\mathcal H$ that are supported on $\mathcal C$ .

The Knill–Laflamme theorem [Reference Knill and Laflamme23] gives testable conditions for a set of error operators to be correctable for a given code, and it is the special case of the isoclinic conditions above with the unitary operators all equal to the identity operator. In other words, $\mathcal C$ is correctable for $\mathcal E$ if and only if there exist scalars $\lambda _{ij}\in \mathbb {C}$ such that for all $i,j$ ,

(4.1) $$ \begin{align} P_{\mathcal C} E_i^* E_j P_{\mathcal C} = \lambda_{ij} P_{\mathcal C}. \end{align} $$

In this case, the scalars $\Lambda = ( \lambda _{ij} )$ form a positive and trace one (i.e., density) matrix. The recovery operation $\mathcal R$ is then constructed by using the partial isometries obtained in the polar decompositions of the $E_i P_{\mathcal C}$ and factoring through the (scalar) unitary matrix that diagonalizes $\Lambda $ through these equations. See [Reference Kribs26, Reference Nielsen and Chuang34] for a more comprehensive introduction to quantum error correction.
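As a concrete illustration (a sketch we add, not part of the article), the following verifies the conditions of Eq. (4.1) for the three-qubit bit-flip code $\mathcal C = \mathrm{span}\{|000\rangle, |111\rangle\}$ and the error set $\{I, X_1, X_2, X_3\}$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
e0, e1 = np.eye(2)

def kron(*ops):
    out = np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

# Code projection for the 3-qubit bit-flip code C = span{|000>, |111>}.
v000, v111 = kron(e0, e0, e0), kron(e1, e1, e1)
P = np.outer(v000, v000) + np.outer(v111, v111)

# Error set {I, X_1, X_2, X_3}: bit flips on at most one qubit.
errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Knill-Laflamme conditions: each P E_i* E_j P is a scalar multiple of P.
for Ei in errors:
    for Ej in errors:
        M = P @ Ei.conj().T @ Ej @ P
        lam = np.trace(M) / np.trace(P)
        assert np.allclose(M, lam * P)
print("Knill-Laflamme conditions hold for the 3-qubit bit-flip code")
```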

It was shown in [Reference Kribs, Mammarella and Pereira28] that for any code subspace $\mathcal C$ and set of (non-degenerate) error operators $\{ E_i \}$ , the range subspaces of the restrictions of the $E_i$ to $\mathcal C$ form a family of isoclinic subspaces. As noted above, we can now see this as a distinguished special case of the general isoclinic conditions. There is a partial converse to this result: any pair of isoclinic subspaces can naturally be viewed as arising from a quantum error correcting code with two error operators (essentially the small number of subspaces limits the number of relevant equations and allows for this equivalence). More generally, however, the isoclinic subspace conditions of Eq. (3.1) define a broader class. It is natural to ask though, what, if anything, do these more general equations describe in the context of sets of operators and (code) subspaces that are of relevance in quantum information? To this end, let us consider a motivating class of codes from quantum error correction.

Let $X,Y,Z$ be the usual (single qubit) Pauli operators on $\mathbb {C}^2$ with orthonormal basis $\{| 0 \rangle , | 1 \rangle \}$ [Reference Nielsen and Chuang34]. Given a positive integer $n\geq 1$ , we can consider the n-tensors of these operators given by products of the operators $X_1 = X \otimes I \otimes I \otimes \ldots $ , $Z_2 = I \otimes Z \otimes I \otimes \ldots $ , etc, that act on n-qubit Hilbert space $(\mathbb {C}^2)^{\otimes n}$ , which has orthonormal basis vectors $| i_1\cdots i_n \rangle = | i_1 \rangle \otimes \ldots \otimes | i_n \rangle $ given by tensor products of the basis vectors from its single qubit subsystems. The n-qubit Pauli group $\mathcal P_n$ is the subgroup of unitary operators generated by the $X_i, Z_i$ , and $iI$ .

The stabilizer formalism for quantum error correction invented by Gottesman [Reference Gottesman16] shows how to build correctable codes for sets of Pauli error operators, and applies the Knill–Laflamme theorem to give a complete group-theoretic characterization of which error operators are correctable for a given “stabilizer code.” The starting point for the formalism is an Abelian subgroup $\mathcal S$ of $\mathcal P_n$ that does not contain $-I$ ; for this discussion, let us take $\mathcal S = \langle Z_1, \ldots , Z_{n-k} \rangle $ for some fixed $k \in \{ 1,2, \ldots , n\}$ . The stabilizer subspace for $\mathcal S$ , which is designated as the code space, is $\mathcal C = \mathcal C(\mathcal S) = \mathrm {span} \{ | \psi \rangle \, : \, S | \psi \rangle = | \psi \rangle \,\, \forall \, S \in \mathcal S\}$ . In the example, we get the k-qubit subspace $\mathcal C = \mathrm {span} \{ | 0 \rangle ^{\otimes (n-k)} | i_1 \cdots i_k \rangle \, : \, i_j\in \{0,1\} \}$ .

One can check that the normalizer subgroup $\mathcal N(\mathcal S)$ of $\mathcal S$ inside $\mathcal P_n$ coincides with its centralizer $\mathcal Z(\mathcal S)$ , as any two elements of $\mathcal P_n$ either commute or anti-commute and $-I \notin \mathcal S$ . A main result in the stabilizer formalism asserts that a stabilizer code $\mathcal C(\mathcal S)$ is correctable for a set of Pauli error operators $\{E_i\}$ exactly when all the operator products $E_i^* E_j$ satisfy the constraint: $E_i^* E_j \notin \mathcal N(\mathcal S) \setminus \langle \mathcal S, iI \rangle $ . In the example, one can see directly that $\mathcal N(\mathcal S) = \mathcal Z(\mathcal S)$ is the group generated by $\langle \mathcal S, iI \rangle $ and the operators $\mathcal L = \langle X_i, Z_i \, : \, n-k+1 \leq i \leq n \rangle $ .

The operators in $\mathcal L$ are called logical operators, as they can be used to implement logical operations on the code space. (We note that while this is a somewhat canonical choice of logical operators and convenient for our discussion, it is not unique.) They form a multiplicatively closed set (as a group), that clearly belongs to $\mathcal N(\mathcal S) \setminus \langle \mathcal S, iI \rangle $ , and hence they are not correctable by the result noted above (in particular they do not satisfy the Knill–Laflamme conditions). However, observe they do satisfy the more general isoclinic equations. Indeed, given $L\in \mathcal L$ , we have $L\in \mathcal Z(\mathcal S)$ , and so L commutes with all the spectral projections of every element of $\mathcal S$ , which in turn implies that L commutes with $P_{\mathcal C}$ (this can be seen from the explicit form for $P_{\mathcal C}$ noted below). Hence $LP_{\mathcal C} = P_{\mathcal C} L$ and $\mathcal C$ is a reducing subspace for all $L\in \mathcal L$ ; i.e., $\mathcal C$ is an invariant subspace for both L and $L^*$ . Given any pair $L_1,L_2\in \mathcal L$ , we have $L= L_1L_2\in \mathcal L$ and thus $P_{\mathcal C} L_1 L_2 P_{\mathcal C} = L P_{\mathcal C} = P_{\mathcal C} L$ . So by Theorem 3.1, it follows that the set of operators $\mathcal L$ (or any subset of them) together with $\mathcal C$ define an isoclinic family of subspaces given by $\{ L\mathcal C : L\in \mathcal L\}$ , the ranges of the operators L restricted to $\mathcal C$ .

Now, recall from the definition of $\mathcal C = \mathcal C(\mathcal S)$ that every $S\in \mathcal S$ satisfies $SP_{\mathcal C} = P_{\mathcal C} = P_{\mathcal C} S$ . More generally, one can show that an operator $L\in \mathcal P_n$ commutes with $P_{\mathcal C}$ if and only if $L\in \mathcal N(\mathcal S)$ . Indeed, the eigenvalue 1 eigenspace projection for $S\in \mathcal S$ is $P_S = \frac 12 (I+S)$ , and $P_{\mathcal C}$ is the product of all the $P_S$ . If $L\in \mathcal P_n$ does not belong to $\mathcal N(\mathcal S)= \mathcal Z(\mathcal S)$ , then it anti-commutes with some $S\in \mathcal S$ , and we have $LP_S = \frac 12 (I-S)L$ ; the right-hand side of this equation is the eigenvalue $-1$ eigenspace projection for S multiplied on the right by L. It follows that such an L cannot commute with $P_{\mathcal C}$ . Thus, it also follows that the set of Pauli group operators that satisfy the general isoclinic equations with a non-trivial unitary for a given stabilizer code $\mathcal C(\mathcal S)$ are precisely the operators that belong to $\mathcal N(\mathcal S) \setminus \langle \mathcal S, iI \rangle $ .

Combining this observation with the results of the last section and some basic properties of correctable errors for stabilizer codes allows us to conclude the following.

Proposition 4.1 Let $\mathcal C = \mathcal C(\mathcal S)$ be an n-qubit stabilizer code and let $\{E_i\}_i \subseteq \mathcal P_n$ be any subset of Pauli operators. Then the range subspaces of the operators $E_i P_{\mathcal C}$ are isoclinic. Moreover, the products amongst the set $\{ E_i^* E_j \}_{i,j}$ that belong to $\mathcal N(\mathcal S) \setminus \langle \mathcal S, iI \rangle $ are precisely those that satisfy Eq. (3.1) with non-trivial unitary operators.

Proof In fact we can say precisely how Eq. (3.1) are satisfied here. Indeed, the argument above shows that any operator product $E := E_i^* E_j$ that belongs to the (logical) operator set $\mathcal N(\mathcal S) \setminus \langle \mathcal S, iI \rangle $ must satisfy these equations with E acting as the non-trivial unitary. (The scalar multiple $\lambda _{ij}$ will have modulus one, as we have assumed the operators belong to $\mathcal P_n$ ; we note that in general the error operators will be scalar multiples of such operators, so that the error model is trace-preserving.) The remaining cases correspond to correctable errors. Indeed, if E belongs to $\langle \mathcal S, iI \rangle $ , then $EP_{\mathcal C}$ is clearly a scalar multiple of $P_{\mathcal C}$ and the condition is trivially satisfied. Further, if $E \notin \mathcal N(\mathcal S)$ , then a standard stabilizer formalism argument (using the explicit formula for the projection $P_{\mathcal C}$ noted above) shows that $E_iP_{\mathcal C}$ and $E_jP_{\mathcal C}$ have orthogonal ranges, and hence the isoclinic conditions are satisfied with $\lambda _{ij}=0$ .
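The following toy sketch (ours, added for illustration; it is a hypothetical minimal instance rather than an example from the article) checks both behaviors for the two-qubit stabilizer code with $\mathcal S = \langle Z_1 \rangle$, whose logical operators include $X_2$ and $Z_2$: logical operator products satisfy Eq. (3.1) with themselves playing the role of the commuting unitaries, while an error outside the normalizer maps the code to an orthogonal subspace.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Stabilizer S = <Z_1> on two qubits; code projection P_C = (I + Z_1)/2.
Z1 = np.kron(Z, I2)
PC = (np.eye(4) + Z1) / 2                  # projects onto span{|00>, |01>}

# Logical operators X_2 and Z_2 commute with P_C.
X2, Z2 = np.kron(I2, X), np.kron(I2, Z)
for L in (X2, Z2, X2 @ Z2):
    assert np.allclose(L @ PC, PC @ L)

# Generalized Knill-Laflamme form (Eq. (3.1)) for E_i = X_2, E_j = Z_2,
# with the (non-trivial) commuting unitary U = X_2* Z_2.
U = X2.conj().T @ Z2
assert np.allclose(PC @ X2.conj().T @ Z2 @ PC, U @ PC)
assert np.allclose(U @ PC, PC @ U)

# An operator outside the normalizer, e.g. X_1, anti-commutes with Z_1,
# so its restriction to the code has range orthogonal to the code.
X1 = np.kron(X, I2)
assert np.allclose(PC @ X1 @ PC, np.zeros((4, 4)))
print("logical operators satisfy the generalized conditions; X_1 gives orthogonal ranges")
```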

Remark 4.2 This suggests an interesting possibility and line of investigation. We can ask if these isoclinic subspace results for stabilizer codes, and in particular the identification of logical operators with the generalized Knill–Laflamme conditions that have non-trivial commuting unitary operators, extends to arbitrary quantum error correcting codes.

4.2 Isoclinic n-planes in Euclidean $2n$ -space

Next, we consider a class of examples that form a subclass of isoclinic subspace families with roots dating back over a century. First some background.

The study of rotations in higher dimensions gained momentum with the discovery of quaternions by Hamilton [Reference Hamilton17], which provided the first systematic approach to understanding rotations in three and four dimensions [Reference Hardy18, Reference Zhang44]. Subsequently, Clifford extended this framework to spaces of higher dimensions, introducing what are now called Clifford algebras [Reference Garling14]. Clifford algebras underpin the algebraic structure of rotations, leading naturally to explorations of planes that exhibit unique rotational properties, including isoclinic rotations as a distinguished case. The concept of isoclinic n-planes thus applies the idea of isoclinic subspaces to higher dimensional subspaces within Euclidean vector spaces.

Isoclinic n-planes are subspaces of $\mathbb {R}^{n}$ (or, more generally, $\mathbb {C}^{n}$ ) where rotations affect all vectors in a consistent way: pairs of orthogonal vectors within these planes rotate by the same angle while preserving their orthogonality. They are particularly interesting in even-dimensional Euclidean spaces, and have special properties that make them useful in higher-dimensional geometry and theoretical physics [Reference Wong39–Reference Wong and Mok42]. As such, the even-dimensional case $\mathbb {R}^{2n}$ has been heavily investigated. An n-plane in $\mathbb {R}^{2n}$ is an n-dimensional vector subspace of $\mathbb {R}^{2n}$ accompanied by its induced inner product. Two n-planes $\mathcal V, \mathcal W$ are said to be isoclinic with each other if the angle between a non-zero vector in $\mathcal V$ and its projection onto $\mathcal W$ is the same for every non-zero vector in $\mathcal V$ . If this angle is given by $\theta $ , then $\mathcal V$ and $\mathcal W$ are isoclinic at angle $\theta $ in the sense defined above.

We can extend the concepts within this class of isoclinic subspaces to utilize the Knill–Laflamme conditions and give a new perspective on the class. We note that [Reference Kribs, Mammarella and Pereira28] contains a similar class of examples for $n=2$ and $2\times 2$ matrices as part of its exposition (though without the Knill–Laflamme viewpoint).

Example 4.3 Given a (bounded) operator A on a Hilbert space $\mathcal H$ , we can consider its graph,

$$\begin{align*}\mathcal C_A = \{ (x, Ax ) \, : \, x\in\mathcal H \}, \end{align*}$$

which is a subspace of $\mathcal H \oplus \mathcal H$ that is closed (due to the closed graph theorem in the infinite-dimensional case). We will also consider the subspace of $\mathcal H \oplus \mathcal H$ given by,

$$\begin{align*}\mathcal C_\infty = \{ (0, x ) \, : \, x\in\mathcal H \}. \end{align*}$$

Now suppose that A is a unitary operator on $\mathcal H$ . Observe that the projection, which we denote by $P_A$ , of $\mathcal H \oplus \mathcal H$ onto $\mathcal C_A$ is given in block matrix form by,

$$\begin{align*}P_{A} = \frac12 \begin{pmatrix} I & A^* \\ A & I \end{pmatrix}. \end{align*}$$

Indeed, an easy way to see this is to note that $P_A$ is a self-adjoint idempotent (as A is unitary) and that $P_{A} \, \begin {pmatrix} I & A \end {pmatrix}^t = \begin {pmatrix} I & A \end {pmatrix}^t$ , so that $P_A$ fixes the vectors of $\mathcal C_A$ . Also observe that the projection $P_\infty $ onto $\mathcal C_\infty $ is given in block matrix form by,

$$\begin{align*}P_\infty = \begin{pmatrix} 0 & 0 \\ 0 & I \end{pmatrix}. \end{align*}$$

And note the subspaces $\mathcal C_A$ and $\mathcal C_\infty $ have dimension equal to $\dim \mathcal H$ (for $\mathcal C_A$ using the fact that A is unitary). One can verify by direct matrix calculation that for any unitary A, we have

$$\begin{align*}P_\infty P_A P_\infty = \frac12 P_\infty \quad \mathrm{and} \quad P_A P_\infty P_A = \frac12 P_A. \end{align*}$$

Now suppose B is another unitary operator on $\mathcal H$ , and calculate the following operator product:

$$ \begin{align*} P_A P_B P_A &= \frac18 \begin{pmatrix} I & A^* \\ A & I \end{pmatrix} \begin{pmatrix} I & B^* \\ B & I \end{pmatrix} \begin{pmatrix} I & A^* \\ A & I \end{pmatrix} \\ &= \frac18 \begin{pmatrix} 2I + A^*B + B^*A & 2A^* + B^* + A^*BA^* \\ 2A + B + AB^*A & 2I + BA^* + AB^* \end{pmatrix}. \end{align*} $$

Let us further assume that both $A = A^* = A^{-1}$ and $B = B^* = B^{-1}$ are Hermitian unitary operators and they anti-commute, $AB = -BA$ . Then observe that the matrix on the right-hand side of this equation reduces to a scalar multiple of $P_A$ ; in fact we have

$$\begin{align*}P_A P_B P_A = \frac12 P_A. \end{align*}$$

It follows that, given any family of Hermitian and pairwise anti-commuting unitary operators acting on a common Hilbert space, the graph subspaces $\mathcal C_A$ of the operators inside the direct sum of the Hilbert space with itself, together with the subspace $\mathcal C_\infty $ , form an isoclinic family. We are thus in the scenario of Theorem 3.3 and we can investigate what the commuting unitary operators might be in the generalized Knill–Laflamme conditions. Interestingly, for this class of examples, there is a natural base subspace that stands out from the others, namely $\mathcal C_\infty $ .

Hence, we first find partial isometries $V_\infty $ and $V_A$ on $\mathcal H \oplus \mathcal H$ such that $P_\infty = V_\infty V_\infty ^*$ , $P_A = V_A V_A^*$ , and $P_\infty = V_A^* V_A$ . One can check that $V_\infty = P_\infty $ and

(4.2) $$ \begin{align} V_A = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & I \\ 0 & A \end{pmatrix} \end{align} $$

satisfy these identities. Knowing from Theorem 3.3 that we will find the commuting unitary form in Eq. (3.1), we compute to find the following (with $P_\infty $ in place of $P_{\mathcal C}$ ):

$$\begin{align*}P_\infty V_A^* V_B P_\infty = \frac{1}{\sqrt{2}} V_{A,B}^{(\infty)} P_\infty = \frac{1}{\sqrt{2}} P_\infty V_{A,B}^{(\infty)} , \end{align*}$$

where

$$\begin{align*}V_{A,B}^{(\infty)} : = \begin{pmatrix} 0 & 0 \\ 0 & U_{A,B}^{(\infty)} \end{pmatrix}, \end{align*}$$

and where $U_{A,B}^{(\infty )} = \frac {I + A^*B}{\sqrt {2}} = \frac {I + AB}{\sqrt {2}}$ is a unitary operator on $\mathcal H$ .
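To make the example concrete, here is a small numerical check (our addition, not part of the article) with the anti-commuting Hermitian unitaries $A = X$ and $B = Z$ on $\mathbb{C}^2$: it verifies the isoclinic relations above and the commuting-unitary form of the conditions at the anchor subspace $\mathcal C_\infty$.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
O2 = np.zeros((2, 2))

def graph_projection(A):
    """Projection of H (+) H onto the graph subspace C_A = {(x, Ax)}."""
    return 0.5 * np.block([[I2, A.conj().T], [A, I2]])

PA, PB = graph_projection(X), graph_projection(Z)
Pinf = np.block([[O2, O2], [O2, I2]])

# Isoclinic relations with scalar 1/2 (angle pi/4).
assert np.allclose(PA @ PB @ PA, 0.5 * PA)
assert np.allclose(Pinf @ PA @ Pinf, 0.5 * Pinf)

# Partial isometries of Eq. (4.2) and the commuting unitary U = (I + A*B)/sqrt(2).
def V(A):
    return (1 / np.sqrt(2)) * np.block([[O2, I2], [O2, A]])

U = (I2 + X.conj().T @ Z) / np.sqrt(2)
lhs = Pinf @ V(X).conj().T @ V(Z) @ Pinf
rhs = (1 / np.sqrt(2)) * np.block([[O2, O2], [O2, U]]) @ Pinf
assert np.allclose(lhs, rhs)
assert np.allclose(U @ U.conj().T, I2)        # U is unitary on C^2
print("isoclinic relations and commuting-unitary form verified for A = X, B = Z")
```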

We can use the Pauli group introduced in the last subsection to produce maximal examples of such isoclinic families. To see this, we recall the following result of Kestelman, which gives a restriction on the dimension of spaces that admit large sets of pairwise anti-commuting invertible matrices.

Theorem 4.4 [Reference Kestelman22, Theorem 2]

If there exist at least $2q$ pairwise anti-commuting invertible $m \times m$ matrices, then $2^q$ divides m.

We also note that if we have an even number $\{ A_j\}_{j=1}^{2q}$ of pairwise anti-commuting invertible matrices, then their product $A_1A_2\cdots A_{2q-1}A_{2q}$ is an invertible matrix that anti-commutes with every element of $\{ A_j\}_{j=1}^{2q}$ . Hence the maximal number of pairwise anti-commuting invertible matrices is always an odd number.

It follows from Theorem 4.4 that if $m = 2^qp$ , where p is odd, then the maximum possible number of pairwise anti-commuting invertible $m \times m$ matrices is $2q+1$ . An inductive construction of Kestelman shows that this upper bound is always attained with a set of Hermitian unitary matrices. We can use elements of the Pauli groups to give explicit maximal sets of examples. For the base case where $q=1$ , the matrices $X\otimes I_p$ , $Y\otimes I_p$ , and $Z\otimes I_p$ are three anti-commuting Hermitian unitary $m \times m$ matrices. For the inductive step, suppose $\{ B_i\}_{i=1}^{2q+1}$ is a set of $2q+1$ pairwise anti-commuting $m \times m$ Hermitian unitary matrices; then $\{ X\otimes B_i\}_{i=1}^{2q+1}\cup \{Y\otimes I_m,Z\otimes I_m\}$ is a set of $2q+3$ pairwise anti-commuting $2m \times 2m$ Hermitian unitary matrices.
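A minimal computational sketch of this inductive construction (our addition; the function name `anticommuting_family` is ours), which builds $2q+1$ pairwise anti-commuting Hermitian unitaries of size $2^q p \times 2^q p$ and checks the required relations:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def anticommuting_family(q, p=1):
    """Return 2q+1 pairwise anti-commuting Hermitian unitary matrices
    of size (2^q * p) x (2^q * p), following the inductive construction."""
    family = [np.kron(X, np.eye(p)), np.kron(Y, np.eye(p)), np.kron(Z, np.eye(p))]  # q = 1
    for _ in range(q - 1):
        Im = np.eye(family[0].shape[0])
        family = [np.kron(X, B) for B in family] + [np.kron(Y, Im), np.kron(Z, Im)]
    return family

fam = anticommuting_family(q=2)
for i, A in enumerate(fam):
    assert np.allclose(A @ A.conj().T, np.eye(A.shape[0]))   # unitary
    assert np.allclose(A, A.conj().T)                        # Hermitian
    for B in fam[i + 1:]:
        assert np.allclose(A @ B, -B @ A)                    # pairwise anti-commuting
print(f"built {len(fam)} pairwise anti-commuting Hermitian unitaries of size {fam[0].shape[0]}")
```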

We can then use the construction from earlier in this section together with these anti-commuting Hermitian unitary matrices to construct pairwise isoclinic subspaces. Let n be an even number with $n=2^kp$ where p is odd. Then we have $2k-1$ pairwise anti-commuting $\frac {n}{2} \times \frac {n}{2}$ Hermitian unitary matrices. Hence the above graph construction applied to these matrices, together with the subspace $\mathcal C_\infty $ , gives $2k$ pairwise isoclinic $\frac {n}{2}$ -dimensional subspaces of $\mathbb {C}^n$ .

4.3 Mutually unbiased quantum measurements

In this section, we show how the generalized Knill–Laflamme conditions can be applied to a recently introduced notion in quantum information theory that is used in quantum cryptography.

By a d-outcome (quantum) measurement acting on a Hilbert space $\mathcal H$ , we mean a set of orthogonal projections $\{P_a \}_{a=1}^d$ with mutually orthogonal ranges that span all of $\mathcal H$ , so $\sum _a P_a = I$ . The following definition is from [Reference Farkas, Kaniewski and Nayak13, Reference Tavakoli, Farkas, Rosset, Bancal and Kaniewski37], and was motivated by the desire to find higher dimensional versions of mutually unbiased bases [Reference Durt, Englert, Bengtsson and Życzkowski12], which are captured in the special case with one-dimensional projections.

Definition 4.5 Two d-outcome measurements $\{P_a \}_{a=1}^d$ and $\{Q_b \}_{b=1}^d$ on a Hilbert space $\mathcal H$ are called mutually unbiased measurements (MUMs) if

(4.3) $$ \begin{align} P_a = d \, P_a Q_b P_a \quad \quad \mathrm{and} \quad \quad Q_b = d \, Q_b P_a Q_b, \end{align} $$

for all $1 \leq a,b \leq d$ .

By computing traces on both sides of these equations (as we did in an earlier proof), one sees that all of the projections $P_a$ , $Q_b$ must have the same rank, which we will denote in this section by the integer $k = \dim (P_a \mathcal H) = \dim (Q_b \mathcal H)$ . Since the individual sets of projections have mutually orthogonal ranges, and hence trivially satisfy the isoclinic equations Eq. (2.1) (with $\lambda =0$ ), it follows from Theorem 2.2 and Eq. (4.3) that the entire set of ranges of the projections $\{P_a, Q_b \}_{a,b}$ forms a family of isoclinic subspaces.
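For instance (a small check we add for orientation, not from the article), the rank-one projections onto the computational basis and the Hadamard basis of $\mathbb{C}^2$ form a pair of 2-outcome MUMs, recovering the mutually unbiased bases case with $k = 1$:

```python
import numpy as np

# Computational basis projections and Hadamard (|+>, |->) basis projections on C^2.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
Q = [np.outer(plus, plus), np.outer(minus, minus)]

d = 2
for Pa in P:
    for Qb in Q:
        assert np.allclose(d * Pa @ Qb @ Pa, Pa)   # Eq. (4.3)
        assert np.allclose(d * Qb @ Pa @ Qb, Qb)
print("the two bases define mutually unbiased measurements (d = 2, k = 1)")
```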

We can thus apply the results of the previous section to MUMs, and doing so yields the following testable conditions for MUMs when given a family of unitary operators and a subspace, paralleling the Knill–Laflamme conditions for quantum error correction. To keep simple notation, we state the result for “2-element” MUMs (i.e., those with two measurements), but the result extends in an obvious way to MUMs of any size.

Theorem 4.6 A family of unitary operators $\{ U_i \}_i$ on a Hilbert space $\mathcal H$ and a subspace $\mathcal C$ defines a d-outcome 2-element MUM if and only if there is a partition of the partial isometries $\{ U_i P_{\mathcal C} \}_i$ into two sets $\{ V_a \}_{a=1}^d$ , $\{ W_b \}_{b=1}^d$ for which the projection sets $\{ V_a V_a^*\}_a$ and $\{ W_b W_b^* \}_b$ define measurements and there are unitary operators $U_{ab}$ such that

(4.4) $$ \begin{align} P_{\mathcal C} V_a^* W_b P_{\mathcal C} = \frac{1}{\sqrt{d}} \, U_{ab} P_{\mathcal C} = \frac{1}{\sqrt{d}} \, P_{\mathcal C} U_{ab} \quad \quad \forall\, a,b. \end{align} $$

Proof The forward direction of the proof follows directly from Lemma 3.2, and the backward direction follows from Theorem 3.1.

As shown in [Reference Farkas, Kaniewski and Nayak13], any pair of finite-dimensional MUMs can be written in a canonical form, as two sets of d orthogonal (rank-k) projections on $\mathbb {C}^k\otimes \mathbb {C}^d$ given by (up to a change of basis),

$$\begin{align*}P_a = I_k \otimes |a\rangle\!\langle a | \quad\quad \forall\, 1\leq a \leq d, \end{align*}$$

where $\{ | a \rangle \}_{a=1}^d$ is the computational basis for $\mathbb {C}^d$ , and in the same basis,

$$\begin{align*}Q_b = \frac1d \sum_{i,j}^d V^b_{ij} \otimes |i\rangle\!\langle j | \quad \quad \forall\, 1 \leq b \leq d, \end{align*}$$

where $V^b_{ij}$ are operators on $\mathbb {C}^k$ that satisfy the following operator relations:

(4.5) $$ \begin{align} \left\{ \begin{array}{rcll} V^b_{ii} &=& I_k & \forall\, b,i \\ (V^b_{ij})^* &=& V^b_{ji} & \forall\, b,i,j \\ V^b_{i_1 i_2} &=& V^b_{i_1 j} V^b_{j i_2} & \forall\, b,i_1, i_2, j \\ \sum_b V^b_{ij} &=& \delta_{ij} \, d \, I_k & \forall\, i, j. \end{array} \right. \end{align} $$

Let us show how the canonical form is related to the characterization of MUMs given in Theorem 4.6. Firstly, if we have a MUM given by rank-k projections $\{ P_a , Q_b\}_{a,b=1}^d$ in its canonical form, we can pick one of the $P_a$ to identify as $P_{\mathcal C}$ , let us say $P_{\mathcal C} = P_1 = I_k \otimes |1\rangle \!\langle 1 |$ . Then we can define $V_a = I_k \otimes |a\rangle \!\langle 1 |$ for $1\leq a \leq d$ , and

$$\begin{align*}W_b = \frac{1}{\sqrt{d}} \, \sum_{j=1}^d V_{j1}^b \otimes |j\rangle\!\langle 1 | \quad \quad \forall\, 1\leq b \leq d. \end{align*}$$

It then follows that for all $a,b$ , we have $V_a^* V_a = P_{\mathcal C}$ , $P_a = V_a V_a^*$ , $W_b^* W_b = P_{\mathcal C}$ , $Q_b = W_b W_b^*$ , and Eq. (4.4) are satisfied with $U_{ab} = V_{a1}^b \otimes |1\rangle \!\langle 1 |$ .

On the other hand, we can use the generalized Knill–Laflamme conditions of Theorem 4.6 to give an alternate derivation of the MUM canonical form. We state this as a result.

Corollary 4.7 Suppose that $\{ V_a\}_{a=1}^d$ and $\{ W_b \}_{b=1}^d$ are partial isometries on $\mathcal H$ with k-dimensional mutually orthogonal ranges in the individual sets and satisfying the relations of Eq. (4.4) for a subspace $\mathcal C$ . Then there is a unitary $U: \mathcal H \rightarrow \mathbb {C}^k \otimes \mathbb {C}^d$ such that the family of projections $\{ U V_a V_a^* U^* , U W_b W_b^* U^* \}_{a,b=1}^d$ is a MUM in the canonical form.

Proof First note that we can assume without loss of generality that $V_1$ is a projection and satisfies $V_1 = V_1 V_1^* = P_{\mathcal C}$ .

Indeed, this can be accomplished by the “pull-back” technique used earlier, wherein we multiply both sides of Eq. (4.4) on the left by $V_1$ and on the right by $V_1^*$ . Then one can verify the equations will be satisfied by the operators $V_a^\prime = V_a V_1^*$ , $W_b^\prime = W_b V_1^*$ and $U_{ab}^\prime = V_1 U_{ab} V_1^*$ .

As $\{P_a: = V_a V_a^* \}_a$ is a family of projections with mutually orthogonal (k-dimensional) ranges, we can find a unitary $U: \mathcal H \rightarrow \mathbb {C}^k \otimes \mathbb {C}^d$ such that $UP_aU^* = I_k \otimes |a\rangle \!\langle a |$ and $V_{a,U}:= UV_a U^* = I_k \otimes |a\rangle \!\langle 1 |$ .

Now observe that for all b, we have $W_{b,U}:= U W_b U^* = (UW_b U^*)(UP_1 U^*)$ , and so there are operators $V_{j1}^b$ on $\mathbb {C}^k$ such that $W_{b,U} = \sum _{j=1}^d V_{j1}^b \otimes |j\rangle \!\langle 1 |$ . Then, by defining $Q_b:= W_{b,U} W_{b,U}^*$ , we leave it to the interested reader to verify that the projections $\{ P_a, Q_b\}_{a,b=1}^d$ define a MUM in canonical form.

We next extend the construction that leads to the classical family of isoclinic subspaces from n-planes presented in the previous subsection, and use it to construct a family of MUMs.

Example 4.8 Let $d\geq 1$ be a fixed positive integer and let $\omega = e^{\frac {2\pi i}{d}}$ be a primitive dth root of unity. Suppose that A is a unitary operator on a Hilbert space $\mathcal H$ with $A^d=I$ . We will focus on finite-dimensional spaces here, but the construction goes through for any Hilbert space.

Let $k= \dim \mathcal H$ and consider the k-dimensional (as A is unitary) subspace of $\mathcal K := \mathcal H^{(d)} \cong \mathcal H \otimes \mathbb {C}^d$ given by

$$\begin{align*}\mathcal C_A = \big\{ (x, Ax, \ldots , A^{d-1} x ) \, : \, x\in\mathcal H \big\}. \end{align*}$$

Generalizing the $d=2$ case above, note that the projection $P_A$ of $\mathcal K$ onto $\mathcal C_A$ is given in $d\times d$ block matrix form as,

$$\begin{align*}P_{A} = \frac1d \begin{pmatrix} I & A^* & (A^*)^2 & \cdots & (A^*)^{d-1} \\ A & I & A^* & \dots&(A^*)^{d-2} \\ A^2 & A & I & \ddots & \vdots\\ \vdots & \vdots & \ddots & \ddots & A^* \\ A^{d-1} & A^{d-2} & \cdots & A & I \end{pmatrix}. \end{align*}$$

We can similarly carry through this subspace and projection description for each of the d unitary operators $\omega ^r A$ , for $0 \leq r \leq d-1$ , replacing A by $\omega ^r A$ in the projection above, and obtain the rank-k projections $\{ P_{(\omega ^r A)} \}_{r=0}^{d-1}$ with ranges

$$\begin{align*}P_{(\omega^r A)} \mathcal K = \big\{ (x, (\omega^r A) x, \ldots , (\omega^r A)^{d-1} x ) \, : \, x\in\mathcal H \big\}. \end{align*}$$

Observe that this family of (d in total) projections satisfies the completeness relation

$$\begin{align*}\sum_{r=0}^{d-1} P_{(\omega^r A)} = I, \end{align*}$$

which makes use of the cyclotomic polynomial root equation $1 + (\omega ^s) + \cdots + (\omega ^s)^{d-1} = 0$ for all $1 \leq s \leq d-1$ . This also implies the projections have mutually orthogonal ranges as, for all s,

$$\begin{align*}P_{(\omega^s A)} = P_{(\omega^s A)} \, I = \sum_{r=0}^{d-1} P_{(\omega^s A)} P_{(\omega^r A)} = P_{(\omega^s A)} + \sum_{r\neq s} P_{(\omega^s A)} P_{(\omega^r A)}, \end{align*}$$

and so each $P_{(\omega ^s A)} P_{(\omega ^r A)} =0$ for $r\neq s$ (multiply on the right by $P_{(\omega ^s A)}$ and use positivity of the resulting summands).

Below we will also make use of a generalization of the projection $P_\infty $ , which projects onto the subspace $\mathcal C_\infty = \{ (0, \ldots , 0, x ) \, : \, x\in \mathcal H \}$ of $\mathcal H^{(d)}$ . In the $d\times d$ block matrix form used above, $P_\infty $ is represented by the matrix with I in the $(d,d)$ entry and $0$ ’s in all other entries.
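For concreteness, here is a small numerical sketch (ours, not from the article; the choice of A as the clock matrix is our own) with $d = 3$ and $A = \mathrm{diag}(1, \omega, \omega^2)$ on $\mathbb{C}^3$, verifying that the three graph projections $P_{(\omega^r A)}$ are orthogonal projections with mutually orthogonal ranges that sum to the identity.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
A = np.diag([1, omega, omega**2])      # a unitary with A^d = I

def graph_projection(A, d):
    """Projection of H^(d) onto C_A = {(x, Ax, ..., A^{d-1} x)}; block (i, j) is A^{i-j}/d."""
    return np.block([[np.linalg.matrix_power(A, i) @ np.linalg.matrix_power(A.conj().T, j)
                      for j in range(d)] for i in range(d)]) / d

projs = [graph_projection(omega**r * A, d) for r in range(d)]

for P in projs:
    assert np.allclose(P, P.conj().T) and np.allclose(P @ P, P)   # orthogonal projections
assert np.allclose(sum(projs), np.eye(d * A.shape[0]))            # completeness relation
for r in range(d):
    for s in range(r + 1, d):
        assert np.allclose(projs[r] @ projs[s], 0)                # mutually orthogonal ranges
print("the d graph projections form a d-outcome measurement")
```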

We now apply Theorem 4.6 to the above class of projections to construct MUMs.

Corollary 4.9 Let d be a positive integer and let $\omega $ be a primitive dth root of unity. Suppose $A_1,\dots , A_n$ are unitary operators on a Hilbert space $\mathcal H$ such that:

$$\begin{align*}A_i^d = I \quad \mathrm{and} \quad A_i A_j = \omega A_j A_i \quad \mathrm{for} \quad 1 \leq i < j \leq n. \end{align*}$$

Then the sets of projections $\mathcal P_i = \big \{ P_{(\omega ^r A_i)} \big \}_{r=0}^{d-1}$ , for $1\leq i \leq n$ , form an n-element family of d-outcome MUMs, with projections of rank equal to $\dim \mathcal H$ .

Proof Let us focus for the proof on a pair of unitary operators A, B on $\mathcal H$ with $A^d = I = B^d$ and $AB = \omega BA$ . To apply Theorem 4.6, we will use $\mathcal C = \mathcal C_\infty $ as the anchor subspace (recall it has projection $P_\infty $ ). We define (for any such A) a partial isometry $V_A$ on $\mathcal H^{(d)}$ with adjoint given in $d\times d$ block matrix form by

$$\begin{align*}V_A^* = \frac{1}{\sqrt{d}} \begin{pmatrix} 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \ldots & 0 \\ I & A^* & \ldots & (A^*)^{d-1} \end{pmatrix}. \end{align*}$$

Then observe that $P_A = V_A V_A^*$ and $P_\infty = V_A^* V_A$ , and $V_A = V_A P_\infty $ . As we did above, we can replace A with $\omega ^rA$ , for $0\leq r \leq d-1$ , to define $V_{(\omega ^r A)}$ accordingly.

It remains to verify Eq. (4.4) are satisfied by this family of partial isometries (with the $V_{(\omega ^r A)}$ , respectively $V_{(\omega ^s B)}$ , playing the roles of $V_a$ ’s, respectively $W_b$ ’s). To this end, observe that for all $0\leq r,s\leq d-1$ , we have

$$\begin{align*}P_\infty V_{(\omega^r A)}^* V_{(\omega^s B)} P_\infty = V_{(\omega^r A)}^* V_{(\omega^s B)} = \frac{1}{\sqrt{d}} V_{(A,r,B,s)}^{(\infty)} P_\infty = \frac{1}{\sqrt{d}} P_\infty V_{(A,r,B,s)}^{(\infty)} , \end{align*}$$

where $V_{(A,r,B,s)}^{(\infty )}$ is given by

$$\begin{align*}V_{(A,r,B,s)}^{(\infty)} = \begin{pmatrix} 0 & \ldots & 0 \\ \vdots & \vdots & \vdots \\ 0 & \ldots & U_{(A,r,B,s)} \end{pmatrix}, \end{align*}$$

with

$$\begin{align*}U_{(A,r,B,s)} = \frac{1}{\sqrt{d}} \sum_{j=0}^{d-1} (\omega^{s-r})^j \, (A^*)^j B^j. \end{align*}$$

Thus we complete the proof by showing $U_{(A,r,B,s)}$ is unitary. Begin the calculation as follows, using the facts that $A^* = A^{d-1}$ , $B^* = B^{d-1}$ , and the commutation relation $AB = \omega BA$ :

$$ \begin{align*} U_{(A,r,B,s)} U_{(A,r,B,s)}^* &= \frac{1}{d} \sum_{j_1,j_2=0}^{d-1} (\omega^{s-r})^{j_1-j_2} \, (A^*)^{j_1} B^{j_1} (B^*)^{j_2} A^{j_2} \\ &= \frac{1}{d} \sum_{j_1,j_2=0}^{d-1} (\omega^{s-r})^{j_1-j_2} \, (A^*)^{j_1} B^{[j_1 + j_2(d-1)]} A^{j_2} \\ &= I \, + \, \frac{1}{d} \sum_{j_1 \neq j_2} \omega^{f(r,s,j_1,j_2)} \, (A^{[j_1(d-1) + j_2]}) (B^{[j_1 + j_2(d-1)]}), \end{align*} $$

where $f(r,s,j_1,j_2) = (s-r) (j_1-j_2) - j_2[j_1 + j_2(d-1)]$ . In the sum, make the substitution $i = j_1 - j_2 \, (\mathrm {mod}\,d)$ for $j_1\neq j_2$ , and observe that:

$$\begin{align*}\left\{ \begin{array}{lcll} f(r,s,j_1,j_2) & \equiv & (s-r)i - j_2 i & (\mathrm{mod}\,d) \\ j_1(d-1) + j_2 & \equiv & -i & (\mathrm{mod}\,d) \\ j_1 + j_2(d-1) & \equiv & i & (\mathrm{mod}\,d). \end{array} \right. \end{align*}$$

Hence, with this substitution, the (non-identity) sum in the last line of the calculation above becomes:

$$\begin{align*}\frac{1}{d} \sum_{i=1}^{d-1} \sum_{j_2=0}^{d-1} \omega^{[(s-r)i - j_2 i]} \, A^{-i} B^{i} = \frac{1}{d} \sum_{i=1}^{d-1} \omega^{(s-r)i} \Big( \sum_{j_2=0}^{d-1} ( \omega^{-i})^{j_2} \Big) \, A^{-i} B^{i} = 0, \end{align*}$$

again here using the cyclotomic polynomial identity. It now follows that $U_{(A,r,B,s)}$ is unitary.

Thus we have shown that Eq. (4.4) are satisfied for this family of partial isometries, and so it follows from Theorem 4.6, and extending the above argument to the whole family $\{A_i\}_{i=1}^n$ , that the projection families $\{\mathcal P_i\}_{i=1}^n$ form a MUM; in particular, we have

$$\begin{align*}P_{(\omega^r A_i)} P_{(\omega^s A_j)} P_{(\omega^r A_i)} = \frac1d P_{(\omega^r A_i)} , \end{align*}$$

for all $0 \leq r,s \leq d-1$ and all $1 \leq i < j \leq n$ .
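As a check of the construction (a sketch we add, not from the article; the choice of clock and shift matrices is ours), take $d = 3$ and the standard clock and shift matrices on $\mathbb{C}^3$, which satisfy the hypotheses of the corollary with $A_1 A_2 = \omega A_2 A_1$; the two resulting families of graph projections then form a pair of 3-outcome MUMs with rank-3 projections on $\mathbb{C}^9$.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
Zc = np.diag([1, omega, omega**2])             # clock matrix, A_1
Xs = np.roll(np.eye(d), 1, axis=0)             # shift matrix, A_2
assert np.allclose(Zc @ Xs, omega * Xs @ Zc)   # A_1 A_2 = omega A_2 A_1

def graph_projection(A, d):
    """Projection of H^(d) onto C_A = {(x, Ax, ..., A^{d-1} x)}."""
    return np.block([[np.linalg.matrix_power(A, i) @ np.linalg.matrix_power(A.conj().T, j)
                      for j in range(d)] for i in range(d)]) / d

P = [graph_projection(omega**r * Zc, d) for r in range(d)]   # measurement from A_1
Q = [graph_projection(omega**s * Xs, d) for s in range(d)]   # measurement from A_2

assert np.allclose(sum(P), np.eye(d * d)) and np.allclose(sum(Q), np.eye(d * d))
for Pr in P:
    for Qs in Q:
        assert np.allclose(d * Pr @ Qs @ Pr, Pr)   # Eq. (4.3): mutual unbiasedness
        assert np.allclose(d * Qs @ Pr @ Qs, Qs)
print("the two graph-projection families form mutually unbiased measurements")
```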

Remark 4.10 This MUM construction appears to be new. It is also “coordinate-free,” in that it only relies on the algebraic relations of the defining unitary operators and not a particular Hilbert space representation of them. It would be interesting to investigate possible connections between the class of MUMs that are generated by this construction and constructions determined by specific representations (e.g., such as the n-qudit construction of [Reference Farkas, Kaniewski and Nayak13]).

5 Concluding remarks

This work has brought together the classical topic of isoclinic subspaces with the modern topic of quantum error correction, with a generalization of a key result from the latter to the former. This generates a number of potentially new lines of investigation, some as direct outgrowths of the current work and others more speculative. We briefly note a few possibilities here.

The Knill–Laflamme theorem and conditions have sparked several generalizations and applications even within the subject of quantum error correction itself. As noted above, key results in the stabilizer formalism [Reference Gottesman16] rely on it, and as we showed, the generalized conditions presented here capture the logical operators for a stabilizer code along with the code’s correctable error sets. We wonder if this idea, and in particular using the general conditions to algebraically describe logical operators, extends to more general codes and error models.

It would be interesting to know if extensions of the Knill–Laflamme theory for other types of quantum error correction could themselves have natural extensions that would generalize the conditions considered here. Specifically, we note the operator algebra generalizations of quantum error correction [Reference Bény, Kempf and Kribs3Reference Bény, Kempf and Kribs5] and related notions and applications such as complementarity with private codes [Reference Crann, Kribs, Levene and Todorov10, Reference Jochym-O’Connor, Kribs, Laflamme and Plosker20, Reference Kretschmann, Kribs and Spekkens25, Reference Kribs, Levick, Nelson, Pereira and Rahaman27]. From matrix theory and (operator) dilation theory perspectives, we further expect that the notions of higher-rank numerical ranges [Reference Choi, Giesinger, Holbrook and Kribs7Reference Choi, Kribs and Zyczkowski9, Reference Gau, Li, Poon and Sze15, Reference Li and Poon29Reference Martinez-Avendano33, Reference Woerdeman38], both singular and joint, consideration of which was initially motivated by the Knill–Laflamme conditions, have extensions motivated by the generalized conditions.

Whenever a generalized framework brings different notions under the same umbrella, new crossover lines of investigation can also be considered. The applications and examples we presented in Section 4 have already provided some indications of this. Indeed, Pauli groups motivated by quantum error models gave new examples of constructions of isoclinic subspaces; the generalized conditions gave a new perspective on a classical family of isoclinic subspaces; that classical construction in turn motivated a new construction of MUMs; and the generalized conditions gave an alternate construction of the canonical form for MUMs, which we think suggests a deeper (and yet to be explored) connection between the theory of MUMs and quantum error correction.

We also wonder if the generalized Knill–Laflamme conditions might provide new information or perspectives when applied to other families of isoclinic subspaces beyond what we have considered here. For instance, the isoclinic subspace conditions have been shown to arise in the study of quantum designs [Reference Zauner43].

We plan to undertake some of these investigations elsewhere and we invite other interested researchers to do the same.

Footnotes

The first author was partially supported by the NSERC Discovery Grant RGPIN-2024-400160. The second author was partially supported by the NSERC Discovery Grant RGPIN-2022-04149.

References

Balla, I. and Sudakov, B., Equiangular subspaces in Euclidean spaces. Discrete Comput. Geom. 61(2019), no. 1, 81–90.
Bennett, C. H., DiVincenzo, D. P., Smolin, J. A., and Wootters, W. K., Mixed state entanglement and quantum error correction. Phys. Rev. A 54(1996), 3824.
Bény, C., Kempf, A., and Kribs, D. W., Generalization of quantum error correction via the Heisenberg picture. Phys. Rev. Lett. 98(2007), no. 10, 100502.
Bény, C., Kempf, A., and Kribs, D. W., Quantum error correction of observables. Phys. Rev. A 76(2007), no. 4, 042303.
Bény, C., Kempf, A., and Kribs, D. W., Quantum error correction on infinite-dimensional Hilbert spaces. J. Math. Phys. 50(2009), no. 6, 062108.
Björck, A. and Golub, G. H., Numerical methods for computing angles between linear subspaces. Math. Comput. 27(1973), no. 123, 579–594.
Choi, M.-D., Giesinger, M., Holbrook, J. A., and Kribs, D. W., Geometry of higher-rank numerical ranges. Linear Multilinear Algebra 56(2008), nos. 1–2, 53–64.
Choi, M.-D., Kribs, D. W., and Życzkowski, K., Higher-rank numerical ranges and compression problems. Linear Algebra Appl. 418(2006), nos. 2–3, 828–839.
Choi, M.-D., Kribs, D. W., and Życzkowski, K., Quantum error correcting codes from the compression formalism. Rep. Math. Phys. 58(2006), no. 1, 77–91.
Crann, J., Kribs, D. W., Levene, R. H., and Todorov, I. G., Private algebras in quantum information and infinite-dimensional complementarity. J. Math. Phys. 57(2016), no. 1, 015208.
Davidson, K. R., C*-algebras by example. American Mathematical Society, Providence, RI, 1996.
Durt, T., Englert, B.-G., Bengtsson, I., and Życzkowski, K., On mutually unbiased bases. Int. J. Quantum Inf. 8(2010), no. 4, 535–640.
Farkas, M., Kaniewski, J., and Nayak, A., Mutually unbiased measurements, Hadamard matrices, and superdense coding. IEEE Trans. Inf. Theory 69(2023), no. 6, 3814–3824.
Garling, D. J. H., Clifford algebras: An introduction. Vol. 78, Cambridge University Press, Cambridge, 2011.
Gau, H.-L., Li, C.-K., Poon, Y.-T., and Sze, N.-S., Higher rank numerical ranges of normal matrices. SIAM J. Matrix Anal. Appl. 32(2011), 23–43.
Gottesman, D., Class of quantum error-correcting codes saturating the quantum Hamming bound. Phys. Rev. A 54(1996), 1862.
Hamilton, W. R., Theory of quaternions. Proc. Royal Irish Acad. (1836–1869) 3(1844), 1–16.
Hardy, A. S., Elements of quaternions. Ginn, Heath, & Company, 1881.
Hoggar, S. G., New sets of equi-isoclinic $n$-planes from old. Proc. Edinburgh Math. Soc. 20(1977), no. 4, 287–291.
Jochym-O’Connor, T., Kribs, D. W., Laflamme, R., and Plosker, S., Quantum subsystems: exploring the complementarity of quantum privacy and error correction. Phys. Rev. A 90(2014), no. 3, 032305.
Jordan, C., Essai sur la géométrie à $n$ dimensions. Bull. Soc. Math. France 3(1875), 103–174.
Kestelman, H., Anticommuting linear transformations. Can. J. Math. 13(1961), 614–624.
Knill, E. and Laflamme, R., Theory of quantum error-correcting codes. Phys. Rev. A 55(1997), 900.
Knill, E., Laflamme, R., and Viola, L., Theory of quantum error correction for general noise. Phys. Rev. Lett. 84(2000), no. 11, 2525.
Kretschmann, D., Kribs, D. W., and Spekkens, R. W., Complementarity of private and correctable subsystems in quantum cryptography and error correction. Phys. Rev. A 78(2008), no. 3, 032330.
Kribs, D. W., A quantum computing primer for operator theorists. Linear Algebra Appl. 400(2005), 147–167.
Kribs, D. W., Levick, J., Nelson, M. I., Pereira, R., and Rahaman, M., Quantum complementarity and operator structures. Quantum Inf. Comput. 19(2019), 67–83.
Kribs, D. W., Mammarella, D., and Pereira, R., Isoclinic subspaces and quantum error correction. Oper. Matrices 15(2021), no. 2, 571–580.
Li, C.-K. and Poon, Y.-T., Generalized numerical ranges and quantum error correction. J. Oper. Theory 66(2011), 335–351.
Li, C.-K., Poon, Y.-T., and Sze, N.-S., Higher rank numerical ranges and low rank perturbations of quantum channels. J. Math. Anal. Appl. 348(2008), no. 2, 843–855.
Li, C.-K., Poon, Y.-T., and Sze, N.-S., Condition for the higher rank numerical range to be non-empty. Linear Multilinear Algebra 57(2009), no. 4, 365–368.
Li, C.-K. and Sze, N.-S., Canonical forms, higher rank numerical ranges, totally isotropic subspaces, and matrix equations. Proc. Amer. Math. Soc. 136(2008), no. 9, 3013–3023.
Martinez-Avendano, R. A., Higher-rank numerical range in infinite-dimensional Hilbert space. Oper. Matrices 2(2008), no. 2, 249–264.
Nielsen, M. A. and Chuang, I., Quantum computation and quantum information. Cambridge University Press, Cambridge, 2000.
Shor, P. W., Scheme for reducing decoherence in quantum computing memory. Phys. Rev. A 52(1995), R2493.
Steane, A. M., Error correcting codes in quantum theory. Phys. Rev. Lett. 77(1996), no. 5, 793.
Tavakoli, A., Farkas, M., Rosset, D., Bancal, J.-D., and Kaniewski, J., Mutually unbiased bases and symmetric informationally complete measurements in Bell experiments. Sci. Adv. 7(2021), no. 7, eabc3847.
Woerdeman, H. J., The higher rank numerical range is convex. Linear Multilinear Algebra 56(2008), nos. 1–2, 65–67.
Wong, Y.-C., Clifford parallels in elliptic $(2n-1)$-spaces and isoclinic $n$-planes in Euclidean $2n$-space. Bull. Amer. Math. Soc. 66(1960), no. 4, 289–293.
Wong, Y.-C., Isoclinic $n$-planes in Euclidean $2n$-space, Clifford parallels in elliptic $(2n-1)$-space, and the Hurwitz matrix equations. Vol. 41, American Mathematical Society, Providence, RI, 1961.
Wong, Y.-C., Linear geometry in Euclidean 4-space. Southeast Asian Mathematical Society (SEAMS), 1977.
Wong, Y.-C. and Mok, K.-P., Normally related $n$-planes and isoclinic $n$-planes in $\mathbb{R}^{2n}$ and strongly linearly independent matrices of order $n$. Linear Algebra Appl. 139(1990), 31–52.
Zauner, G., Quantum designs. Ph.D. thesis, University of Vienna, Vienna, 1999.
Zhang, F., Quaternions and matrices of quaternions. Linear Algebra Appl. 251(1997), 21–57.