
Operator noncommutative functions

Published online by Cambridge University Press:  24 May 2022

Meric Augat*
Affiliation:
Washington University in St. Louis, St. Louis, MO, USA e-mail: [email protected]
John E. McCarthy
Affiliation:
Washington University in St. Louis, St. Louis, MO, USA e-mail: [email protected]

Abstract

We establish a theory of noncommutative (NC) functions on a class of von Neumann algebras with a particular direct sum property, e.g., $B({\mathcal H})$ . In contrast to the theory’s origins, we do not rely on appealing to results from the matricial case. We prove that the $k{\mathrm {th}}$ directional derivative of any NC function at a scalar point is a k-linear homogeneous polynomial in its directions. Consequences include the fact that NC functions defined on domains containing scalar points can be uniformly approximated by free polynomials as well as realization formulas for NC functions bounded on particular sets, e.g., the NC polydisk and NC row ball.

Type
Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of The Canadian Mathematical Society

1 Introduction

Noncommutative (NC) function theory, as first proposed in the seminal work of Taylor [Reference Taylor25, Reference Taylor26] and developed, for example, in the monograph [Reference Kaliuzhnyi-Verbovetskyi and Vinnikov16] by Kaliuzhnyi-Verbovetskyi and Vinnikov, is a matricial theory, that is, a theory of functions of d-tuples of matrices. Let ${\mathbb M}_n$ denote the n-by-n square matrices, and let

$$\begin{align*}{\mathbb M}^{[d]} \ := \ \cup_{n=1}^\infty {\mathbb M}_n^d. \end{align*}$$

An NC function f defined on a domain $\Omega $ in ${\mathbb M}^{[d]}$ is a function that satisfies the following two properties.

  1. (i) The function is graded: if $x \in {\mathbb M}_n^d$ , then $f(x) \in {\mathbb M}_n$ .

  2. (ii) It preserves intertwining: if $L : \mathbb C^m \to \mathbb C^n$ is linear, $x = (x^1, \dots , x^d) \in {\mathbb M}_m^d$ and $y = (y^1, \dots , y^d)$ are both in $\Omega $ and $Lx = yL$ (this means $L x^r = y^r L$ for each $1 \leq r \leq d$ ), then $Lf(x) = f(y) L $ .
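As an informal numerical aside (not part of the paper), the two defining properties are easy to illustrate for free polynomials, the prototypical NC functions. The sketch below, assuming NumPy, checks gradedness and intertwining preservation for the hypothetical polynomial $f(x) = x^1x^2 + x^2x^1 + (x^1)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """A hypothetical free polynomial in two variables:
    f(x) = x^1 x^2 + x^2 x^1 + (x^1)^2, applied entrywise to a 2-tuple."""
    x1, x2 = x
    return x1 @ x2 + x2 @ x1 + x1 @ x1

# Gradedness: n-by-n inputs give an n-by-n output.
x = (rng.standard_normal((3, 3)), rng.standard_normal((3, 3)))
assert f(x).shape == (3, 3)

# Intertwining: if L x^r = y^r L for each r, then L f(x) = f(y) L.
# Take y = S x S^{-1}, so L = S intertwines x and y exactly.
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # well-conditioned, invertible
Sinv = np.linalg.inv(S)
y = tuple(S @ xr @ Sinv for xr in x)
assert np.allclose(S @ f(x), f(y) @ S)
```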

The theory has been very successful, and can be thought of as extending free polynomials in d variables to NC holomorphic functions. See, for example, the work of Helton, Klep, and McCullough [Reference Helton, Klep and McCullough6–Reference Helton and McCullough10]; Salomon, Shalit, and Shamovich [Reference Salomon, Shalit and Shamovich23, Reference Salomon, Shalit and Shamovich24]; and Ball, Marx, and Vinnikov [Reference Ball, Marx and Vinnikov5].

However, the negative answer to Connes’s embedding conjecture [Reference Ji, Natarajan, Vidick, Wright and Yuen11] shows that evaluating NC polynomials on tuples of matrices is not sufficient to fully capture certain types of information, e.g., trace positivity of a free polynomial evaluated on tuples of self-adjoint contractions [Reference Klep and Schweighofer17]. Thus, there is an incentive to understand NC functions applied not to matrices, but to operators on an infinite-dimensional Hilbert space ${\mathcal H}$ . Accordingly, it seems natural to exploit the fact that there are (noncanonical) identifications of a matrix of operators with an individual operator, and so one is led to consider functions that map elements of $B({\mathcal H})^d$ to $B({\mathcal H})$ and preserve intertwining.

Such functions were studied in [Reference Agler and McCarthy2, Reference Mancuso19]. A key assumption in those papers, however, was that the function was also sequentially continuous in the strong operator topology. This assumption was needed in order to prove that the derivatives at $0$ were actually free polynomials, by invoking this property from the matricial theory and using the density of finite rank operators in the strong operator topology. The main purpose of this note is to develop a theory of NC functions of operator tuples that does not depend on the matricial theory.

Other approaches to studying NC functions of operator tuples include the work of Pascoe and Tully-Doyle [Reference Pascoe and Tully-Doyle20]; Voiculescu [Reference Voiculescu27, Reference Voiculescu28]; Jury and Martin [Reference Jury and Martin13, Reference Jury and Martin14]; Jury, Martin, and Shamovich [Reference Jury, Martin and Shamovich15]; and Jury, Klep, Mancuso, McCullough, and Pascoe [Reference Jury, Klep, Mancuso, McCullough and Pascoe12].

For the rest of this paper, the following will be fixed. We shall let ${\mathcal H}$ be an infinite-dimensional Hilbert space. Let ${\mathcal A}$ be a unital subalgebra of $B({\mathcal H})$ that is closed in the norm topology. Let ${\mathcal T}_n({\mathcal A})$ denote the upper triangular n-by-n matrices with entries from ${\mathcal A}$ . We shall assume that ${\mathcal A}$ has the following direct sum property:

(1.1) $$ \begin{align} \forall n \geq 1, \ \exists\, U_n : \oplus_{j=1}^n {\mathcal H} \to {\mathcal H}, \ \mathrm{unitary,\ with\ } U_n ( {\mathcal T}_n({\mathcal A})) U_n^* \subseteq {\mathcal A}. \end{align} $$

Examples of such an ${\mathcal A}$ include $B({\mathcal H})$ ; the upper triangular matrices in $B({\mathcal H})$ with respect to a fixed basis; and any von Neumann algebra that can be written as a tensor product of an $I_\infty $ factor with something else.

We shall let d be a positive integer, and it will denote the number of variables. For a d-tuple $x \in {\mathcal A}^d$ , we shall write its coordinates with superscripts: $x = (x^1, \dots , x^d)$ . We shall topologize ${\mathcal A}^d$ with the relative norm topology from $B({\mathcal H})^d$ .

Definition 1.1 A set $\Omega \subseteq {\mathcal A}^d$ is called an NC domain if it is open and bounded, and closed with respect to finite direct sums in the following sense: for each $n \geq 2$ , there exists a unitary $U_n : {\mathcal H}^{(n)} \to {\mathcal H}$ so that whenever $x_1, \dots , x_n \in \Omega $ , then

(1.2) $$ \begin{align} U_n \ \begin{bmatrix} x_1 & 0& \cdots & 0 \\ 0 & x_2 & \cdots & 0 \\ & & \ddots \\ 0 & 0 & \cdots & x_n \end{bmatrix} U_n^* \ \in \ \Omega. \end{align} $$

Example 1.2 The prototypical examples of NC domains are balls. The reader is welcome to assume that $\Omega $ is either an NC polydisk, that is, of the form

(1.3) $$ \begin{align} \mathcal{P} ({\mathcal A}) = \{ x \in {\mathcal A}^{d} : \max_{1 \leq r \leq d} \| x^{r} \| < 1 \} , \end{align} $$

or an NC row ball, that is,

(1.4) $$ \begin{align} \mathcal{R}({\mathcal A}) = \{ x \in {\mathcal A}^d : x^1 (x^1)^* + \dots + x^d (x^d)^* < 1 \}. \end{align} $$

More examples are given in Section 6.
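As an informal numerical aside, membership in the NC polydisk (1.3) and the NC row ball (1.4) can be tested directly from the definitions; the strict operator inequality in (1.4) amounts to positive definiteness of $1 - \sum _r x^r (x^r)^*$. A NumPy sketch:

```python
import numpy as np

def in_polydisk(x):
    """max_r ||x^r|| < 1 in operator (spectral) norm: membership in P(A)."""
    return max(np.linalg.norm(xr, 2) for xr in x) < 1

def in_row_ball(x):
    """sum_r x^r (x^r)^* < 1 as a strict operator inequality, i.e.,
    I - sum_r x^r (x^r)^* is positive definite: membership in R(A)."""
    n = x[0].shape[0]
    gram = sum(xr @ xr.conj().T for xr in x)
    return bool(np.all(np.linalg.eigvalsh(np.eye(n) - gram) > 0))

rng = np.random.default_rng(1)
a, b = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
x = tuple(0.5 * m / np.linalg.norm(m, 2) for m in (a, b))  # each ||x^r|| = 0.5

assert in_polydisk(x)
# Each x^r (x^r)^* has norm 0.25, so their sum is at most 0.5 < 1:
assert in_row_ball(x)
# Scaling past the boundary leaves the polydisk:
assert not in_polydisk(tuple(3 * xr for xr in x))
```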

Definition 1.3 Let $\Omega \subseteq {\mathcal A}^d$ be an NC domain. A function $F: \Omega \to B({\mathcal H})$ is intertwining preserving if whenever $x,y \in \Omega $ and $L: {\mathcal H} \to {\mathcal H}$ is a bounded linear operator that satisfies $Lx = yL$ (i.e., $Lx^r = y^r L$ for each r), then $L F(x) = F(y) L$ .

We say F is an NC function if it is intertwining preserving and locally bounded on $\Omega $ .

Remark 1.4 For any positive integer b, we may similarly define an NC mapping ${\mathcal F}:\Omega \to B({\mathcal H})^{b}$ where ${\mathcal F} = (F^1,\dots , F^{b})$ and each $F^i:\Omega \to B({\mathcal H})$ is an NC function. Many of our results can be reinterpreted for NC mappings with little to no overhead.

In Section 2, we show that every NC function is Fréchet holomorphic. Our first main result is proved in Theorem 3.6. A scalar point a is a point each of whose components is a scalar multiple of the identity.

Theorem 1.5 Suppose $\Omega $ is an NC domain containing a scalar point a, and F is NC on $\Omega $ . Then, for each k, the $k\mathrm {th}$ derivative $D^kF(a) [ h_1, \dots , h_k]$ is a symmetric homogeneous free polynomial of degree k in $h_1, \dots , h_k$ .

We derive several consequences of this result. In Theorem 4.2, we show that if $\Omega $ is a balanced NC domain, then a function F on $\Omega $ is an NC function if and only if it can be uniformly approximated by free polynomials on every finite set. In Theorem 6.2, we show that NC functions on most balanced domains are automatically sequentially strong operator continuous. This allows us to prove that every NC function on the NC matrix polydisk (resp. row ball) has a unique extension to an NC function on $\mathcal {P}(B({\mathcal H}))$ (resp. $\mathcal {R}(B({\mathcal H}))$ ).

Similarity preserving maps of matrices were studied by Procesi [Reference Procesi21], who showed that they were all trace polynomials. In the matricial case, this can be used to prove the analogue of Theorem 1.5 [Reference Klep and Spenko18]. In the infinite-dimensional case, we cannot use this theory, which makes the proof of Theorem 1.5 more complicated. However, we can then use the theorem to prove that the only intertwining preserving bounded k-linear maps are the obvious ones, the free polynomials. In Theorem 5.1, we prove the following theorem.

Theorem 1.6 Let $\Omega $ be an NC domain. Let $\Lambda : \Omega ^k \to B({\mathcal H})$ be NC and k-linear. Then $\Lambda $ is a homogeneous free polynomial of degree k.

2 Preliminaries

Throughout this section, we assume that $\Omega $ is an NC domain in ${\mathcal A}^d$ , and $F: \Omega \to B({\mathcal H})$ is an NC function. Let ${\mathbb N}^+$ denote the positive integers.

For each $n \in {\mathbb N}^+$ , define the unitary and similarity envelopes by

$$ \begin{align*} \widehat{\Omega}_n & \ := \{ U^* x U\ | \ U : {\mathcal H}^{(n)} \to {\mathcal H}, \mathrm{\ unitary},\ x \in \Omega \}, \\ \widetilde{\Omega}_n & \ := \{ S^{-1} x S \ |\ S : {\mathcal H}^{(n)} \to {\mathcal H}, \mathrm{\ invertible},\ x \in \Omega \}. \end{align*} $$

Notably, for $x_1,\dots , x_n\in \Omega $ , $\oplus _{j=1}^n x_j\in \widehat {\Omega }_n$ . We can extend F to $\widetilde {\Omega } = \cup _{n=1}^\infty \widetilde {\Omega }_n$ by

(2.1) $$ \begin{align} \widetilde{F} ( \tilde{x}) \ = \ S F(x) S^{-1}, \end{align} $$

where $\tilde {x} = S^{-1}xS$ for some $x\in \Omega $ .

It is straightforward to prove the following from the intertwining preserving property of F. Nevertheless, we include a proof to showcase the simplicity of working with $\widetilde {F}$ in lieu of F.

Proposition 2.1 The function $\widetilde {F}$ defined by (2.1) is well defined, and if $\tilde {x} \in \widetilde {\Omega }_m $ and $\tilde {y} \in \widetilde {\Omega }_n $ satisfy $ \tilde {L} \tilde {x} = \tilde {y} \tilde {L}$ for some linear $\tilde {L} : {{\mathcal H}}\otimes {\mathbb C^m} \to {{\mathcal H}}\otimes {\mathbb C^n}$ , then $\tilde {L} \widetilde {F} ( \tilde {x}) = \widetilde {F} ( \tilde {y}) \tilde {L} $ .

In particular, if $x_j \in \Omega $ for $1 \leq j \leq n$ , then $\widetilde {F} (\oplus x_j) = \oplus F(x_j)$ .

Proof Let $\tilde {x} = S^{-1}xS$ and $\tilde {y} = T^{-1}yT$ for $x,y\in \Omega $ . Define $L:{\mathcal H}\to {\mathcal H}$ by $L = T\tilde {L}S^{-1}$ and consider the following intertwining:

$$ \begin{align*} Lx &= T\tilde{L}S^{-1}x = T\tilde{L}S^{-1}S\tilde{x}S^{-1} \\ &= T\tilde{L}\tilde{x}S^{-1} = T\tilde{y}\tilde{L}S^{-1} = TT^{-1}yT\tilde{L}S^{-1} \\ &= yL. \end{align*} $$

Thus, $LF(x) = F(y)L$ and consequently

$$ \begin{align*} \tilde{L}\widetilde{F}(\tilde{x}) &= \tilde{L}S^{-1}F(x)S = T^{-1}LF(x)S = T^{-1}F(y)LS = \widetilde{F}(\tilde{y})T^{-1}LS \\ &= \widetilde{F}(\tilde{y})\tilde{L}. \end{align*} $$

Finally, let $P_j:{\mathcal H}\to {\mathcal H}^{(n)}$ be the inclusion of ${\mathcal H}$ into the $j{\text {th}}$ coordinate of ${\mathcal H}^{(n)}$ . Observe that $(\oplus _{i=1}^n x_i)P_j = P_j x_j$ . Hence, $\widetilde {F}(\oplus _{i=1}^n x_i)P_j = P_j \widetilde {F}(x_j) = P_j F(x_j)$ . Intertwining with $P_j^*$ gives $x_jP_j^* = P_j^*(\oplus _{i=1}^n x_i)$ , so $P_j^*F(x_j) = P_j^*\widetilde {F}(\oplus _{i=1}^n x_i)$ . Combining these two intertwinings shows that $\widetilde {F}(\oplus _{i=1}^n x_i)$ is a block diagonal operator and

$$\begin{align*}\widetilde{F}(\oplus_{i=1}^n x_i) = \oplus_{i=1}^n F(x_i).\\[-35pt] \end{align*}$$
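As an informal sanity check of the direct sum formula in Proposition 2.1, one can verify $p(\oplus _i x_i) = \oplus _i p(x_i)$ numerically for a hypothetical free polynomial $p(x) = x^2 + 2x$ (free polynomials are the prototypical NC functions):

```python
import numpy as np

def p(x):
    """A hypothetical one-variable free polynomial: p(x) = x^2 + 2x."""
    return x @ x + 2 * x

def direct_sum(*blocks):
    """Block-diagonal direct sum of square matrices."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i+k, i:i+k] = b
        i += k
    return out

rng = np.random.default_rng(2)
x1, x2 = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
# Proposition 2.1's direct sum formula, checked for the polynomial p:
assert np.allclose(p(direct_sum(x1, x2)), direct_sum(p(x1), p(x2)))
```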

For later use, let us give a sort of converse.

Lemma 2.2 Suppose that $\Omega $ is an NC domain, and $F : \Omega \to B({\mathcal H})$ satisfies

(2.2) $$ \begin{align} F (S^{-1} [ x \oplus y] S ) = S^{-1} [F (x) \oplus F(y) ] S \end{align} $$

whenever $S: {\mathcal H} \to {\mathcal H}^{(2)}$ and $x,y, S^{-1} [ x \oplus y] S \in \Omega $ . Then F is intertwining preserving.

Proof Suppose $Lx = yL$ . Let

$$\begin{align*}S = \begin{bmatrix} 1 & L \\ 0 & 1 \end{bmatrix} , \end{align*}$$

Then (2.2) implies that $L F(x) = F(y) L$ .▪

Recall that F is Fréchet holomorphic if, for every $x \in \Omega $ , there is an open neighborhood G of $0$ in ${\mathcal A}^d$ so that the Taylor series

(2.3) $$ \begin{align} F(x+h) = F(x) + \sum_{k=1}^\infty D^k F(x) [h, \dots, h] \end{align} $$

converges uniformly for h in G.

Using (1.1), it follows that if $x_1, \dots , x_n \in \Omega $ , then there exists $\varepsilon > 0$ so that if $y \in {\mathcal T}_n({\mathcal A})$ and $\| y - \oplus x_j \| < \varepsilon $ , then $U_n y U_n^* \in \Omega $ . The following is proved in [Reference Agler and McCarthy2], and, in the form stated, in [Reference Agler, McCarthy and Young3, Section 16.1].

Proposition 2.3 If $\Omega \subset {\mathcal A}^d$ is an NC domain and F is an NC function on $\Omega $ , then:

  1. (i) The function F is Fréchet holomorphic.

  2. (ii) For $x \in \Omega $ , $h \in {\mathcal A}$ ,

    $$\begin{align*}\widetilde{F} \left( \begin{bmatrix} x & h \\ 0 & x \end{bmatrix} \right) = \begin{bmatrix} F(x) & DF(x)[h]\\ 0 & F(x) \end{bmatrix}. \end{align*}$$
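Part (ii) of Proposition 2.3 can be checked by hand for free polynomials. For the hypothetical example $p(x) = x^2$, one has $Dp(x)[h] = xh + hx$, and squaring the $2\times 2$ block matrix reproduces the displayed formula. An informal NumPy verification:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
x, h = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Assemble the block matrix [[x, h], [0, x]].
X = np.block([[x, h], [np.zeros((n, n)), x]])

def p(m):
    return m @ m  # p(x) = x^2, with derivative Dp(x)[h] = xh + hx

P = p(X)
# Diagonal blocks are p(x); the (1,2) block is the derivative Dp(x)[h].
assert np.allclose(P[:n, :n], p(x))
assert np.allclose(P[n:, n:], p(x))
assert np.allclose(P[:n, n:], x @ h + h @ x)
```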

We wish to prove that when x is a scalar point, each derivative in (2.3) is actually a free polynomial in h. This is straightforward for the first derivative.

Lemma 2.4 Suppose $a = (a^1, \dots , a^d)$ is a d-tuple of scalar matrices in $\Omega $ . Then $F(a)$ is a scalar, and $DF(a)[c]$ is scalar for any scalar d-tuple c.

Proof For any $L \in B({\mathcal H})$ , since $La = aL$ , we have $LF(a) = F(a) L$ . Therefore, $F(a)$ is a scalar. For all t sufficiently close to $0$ , $a + tc$ is in $\Omega $ and $F(a+tc)- F(a)$ is scalar, and therefore $DF(a)[c]$ is scalar.▪

Lemma 2.5 Suppose $a = (a^1, \dots , a^d)$ is a d-tuple of scalar matrices in $\Omega $ . Then $DF(a)[h]$ is a linear polynomial in h.

Proof First assume that $h = (h_1, 0, \dots , 0)$ . Let $\varepsilon> 0$ be such that the closed $ \max (\varepsilon , \varepsilon \| h_1\|)$ ball around $a \oplus a$ is in $\widetilde {\Omega }$ . Let $J = (1, 0, \dots , 0)$ be the scalar d-tuple with first entry $1$ , the others $0$ . As

$$\begin{align*}\begin{bmatrix} 1 & 0 \\ 0 & h_1 \end{bmatrix} \ \begin{bmatrix} a & \varepsilon h \\ 0 & a \end{bmatrix} = \begin{bmatrix} a & \varepsilon J \\ 0 & a \end{bmatrix} \ \begin{bmatrix} 1 & 0 \\ 0 & h_1 \end{bmatrix} , \end{align*}$$

we get from Proposition 2.1 that

(2.4) $$ \begin{align} DF(a)[h] = DF(a)[J]\ h_1. \end{align} $$

By Lemma 2.4, $DF(a)[J]$ is a scalar, $c_1$ say, so we get

$$\begin{align*}DF(a) [ (h_1, 0, \dots, 0)] = c_1 h_1. \end{align*}$$

Permuting the coordinates and using the fact that $DF(a)[h]$ is linear in h, we get that, for any h,

$$\begin{align*}DF(a) [h] = \sum_{r=1}^d c_r h_r \end{align*}$$

for some constants $c_r$ .▪

3 Derivatives of NC functions are free polynomials

The derivatives are defined inductively, by

(3.1) $$ \begin{align} \nonumber &{ D^k F(x) [h_1, \dots, h_k] = } \\ & \lim_{\lambda \to 0} \frac{1}{\lambda} \left( D^{k-1}F(x + \lambda h_k)[h_1,\dots, h_{k-1}] - D^{k-1}F(x)[h_1, \dots, h_{k-1}] \right). \end{align} $$
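For a free polynomial, the limit in (3.1) can be approximated by a finite difference. An informal numerical check with the hypothetical example $p(x) = x^3$, whose first derivative is $Dp(x)[h] = x^2h + xhx + hx^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
x, h = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

def p(m):
    return m @ m @ m  # p(x) = x^3

# Exact first derivative of x^3 in the direction h:
exact = x @ x @ h + x @ h @ x + h @ x @ x

# One-sided difference quotient from (3.1) with a small real lambda:
lam = 1e-6
approx = (p(x + lam * h) - p(x)) / lam
assert np.linalg.norm(approx - exact) < 1e-3
```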

The $k\mathrm {th}$ derivative is k-linear in $h_1, \dots , h_k$ . To extend Lemma 2.5 to higher derivatives, we need to introduce some other operators, called nc difference-differential operators in [Reference Kaliuzhnyi-Verbovetskyi and Vinnikov16].

$\Delta ^k F( x_1, \dots , x_{k+1}) [ h_1, \dots , h_k]$ is defined to be the $(1, k+1)$ entry in the matrix

(3.2) $$ \begin{align} \widetilde{F} \left( \begin{bmatrix} x_1 & h_1 & 0 & 0 & \dots & 0 \\ 0 & x_2 & h_2 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \dots & x_{k+1} \end{bmatrix} \right). \end{align} $$

We shall show in Lemma 3.2 that it is k-linear in $[h_1, \dots , h_k]$ .

The operators $\Delta ^k$ arise when applying $\widetilde {F}$ to a bidiagonal matrix. This is proved in [Reference Kaliuzhnyi-Verbovetskyi and Vinnikov16, Theorem 3.11].

Lemma 3.1 Let F be NC. Then,

(3.3) $$ \begin{align} & \hspace{-5pc} { \widetilde{F} \left( \begin{bmatrix} x_1 & h_1 & 0 & \dots & 0 \\ 0 & x_2 & h_2 & \dots & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots \\ 0 & 0 & 0 & \dots & x_{k+1} \end{bmatrix} \right) } && \end{align} $$
(3.4) $$ \begin{align} &\qquad= \begin{bmatrix} F(x_1) & \Delta^1 F(x_1, x_2) [h_1] & \dots & \Delta^k F(x_1, \dots, x_{k+1})[h_1, \dots, h_k] \\ 0 & F(x_2) & \dots & \Delta^{k-1} F(x_2, \dots, x_{k+1})[h_2, \dots, h_k] \\ \vdots & \vdots & \dots & \vdots \\ 0 & 0 & \dots & F(x_{k+1}) \end{bmatrix}. \end{align} $$

Proof We will prove this by induction. For $k=1$ , it is the definition of $\Delta ^1$ . Assume that it is proved for $k-1$ . Let $I_k$ denote the k-by-k matrix with diagonal entries the identity, and off-diagonal entries $0$ . As

$$\begin{align*}\begin{bmatrix} x_1 & h_1 & 0 & \dots & 0 \\ 0 & x_2 & h_2 & \dots & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots \\ 0 & 0 & 0 & \dots & x_{k+1} \end{bmatrix} \ \begin{bmatrix} I_k \\ 0 \end{bmatrix} _{(k+1) \times k } = \begin{bmatrix} I_k \\ 0 \end{bmatrix} _{(k+1) \times k } \ \begin{bmatrix} x_1 & h_1 & 0 & \dots & 0 \\ 0 & x_2 & h_2 & \dots & 0 \\ \vdots & \vdots & \vdots & \dots & \vdots \\ 0 & 0 & 0 & \dots & x_{k} \end{bmatrix}, \end{align*}$$

we conclude that the first k columns of (3.3) agree with those of (3.4). Similarly, intertwining by $ \begin {bmatrix} 0 & I_k \end {bmatrix} $ , we get that the bottom k rows agree. Finally, the $(1,(k+1))$ entry is the definition of $\Delta ^k$ .▪

A key property we need is that $\Delta ^k$ is k-linear in the directions. In the nc case, this is proved in [Reference Kaliuzhnyi-Verbovetskyi and Vinnikov16, Section 3.5].

Lemma 3.2 Let $x_1, \dots , x_{k+1} \in \Omega $ . Then $\Delta ^k F(x_1, \dots , x_{k+1})[h_1, \dots , h_k]$ is k-linear in $h_1, \dots , h_{k}$ .

Proof Let us write $\Delta ^k[h_1, \dots , h_k]$ for $\Delta ^k F(x_1, \dots , x_{k+1})[h_1, \dots , h_k]$ .

(i) First, we show that this is linear with respect to $h_1$ . Homogeneity follows from observing that

$$\begin{align*}\begin{bmatrix} x_1 & c h_1 & 0 & \dots \\ 0 & x_2 & h_2 & \dots \\ \vdots & \vdots & \vdots & \dots \\ \end{bmatrix} \begin{bmatrix} c & 0 & 0 \\ 0 & 1 & \\ \vdots & \vdots & \ddots \end{bmatrix} = \begin{bmatrix} c & 0 & 0 \\ 0 & 1 & \\ \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} x_1 & h_1 & 0 & \dots \\ 0 & x_2 & h_2 & \dots \\ \vdots & \vdots & \vdots & \dots \\ \end{bmatrix} \end{align*}$$

and using the intertwining preserving property Proposition 2.1.

To show additivity, let $p \geq 1$ and $q \geq 0$ be integers. Let Y be the $(p+k+q)\times (p+k+q)$ matrix

$$\begin{align*}Y = \begin{bmatrix} x_1 & 0 & \dots && h_1 & \dots && 0 & \dots \\ 0 & x_1 & \dots && 0 & \dots && 0 & \dots \\ & & \ddots && & \dots && & \dots \\ 0 & \dots && x_1 & h_1' & \dots && 0 & \dots \\ &&&0 & x_2 & h_2 & \dots & 0 & \dots\\ &&&&&\ddots &&&\\ &&&&& &x_{k+1}&0&\\ &&&&& &&x_{k+1}&\\ &&&&& &&&\ddots \end{bmatrix}. \end{align*}$$

Let L be the $(k+1) \times (p+k+q)$ matrix

$$\begin{align*}L = \begin{bmatrix} 1 & 0 & \dots& 0 & \dots & & &0 & \dots &0 \\ 0 & 0 & \dots& 1 & 0 & \dots & &0 &&\\ 0 & \dots && 0 & 1 && &0&&\\ 0 & \dots& & & &\ddots& &&&\\ 0 & \dots& & & && 1 &0& \dots &1\\ \end{bmatrix}. \end{align*}$$

Let X be the $(k+1)\times (k+1)$ matrix

$$\begin{align*}X = \begin{bmatrix} x_1 & h_1 & 0 & 0 & \dots & 0 \\ 0 & x_2 & h_2 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \dots & x_{k+1} \end{bmatrix}. \end{align*}$$

Then,

$$\begin{align*}LY = \begin{bmatrix} x_1 & \dots & h_1 & 0 & \dots &&0&\dots \\ 0 & \dots & x_2 & h_2 & \dots & && \\ & & & \ddots & &\\ 0 & \dots & 0 & \dots & &x_{k+1} & 0& \dots & x_{k+1} \end{bmatrix} = XL. \end{align*}$$

Therefore, the $(1,p+k+q)$ entry of $\widetilde {F}(Y)$ is $\Delta ^k[h_1, \dots , h_k]$ .

Let $L'$ be the matrix obtained by replacing the first row of L with the row that is $1$ in the $p\mathrm {th}$ entry and $0$ elsewhere. Then $L'Y = X' L'$ , where $X'$ is X with $h_1$ replaced by the d-tuple $h_1'$ . This gives that the $(p, p+k+q)$ entry of $\widetilde {F}(Y)$ is $\Delta ^k[h_1', \dots , h_k]$ .

Now, let $L''$ be the matrix obtained by replacing the first row of L with a row that has a $1$ in both the first and $p\mathrm {th}$ entries, and let $X''$ be X with $h_1$ replaced by $h_1 + h_1'$ . Then $L'' Y = X'' L''$ , and we conclude that

$$\begin{align*}\Delta^k[h_1, \dots , h_k] + \Delta^k[h_1', \dots , h_k] = \Delta^k[h_1 + h_1', \dots , h_k]. \end{align*}$$

Therefore, $\Delta ^k$ is linear in the first entry.

(ii) To prove that $\Delta ^k$ is linear in the $i\mathrm {th}$ entry, for $i \geq 2$ , choose $p,q$ so that

$$\begin{align*}p + i -1 = k - i + 1 + q. \end{align*}$$

Then Y decomposes into a $2 \times 2$ block of $(p+i-1) \times (p+i-1)$ matrices.

$$\begin{align*}Y = \begin{bmatrix} A & B \\ 0 & D \end{bmatrix}. \end{align*}$$

Moreover, B is the matrix whose bottom left-hand entry is $h_i$ , and everything else is $0$ . Therefore,

$$\begin{align*}\widetilde{F}(Y) = \begin{bmatrix} \widetilde{F}(A) & \Delta^1 \widetilde{F}(A, D)[B] \\ 0 & \widetilde{F}(D) \end{bmatrix} , \end{align*}$$

and $ \Delta ^1 \widetilde {F}(A, D)[B]$ is linear in B (and hence in $h_i$ ) by part (i). Therefore, the $(1,p+k+q)$ entry of $\widetilde {F}(Y)$ , which we have established is $\Delta ^k[h_1, \dots , h_k]$ , is linear in $h_i$ , as desired.▪

Lemma 3.3 Suppose $a = (a_1, \dots , a_{k+1})$ is a $(k+1)$ -tuple of points in $\Omega $ , each of which is a d-tuple of scalars. Then $\Delta ^kF(a_1, \dots , a_{k+1})[h_1, \dots , h_{k}]$ is a free polynomial in h, homogeneous of degree k.

Proof Let us write $\Delta ^k[h_1, \dots , h_k]$ for $\Delta ^kF(a_1, \dots , a_{k+1})[h_1, \dots , h_{k}]$ . By Lemma 3.2, we know that $\Delta ^k [h_1, \dots , h_{k}]$ is k-linear. So we can assume that each $h_i$ is a d-tuple with only one nonzero entry. Say $h_i = H_i e_{j_i}$ , where $e_{j_i}$ is the d-tuple that is $1$ in the $j_i$ slot, $0$ else, and $H_i$ is an operator.

Claim:

(3.5) $$ \begin{align} \Delta^k[ H_1 e_{j_1}, H_2 e_{j_2} , \dots ] = H_1 H_2 \dots H_k \Delta^k[ e_{j_1}, e_{j_2} , \dots, e_{j_k} ]. \end{align} $$

This follows from the intertwining

$$ \begin{align*} &{ \begin{bmatrix} H_1 H_2 \dots H_k & 0 & 0 & \dots \\ 0 & H_2 \dots H_k & 0 & \dots & \\ && \ddots & \\ &&& 1 \end{bmatrix} \begin{bmatrix} a_1 & e_{j_1} & 0 & \dots \\ 0 & a_2 & e_{j_2} & \dots \\ && \ddots &\\ &&&a_{k+1}\end{bmatrix} } && \\ &= \begin{bmatrix} a_1 & e_{j_1} H_1 & 0 & \dots \\ 0 & a_2 & e_{j_2} H_2 & \dots \\ && \ddots &\\ &&&a_{k+1} \end{bmatrix} \begin{bmatrix} H_1 H_2 \dots H_k & 0 & 0 & \dots \\ 0 & H_2 \dots H_k & 0 & \dots & \\ && \ddots & \\ &&& 1 \end{bmatrix}. \end{align*} $$

Let

$$\begin{align*}X = \begin{bmatrix} a_1 & e_{j_1} H_1 & 0 & \dots \\ 0 & a_2 & e_{j_2} H_2 & \dots \\ && \ddots &\\ &&&a_{k+1} \end{bmatrix}. \end{align*}$$

As X is a d-tuple of $(k+1)\times (k+1)$ matrices of scalars, it commutes with any $(k+1)\times (k+1)$ matrix that has a constant operator L on the diagonal. Therefore, L commutes with $\Delta ^k[ e_{j_1}, e_{j_2} , \dots , e_{j_k} ]$ . As L is arbitrary, it follows that $\Delta ^k[ e_{j_1}, e_{j_2} , \dots , e_{j_k} ]$ is a scalar. So, from (3.5), we get that $\Delta ^k[ H_1 e_{j_1}, H_2 e_{j_2} , \dots ] $ is a constant times $H_1 H_2 \dots H_k$ , and by linearity, we are done.▪

Now, we relate $\Delta ^k$ to $D^k$ .

Lemma 3.4 Let F be NC. Then,

$$\begin{align*}\Delta^k F(x, \dots, x) [h, \dots , h] = \frac{1}{k!} D^kF(x)[h, \dots, h]. \end{align*}$$

Proof Let T be the upper-triangular Toeplitz matrix given by

$$\begin{align*}T\ = \begin{bmatrix} 1 & \frac{1}{\lambda} & \frac{1}{2! \lambda^2} & \dots & \frac{1}{k! \lambda^k} \\ \\ 0 & 1 & \frac{1}{\lambda} & \dots & \frac{1}{(k-1)! \lambda^{k-1}} \\ \vdots &\vdots & \vdots &&\vdots \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}. \end{align*}$$

Its inverse is

$$\begin{align*}T^{-1} = \begin{bmatrix} 1 & \frac{-1}{\lambda} & \frac{1}{2! \lambda^2} & \dots & \frac{(-1)^k}{k! \lambda^k} \\ \\ 0 & 1 & \frac{1}{\lambda} & \dots & \frac{(-1)^{k-1}}{(k-1)! \lambda^{k-1}} \\ \vdots &\vdots & \vdots &&\vdots \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}. \end{align*}$$

We have, componentwise in x and h,

$$\begin{align*}T \ \begin{bmatrix} x & 0 & 0 & \dots & 0 \\ 0 & x+\lambda h & 0 & \dots & 0 \\ \vdots &\vdots & \vdots &&\vdots \\ 0 & 0 & 0 & \dots & x+ k\lambda h \end{bmatrix} \ T^{-1} = \begin{bmatrix} x & h & 0 & \dots & 0 \\ 0 & x + \lambda h & h & \dots & 0 \\ \vdots &\vdots & \vdots &&\vdots \\ 0 & 0 & 0 & \dots & x + k\lambda h \end{bmatrix}. \end{align*}$$

Therefore,

$$\begin{align*}\Delta^k F(x, x+\lambda h, \dots, x + k\lambda h)[h,h,\dots,h] = \frac{(-1)^k}{k!\lambda^k} \sum_{j=0}^k (-1)^j { k \choose j} F(x + j\lambda h). \end{align*}$$

Take the limit as $\lambda \to 0$ , and the right-hand side converges to

$$\begin{align*}\frac{1}{k!} D^k F (x) [h, h, \dots, h]. \end{align*}$$

By continuity, the left-hand side converges to $\Delta ^k F (x,\dots , x)[h, \dots , h]$ .▪
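Lemma 3.4 can be checked concretely for free polynomials. For the hypothetical example $p(x) = x^3$ with $k = 2$, the $(1,3)$ block of $p$ applied to the bidiagonal matrix is $\Delta ^2 p(x,x,x)[h,h]$, and it should equal $\frac {1}{2}D^2p(x)[h,h] = xh^2 + hxh + h^2x$. An informal NumPy verification:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
x, h = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Z = np.zeros((n, n))

def p(m):
    return m @ m @ m  # p(x) = x^3, with D^2 p(x)[h,h] = 2(xh^2 + hxh + h^2x)

# Bidiagonal matrix with x on the diagonal and h on the superdiagonal (k = 2).
X = np.block([[x, h, Z], [Z, x, h], [Z, Z, x]])
delta2 = p(X)[:n, 2*n:]          # (1, k+1) block = Delta^2 p(x,x,x)[h,h]

second_derivative = 2 * (x @ h @ h + h @ x @ h + h @ h @ x)
assert np.allclose(delta2, second_derivative / 2)   # Lemma 3.4 with k = 2
```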

Derivatives of NC functions are symmetric. The case $k = 2$ was proved in [Reference Augat4].

Proposition 3.5 Suppose F is an NC function and $k\geq 1$ is an integer. If $\sigma $ is any permutation in ${\mathfrak {S}}_k$ , then

$$\begin{align*}D^k F(x)[h_1,\dots, h_k] = D^kF(x)[h_{\sigma(1)},\dots, h_{\sigma(k)}] \end{align*}$$

for any x in the domain of F and for all $h_1,\dots , h_k\in {\mathcal A}^d$ .

Proof The case $k=1$ is trivial, and $k=2$ was proved in [Reference Augat4]. Assume that $k \geq 3$ and that the result holds for $k-1$ . If we can show that we can swap the last two entries

(3.6) $$ \begin{align} D^kF(x)[h_1,\dots, h_{k-1},h_k] = D^kF(x)[h_1,\dots, h_k,h_{k-1}] \end{align} $$

and also permute the first $k-1$ entries

(3.7) $$ \begin{align} D^kF(x)[h_1,\dots, h_{k-1},h_k] = D^kF(x)[h_{\sigma(1)},\dots, h_{\sigma(k-1)},h_k], \end{align} $$

then the result follows. Set $G = D^{k-2}F$ , and consider it as a function of $x,h_1,\dots , h_{k-2}$ . Then G is an NC function, and by the $k=2$ case,

$$ \begin{align*} D^2&G(x,h_1,\dots, h_{k-2})[(\ell_0,\dots,\ell_{k-2}),(\tilde{\ell}_0,\dots, \tilde{\ell}_{k-2})] \\ &= D^2G(x,h_1,\dots, h_{k-2})[(\tilde{\ell}_0,\dots, \tilde{\ell}_{k-2}),(\ell_0,\dots,\ell_{k-2})]. \end{align*} $$

Since

$$\begin{align*}D^2G(x,h_1,\dots,h_{k-2})[h_{k-1},0,\dots,0,h_{k},0,\dots, 0] = D^kF(x)[h_1,\dots,h_k], \end{align*}$$

we see that equation (3.6) holds. The induction hypothesis says that

(3.8) $$ \begin{align} D^{k-1}F(x)[h_1,\dots,h_{k-1}] = D^{k-1}F(x)[h_{\sigma(1)},\dots,h_{\sigma(k-1)}]. \end{align} $$

If $G' = D^{k-1}F$ is treated as a function of $x,h_1,\dots , h_{k-1}$ , then applying equation (3.8), we have

$$ \begin{align*} D^kF(x)[h_1,\dots, h_{k-1},h_k] &= DG'(x,h_1,\dots,h_{k-1})[h_k,0,\dots,0] \\ &= DG'(x,h_{\sigma(1)},\dots, h_{\sigma(k-1)})[h_k,0,\dots,0]\\ &= D^kF(x)[h_{\sigma(1)},\dots, h_{\sigma(k-1)}, h_k]. \end{align*} $$

Thus, both equations (3.6) and (3.7) hold. Therefore, the $k{\text {th}}$ derivative of F is symmetric in its arguments.▪

Combining Lemmas 3.3 and 3.4 and Proposition 3.5, we get our first main result.

Theorem 3.6 Suppose $\Omega $ is an NC domain that contains a scalar point a and F is an NC function on $\Omega $ . Then, for each k, the $k\mathrm {th}$ derivative $D^kF(a) [ h_1, \dots , h_k]$ is a homogeneous polynomial of degree k, it is k-linear, and it is symmetric with respect to the action of ${\mathfrak {S}}_k$ .

Proof We know that $D^kF (a) [h_1, \dots , h_k]$ is k-linear, so we can assume that each $h_i$ is a d-tuple with only one nonzero entry; we can write $h_i = H_i e_{j_i}$ , as in the proof of Lemma 3.3. We want to show that

(3.9) $$ \begin{align} D^k F (a) [ H_1 e_{j_1}, \dots, H_k e_{j_k} ] \end{align} $$

is a homogeneous polynomial of degree k in the operators $H_1, \dots , H_k$ . Let $s_i$ be scalars for $1 \leq i \leq k$ , and consider

(3.10) $$ \begin{align} D^k F (a) [ s_1 H_1 e_{j_1} + \dots + s_k H_k e_{j_k}, s_1 H_1 e_{j_1} + \dots + s_k H_k e_{j_k}, \dots ]. \end{align} $$

Since all the arguments are the same, Lemma 3.4 shows that this agrees with $k!$ times $\Delta ^k$ , which by Lemma 3.3 is a homogeneous polynomial of degree k. Group the terms in (3.10) according to their monomial in the commuting scalars $s_1, \dots , s_k$ , and consider the sum of the terms in (3.10) that are a multiple of $s_1 \dots s_k$ . These correspond to

(3.11) $$ \begin{align} \sum_{\sigma \in {\mathfrak{S}}_k} D^k F (a) [ H_{\sigma(1)} e_{j_{\sigma(1)}}, \dots, H_{\sigma(k)} e_{j_{\sigma(k)}} ]. \end{align} $$

By Proposition 3.5, (3.11) is just $k!$ times (3.9), and hence this is a homogeneous polynomial in $H_1, \dots , H_k$ , as desired.▪

4 Approximating NC functions by free polynomials

The results in this section are an improvement over those in [Reference Agler and McCarthy2], as they do not need the a priori assumption that the function is sequentially strong operator continuous. Recall that a set $\Omega $ in a vector space is balanced if $\alpha \Omega \subseteq \Omega $ whenever $\alpha $ is a complex number of modulus at most $1$ . Importantly, $\mathcal {P}({\mathcal A})$ and $\mathcal {R}({\mathcal A})$ are balanced.

If $\Omega $ contains a scalar point $\alpha $ , and F is NC on $\Omega $ , then F is given by a convergent series of free Taylor polynomials near $\alpha $ . For convenience, we assume that $\alpha = 0$ .

Lemma 4.1 Let $\Omega $ be an NC domain containing $0$ , and let F be an NC function on $\Omega $ . Then there is an open set $\Upsilon \subset \Omega $ containing $0$ , and homogeneous free polynomials $p_k$ of degree k so that

(4.1) $$ \begin{align} F(x) = \sum_{k=0}^\infty p_k(x) \quad \forall \ x \in \Upsilon , \end{align} $$

and the convergence is uniform in $\Upsilon $ .

Proof By Proposition 2.3, we know that F is Fréchet holomorphic at $0$ , and by Theorem 3.6, we know that the $k\mathrm {th}$ derivative is a homogeneous polynomial $p_k$ of degree k. Therefore, (4.1) holds.▪
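A concrete instance of the expansion (4.1), offered as an informal aside: for $d = 1$ and $F(x) = (1-x)^{-1}$ on the NC disk, the homogeneous terms are $p_k(x) = x^k$, and the series converges geometrically when $\|x\| < 1$:

```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.standard_normal((4, 4))
x = 0.5 * a / np.linalg.norm(a, 2)     # ||x|| = 0.5, inside the NC disk

Fx = np.linalg.inv(np.eye(4) - x)      # F(x) = (1 - x)^{-1}
# Partial sum of the homogeneous terms p_k(x) = x^k; the tail is O(0.5^60).
partial = sum(np.linalg.matrix_power(x, k) for k in range(60))
assert np.linalg.norm(Fx - partial, 2) < 1e-12
```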

Theorem 4.2 Let $\Omega $ be a balanced NC domain, and $F: \Omega \to B({\mathcal H})$ . The following statements are equivalent.

  1. (i) The function F is NC.

  2. (ii) There is a power series expansion $\sum _{k=0}^\infty p_k(x)$ that converges absolutely and locally uniformly at each point $x \in \Omega $ to $F(x)$ such that each $p_k$ is a homogeneous free polynomial of degree k.

  3. (iii) For any triple of points in $ \Omega $ , there is a sequence of free polynomials that converges uniformly to F on a neighborhood of each point in the triple.

Proof $(i) \Rightarrow (ii){:}$ By Lemma 4.1, F is given by a power series expansion (4.1) in a neighborhood of $0$ . We must show that this series converges absolutely on all of $\Omega $ .

Let $x \in \Omega $ . Since $\Omega $ is open and balanced, there exists $r> 1$ so that $\mathbb D(0,r) x \subseteq \Omega $ . Define a function $f: \mathbb D(0,r) \to B({\mathcal H})$ by

$$\begin{align*}f(\zeta) = F ( \zeta x). \end{align*}$$

Then f is holomorphic, and so norm continuous [Reference Rudin22, Theorem 3.31]. Therefore,

$$\begin{align*}\sup \left\{ \| f (\zeta) \| \, : \, | \zeta | = \frac{1+r}{2} \right\} \ =: \ M \ \ < \infty. \end{align*}$$

By the Cauchy integral formula,

$$\begin{align*}\| p_k(x) \| = \frac{1}{k!} \left\| \frac{d^k}{d \zeta^k} f (\zeta) \big\lvert_0 \right\| \ \leq \ M \left(\frac{2}{1+r}\right)^k. \end{align*}$$

Therefore, the power series $\sum p_k(x)$ converges absolutely, to $f(1) = F(x)$ .

Since F is NC, it is bounded on some neighborhood of x, and by the Cauchy estimate again, the convergence of the power series is uniform on that neighborhood.
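The Cauchy estimate in this step can be checked numerically in a concrete case. The sketch below is our own illustration, not part of the proof: it takes the NC function $F(y) = (1-y)^{-1}$ of a single variable, for which the homogeneous terms at a matrix point $x$ are $p_k(x) = x^k$, and verifies $\|p_k(x)\| \leq M \rho^{-k}$ with $M$ the supremum of $\|f\|$ on the circle of radius $\rho$ (here $\rho = 3/2$, and the matrix size is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
x = rng.standard_normal((n, n))
x *= 0.5 / np.linalg.norm(x, 2)  # ||x|| = 1/2, so f(z) = (1 - z x)^{-1}
                                 # is holomorphic on the disk |z| < 2

rho = 1.5  # radius of the circle used in the Cauchy integral, rho < 2
thetas = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
# M = sup of ||f|| on the circle |z| = rho (sampled)
M = max(np.linalg.norm(np.linalg.inv(np.eye(n) - rho * np.exp(1j * t) * x), 2)
        for t in thetas)

# For F(y) = (1 - y)^{-1} the homogeneous terms are p_k(x) = x^k,
# and the Cauchy estimate gives ||p_k(x)|| <= M rho^{-k}
for k in range(10):
    assert np.linalg.norm(np.linalg.matrix_power(x, k), 2) <= M / rho**k + 1e-12
```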

$(ii) \Rightarrow (iii){:}$ Let $x_1, x_2, x_3 \in \Omega $ . Let $q_k = \sum _{j=0}^k p_j$ . Then $q_k(x)$ converges uniformly to $F(x)$ on an open set containing $\{ x_1, x_2, x_3 \}$ .

$(iii) \Rightarrow (i){:}$ Since F is locally uniformly approximable by free polynomials, it is locally bounded. To see that it is also intertwining preserving, we shall show that it satisfies the hypotheses of Lemma 2.2. Let $S: {\mathcal H} \to {\mathcal H}^{(2)}$ be invertible, and assume that $x,y$ , and $z = S^{-1}[x \oplus y] S$ are all in $\Omega $ . Let $q_k$ be a sequence of free polynomials that approximate F on $\{ x,y,z \}$ . Then,

$$ \begin{align*} F \left( S^{-1} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix} S \right) &= \lim_k q_k \left( S^{-1} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix} S \right) \\ &= \lim_k \left( S^{-1} \begin{bmatrix} q_k(x) & 0 \\ 0 & q_k(y) \end{bmatrix} S \right) \\ &= S^{-1} \begin{bmatrix} F(x) & 0 \\ 0 & F( y) \end{bmatrix} S. \end{align*} $$

So, by Lemma 2.2, F is intertwining preserving.▪
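The algebraic fact underlying the display above — that a free polynomial respects conjugated direct sums — can be checked directly in a small matrix example. The sketch below is our own illustration; the polynomial p and the matrix sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def p(x1, x2):
    # an illustrative free polynomial in two noncommuting variables
    return x1 @ x2 - x2 @ x1 + x1 @ x1 @ x2

def direct_sum(a, b):
    return np.block([[a, np.zeros((n, n))], [np.zeros((n, n)), b]])

x = [rng.standard_normal((n, n)) for _ in range(2)]
y = [rng.standard_normal((n, n)) for _ in range(2)]
S = rng.standard_normal((2 * n, 2 * n))  # generically invertible

# z = S^{-1} (x oplus y) S, coordinate-wise
z = [np.linalg.solve(S, direct_sum(x[j], y[j])) @ S for j in range(2)]

lhs = p(*z)                                            # p(S^{-1}(x oplus y)S)
rhs = np.linalg.solve(S, direct_sum(p(*x), p(*y))) @ S  # S^{-1}(p(x) oplus p(y))S
assert np.allclose(lhs, rhs)
```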

The requirement that F be intertwining preserving forces $F(x)$ to always lie in the double commutant of x. However, if F is also locally bounded on a balanced domain containing x, we get a much stronger conclusion as a corollary of Theorem 4.2.

Corollary 4.3 Suppose F is an NC function on a balanced NC domain $\Omega $ . Then $F(x)$ is in the norm closed unital algebra generated by $\{ x^1, \dots , x^d \}$ .

5 k-linear NC functions

In the following theorem, we assume that $\Lambda $ is NC as a function of all $dk$ variables at once, and is k-linear if they are broken up into d-tuples. If we had an independent proof of Theorem 5.1, we could use it to prove Theorem 3.6 with the aid of Lemma 5.3. Instead, we deduce it as a consequence of Theorem 3.6.

Theorem 5.1 Let $\Omega $ be an NC domain. Let $\Lambda : \Omega ^k \to B({\mathcal H})$ be NC and k-linear. Then $\Lambda $ is a homogeneous free polynomial of degree k.

Proof Let ${\mathfrak h} = (h_1, \dots , h_k)$ be a k-tuple of d-tuples in $\Omega $ . Calculating, and using k-linearity, we get

$$ \begin{align*} D \Lambda (x) [{\mathfrak h}] &= \lim_{\lambda \to 0} \frac{1}{\lambda} [ \Lambda (x + \lambda {\mathfrak h}) - \Lambda (x) ] \\ &= \Lambda(h_1, x_2, \dots, x_k) + \Lambda(x_1, h_2, \dots, x_k ) + \dots + \Lambda(x_1, x_2, \dots, h_k). \end{align*} $$

Repeating this calculation, we get that $D^2 \Lambda (x) [ {\mathfrak h}, {\mathfrak h}]$ is $2!$ times the sum of $\Lambda $ evaluated at every k-tuple that has $k-2$ entries from $(x_1, \dots , x_k)$ and two entries from ${\mathfrak h}$ . Continuing, we get

(5.1) $$ \begin{align} D^k \Lambda(x) [ {\mathfrak h}, \dots, {\mathfrak h}] = k! \ \Lambda (h_1, \dots, h_k). \end{align} $$

By Theorem 3.6, the left-hand side of (5.1) is a homogeneous free polynomial of degree k, so the right-hand side is too.▪
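The identity (5.1) can be seen concretely for $k = 2$. In the sketch below (our own numerical illustration, with $d = 1$ and an arbitrary matrix size), $\Lambda(a,b) = ab$ is bilinear, so $t \mapsto \Lambda(x + t {\mathfrak h})$ is a quadratic polynomial in t and its second derivative at $0$ is computed exactly by a central difference with step $1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def Lam(a, b):
    # a bilinear NC function of two matrix variables (k = 2, d = 1)
    return a @ b

x = [rng.standard_normal((n, n)) for _ in range(2)]
h = [rng.standard_normal((n, n)) for _ in range(2)]

def f(t):
    # restriction of Lam to the line x + t h
    return Lam(x[0] + t * h[0], x[1] + t * h[1])

# f is quadratic in t, so the exact second derivative at 0 is the
# second central difference with step 1:
d2 = f(1.0) - 2.0 * f(0.0) + f(-1.0)

# (5.1) for k = 2: D^2 Lam(x)[h, h] = 2! Lam(h_1, h_2)
assert np.allclose(d2, 2.0 * Lam(h[0], h[1]))
```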

It is worth singling out a special case of Theorem 5.1.

Corollary 5.2 Let $\Lambda : [B({\mathcal H})]^{dk} \to B({\mathcal H})$ be k-linear, intertwining preserving, and bounded. Then $\Lambda $ is a homogeneous free polynomial of degree k.

Lemma 5.3 The $k\mathrm {th}$ derivative $D^kF(x)[h_1,\dots , h_k]$ is NC on $\Omega \times {\mathcal A}^{dk}$ . If $a \in \Omega $ is a scalar point, then $D^kF(a)[h_1, \dots , h_k]$ is NC on ${\mathcal A}^{dk}$ .

Proof The first assertion follows from induction, and the observation that difference quotients preserve intertwining. The second assertion follows from the fact that if a is scalar,

$$\begin{align*}D^k F (a) [ S^{-1} h_1 S, \dots , S^{-1} h_k S ] = D^k F (S^{-1}a S) [ S^{-1} h_1 S, \dots , S^{-1} h_k S ].\end{align*}$$

▪

6 Realization formulas

One can generalize Example 1.2. For $\delta $ a matrix of free polynomials, let

$$\begin{align*}B_\delta( {\mathcal A}) \ = \ \{ x \in {\mathcal A}^d : \| \delta(x) \| < 1 \}. \end{align*}$$

These sets are all NC domains. If

$$\begin{align*}\delta(x) = \begin{bmatrix} x^1 & 0 & \cdots & 0 \\ 0 & x^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & x^d \end{bmatrix} , \end{align*}$$

then $B_\delta ( {\mathcal A})$ is $\mathcal {P}({\mathcal A})$ from (1.3). If we set

$$\begin{align*}\delta(x) = ( x^1 \ x^2 \ \cdots \ x^d) , \end{align*}$$

then $B_\delta ( {\mathcal A})$ is $\mathcal {R}({\mathcal A})$ from (1.4).
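For a single matrix point, the two norms in these examples are easy to compute. The following sketch (our own illustration, with arbitrary sizes, and real matrices so that the adjoint is the transpose) confirms that the block-diagonal $\delta$ gives $\|\delta(x)\| = \max_j \|x^j\|$, while the row $\delta$ gives $\|\delta(x)\|^2 = \|\sum_j x^j (x^j)^*\|$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 2
x = [0.4 * rng.standard_normal((n, n)) for _ in range(d)]

# Polydisk: delta(x) = diag(x^1, ..., x^d), so ||delta(x)|| = max_j ||x^j||
delta_poly = np.block([
    [x[0], np.zeros((n, n))],
    [np.zeros((n, n)), x[1]],
])
norm_poly = np.linalg.norm(delta_poly, 2)  # operator (spectral) norm
assert np.isclose(norm_poly, max(np.linalg.norm(xj, 2) for xj in x))

# Row ball: delta(x) = (x^1 ... x^d), a row operator, so
# ||delta(x)||^2 = ||x^1 (x^1)^* + ... + x^d (x^d)^*||
delta_row = np.hstack(x)
norm_row = np.linalg.norm(delta_row, 2)
assert np.isclose(norm_row**2, np.linalg.norm(sum(xj @ xj.T for xj in x), 2))
```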

The sets $B_\delta ( {\mathcal A})$ are closed not just under finite direct sums, but countable direct sums, in the following sense.

Definition 6.1 A family $\{ E_k \}_{k=1}^\infty $ is an exhaustion of $\Omega $ if:

(1) $ E_k \subseteq \mathrm {int}(E_{k+1})$ for all k;

(2) $\Omega = \bigcup _{k=1}^\infty E_k $ ;

(3) each $ E_k$ is bounded;

(4) each $ E_k$ is closed under countable direct sums: if $(x_j)$ is a sequence in $ E_k$ , then there exists a unitary $U : {\mathcal H} \to {\mathcal H}^{(\infty )}$ such that

(6.1) $$ \begin{align} U^{-1} \ \begin{bmatrix} x_1 & 0 & \cdots \\ 0 & x_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} U \ \in \ E_k. \end{align} $$

If we set

$$\begin{align*}E_k = \{ x \in B_\delta( {\mathcal A}) : \| \delta(x) \| \leq 1 - 1/k,\ \mathrm{and\ } \| x \| \leq k \}, \end{align*}$$

then $\{ E_k \}_{k=1}^\infty $ is an exhaustion of $B_\delta ({\mathcal A})$ .

We have the following automatic continuity result for NC functions on balanced domains that have an exhaustion.

Theorem 6.2 Suppose $\Omega \subseteq {\mathcal A}^d$ is a balanced NC domain that has an exhaustion $(E_k)$ , and $F: \Omega \to B({\mathcal H})$ is NC and bounded on each $E_k$ . Suppose that, for some k, there is a sequence $(x_j)$ in $E_k$ that converges to $x \in E_k$ in the strong operator topology. Then $F(x_j)$ converges to $F(x)$ in the strong operator topology.

Proof Let $U : {\mathcal H} \to {\mathcal H}^{(\infty )}$ be a unitary so that $ U^{-1} [ \oplus x_j ] U= z \in E_k$ . Let $\Pi _j : {\mathcal H}^{(\infty )} \to {\mathcal H}$ be projection onto the $j\mathrm {th}$ component, and let $L_j = \Pi _j U$ . Then $L_j z = x_j L_j$ , and since F is intertwining preserving, $ F( z) = U^{-1}[ \oplus F(x_j) ]U. $

Let v be any unit vector, and let $\varepsilon> 0$ . By Theorem 4.2, there is a free polynomial p so that $\| p(x) - F(x) \| < \varepsilon /3$ and $\| p(z) - F(z) \| < \varepsilon /3$ . Since $p(z) - F(z) = U^{-1} [ \oplus \, ( p(x_j) - F(x_j) ) ] U$ , it follows that $\| p(x_j) - F(x_j) \| < \varepsilon /3$ for each j.

Now, choose N so that $j \geq N$ implies $\| [ p(x) - p(x_j) ] v \| < \varepsilon /3$ , which we can do because multiplication is continuous on bounded sets in the strong operator topology. Then we get for $j \geq N$ that

$$\begin{align*}\| [F(x) - F(x_j) ] v \| \ \leq \ \| F(x) - p(x) \| + \| [ p(x) - p(x_j) ] v \| + \| p(x_j) - F(x_j) \| \ \leq \ \varepsilon.\end{align*}$$

▪

Definition 6.3 Let $\delta $ be an $I \times J$ matrix of free polynomials, and $F: B_\delta ({\mathcal A}) \to B({\mathcal H})$ . A realization for F consists of an auxiliary Hilbert space ${\mathcal M}$ and an isometry

(6.2) $$ \begin{align} \begin{bmatrix}A&B\\C&D\end{bmatrix} \ : \mathbb C \oplus {\mathcal M}^{I} \to \mathbb C \oplus {\mathcal M}^{J} \end{align} $$

such that for all x in $B_\delta ({\mathcal A})$ ,

(6.3) $$ \begin{align} F(x) = {A}\otimes {1} + \big({B}\otimes{1}\big) \big({1}\otimes{\delta(x)}\big) \left[ 1 - \big({D}\otimes{1}\big) \big({1}\otimes{\delta(x)}\big) \right]^{-1} \big({C}\otimes{1}\big). \end{align} $$
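In the simplest situation ($d = 1$, $I = J = 1$, $\delta(x) = x$, and ${\mathcal M}$ finite dimensional), (6.3) reduces to the classical transfer-function realization of a Schur-class function of one scalar variable, $f(z) = A + zB(1 - zD)^{-1}C$, and contractivity is easy to test numerically. The sketch below is our own illustration; the dimension m is arbitrary, and we take the colligation unitary (a special case of an isometry).

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4  # dimension of the auxiliary space M

# Random unitary colligation V = [[A, B], [C, D]] on C + C^m
G = rng.standard_normal((m + 1, m + 1)) + 1j * rng.standard_normal((m + 1, m + 1))
V, _ = np.linalg.qr(G)  # Q-factor of a generic complex matrix is unitary
A, B, C, D = V[0, 0], V[0:1, 1:], V[1:, 0:1], V[1:, 1:]

def f(z):
    # scalar case of (6.3): delta(z) = z acts as z * I_m on M
    return (A + B @ (z * np.linalg.inv(np.eye(m) - z * D)) @ C)[0, 0]

# The realized function is bounded by 1 on the unit disk
zs = 0.99 * np.exp(2j * np.pi * rng.random(50)) * rng.random(50)
assert all(abs(f(z)) <= 1 + 1e-9 for z in zs)
```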

In [Reference Agler and McCarthy2], it was shown that if $B_\delta ( B({\mathcal H}))$ is connected and contains $0$ , then every NC function from $B_\delta (B({\mathcal H}))$ that is sequentially strong operator continuous (in the sense of Theorem 6.2) and bounded by $1$ has a realization. The strong operator continuity was needed to pass from a realization of $B_\delta $ in the matricial case given in [Reference Agler and McCarthy1] to a realization for operators. In light of Theorem 6.2, though, this hypothesis is automatically fulfilled. So we get the following corollary.

Corollary 6.4 Let $\delta $ be an $I \times J$ matrix of free polynomials, and $F: B_\delta (B({\mathcal H})) \to B({\mathcal H})$ satisfy $\sup \| F (x) \| \leq 1$ . Assume that $B_\delta (B({\mathcal H}))$ is balanced. Then F is NC if and only if it has a realization.

As another consequence, we get that every bounded NC function on $B_\delta ({\mathbb M})$ (by which we mean $\{ x \in {\mathbb M}^{[d]} : \| \delta ( x) \| < 1 \}$ ) has a unique extension to an NC function on $B_\delta (B({\mathcal H}))$ , where we embed ${\mathbb M}^{[d]}$ into $B({\mathcal H})^d$ by choosing a basis of ${\mathcal H}$ and identifying an n-by-n matrix with the finite rank operator that is $0$ outside the first n-by-n block.

Corollary 6.5 Assume that $B_\delta (B({\mathcal H}))$ is balanced. Then every NC bounded function f on $B_\delta ({\mathbb M})$ has a unique extension to an NC function on $B_\delta (B({\mathcal H}))$ .

Proof Suppose $F_1$ and $F_2$ are both extensions of f, and let $F = F_1 - F_2$ . As $0 \in B_\delta (B({\mathcal H}))$ and $\delta $ is continuous, there exists $r> 0$ so that $r \mathcal {P}(B({\mathcal H})) \subseteq B_\delta (B({\mathcal H}))$ .

Let $x \in r \mathcal {P}(B({\mathcal H}))$ . Then there exists a sequence $(x_j)$ in $r \mathcal {P}({\mathbb M})$ that converges to x in the strong operator topology. As $F(x_j) = 0$ for each j, by Theorem 6.2, we get $F(x) = 0$ . Therefore, F vanishes on an open subset of $B_\delta (B({\mathcal H}))$ . As F is holomorphic, and $B_\delta (B({\mathcal H}))$ is connected, we conclude that F is identically zero.▪

Question 6.6 Are the previous results true if $B_\delta (B({\mathcal H}))$ is not balanced?

If one has a realization formula (equation (6.3)) for $B_\delta ({\mathcal A})$ , then it automatically extends to $B_\delta (B({\mathcal H}))$ . We do not know how different choices of algebras ${\mathcal A}_1$ and ${\mathcal A}_2$ satisfying (1.1) affect the set of NC functions on their balls.

Footnotes

This research was partially supported by the National Science Foundation Grant DMS 2054199.

References

Agler, J. and McCarthy, J. E., Global holomorphic functions in several non-commuting variables. Can. J. Math. 67(2015), no. 2, 241–285.
Agler, J. and McCarthy, J. E., Non-commutative holomorphic functions on operator domains. European J. Math. 1(2015), no. 4, 731–745.
Agler, J., McCarthy, J. E., and Young, N. J., Operator analysis: Hilbert space methods in complex analysis, Cambridge Tracts in Mathematics, 219, Cambridge University Press, Cambridge, 2020.
Augat, M., Free potential functions. Preprint, 2020. https://arxiv.org/pdf/2005.01850.pdf
Ball, J. A., Marx, G., and Vinnikov, V., Noncommutative reproducing kernel Hilbert spaces. J. Funct. Anal. 271(2016), no. 7, 1844–1920.
Helton, J. W., Klep, I., and McCullough, S., Analytic mappings between noncommutative pencil balls. J. Math. Anal. Appl. 376(2011), no. 2, 407–428.
Helton, J. W., Klep, I., and McCullough, S., Proper analytic free maps. J. Funct. Anal. 260(2011), no. 5, 1476–1490.
Helton, J. W., Klep, I., and McCullough, S., Free analysis, convexity and LMI domains. In: Mathematical methods in systems, optimization, and control, Operator Theory: Advances and Applications, 222, Springer, Basel, 2012, pp. 195–219.
Helton, J. W., Klep, I., and McCullough, S., The tracial Hahn–Banach theorem, polar duals, matrix convex sets, and projections of free spectrahedra. J. Eur. Math. Soc. (JEMS) 19(2017), no. 6, 1845–1897.
Helton, J. W. and McCullough, S. A., A Positivstellensatz for non-commutative polynomials. Trans. Amer. Math. Soc. 356(2004), no. 9, 3721–3737 (electronic).
Ji, Z., Natarajan, A., Vidick, T., Wright, J., and Yuen, H., MIP* = RE. Preprint, 2020. arXiv:2001.04383
Jury, M., Klep, I., Mancuso, M. E., McCullough, S., and Pascoe, J. E., Noncommutative partial convexity via $\varGamma$ -convexity. J. Geom. Anal. 31(2021), no. 3, 3137–3160.
Jury, M. T. and Martin, R. T. W., Operators affiliated to the free shift on the free Hardy space. J. Funct. Anal. 277(2019), no. 12, Article no. 108285, 39 pp.
Jury, M. T. and Martin, R. T. W., Column extreme multipliers of the free Hardy space. J. Lond. Math. Soc. (2) 101(2020), no. 2, 457–489.
Jury, M. T., Martin, R. T. W., and Shamovich, E., Blaschke-singular-outer factorization of free non-commutative functions. Adv. Math. 384(2021), Article no. 107720, 42 pp.
Kaliuzhnyi-Verbovetskyi, D. S. and Vinnikov, V., Foundations of free non-commutative function theory, American Mathematical Society, Providence, RI, 2014.
Klep, I. and Schweighofer, M., Connes' embedding conjecture and sums of Hermitian squares. Adv. Math. 217(2008), no. 4, 1816–1837.
Klep, I. and Spenko, S., Free function theory through matrix invariants. Can. J. Math. 69(2017), no. 2, 408–433.
Mancuso, M. E., Inverse and implicit function theorems for noncommutative functions on operator domains. J. Operator Theory 83(2020), no. 2, 447–473.
Pascoe, J. E. and Tully-Doyle, R., Cauchy transforms arising from homomorphic conditional expectations parametrize noncommutative Pick functions. J. Math. Anal. Appl. 472(2019), no. 2, 1487–1498.
Procesi, C., The invariant theory of $n\times n$ matrices. Adv. Math. 19(1976), no. 3, 306–381.
Rudin, W., Functional analysis, McGraw-Hill, New York, 1991.
Salomon, G., Shalit, O. M., and Shamovich, E., Algebras of bounded noncommutative analytic functions on subvarieties of the noncommutative unit ball. Trans. Amer. Math. Soc. 370(2018), no. 12, 8639–8690.
Salomon, G., Shalit, O. M., and Shamovich, E., Algebras of noncommutative functions on subvarieties of the noncommutative ball: the bounded and completely bounded isomorphism problem. J. Funct. Anal. 278(2020), no. 7, Article no. 108427, 54 pp.
Taylor, J. L., A general framework for a multi-operator functional calculus. Adv. Math. 9(1972), 183–252.
Taylor, J. L., Functions of several noncommuting variables. Bull. Amer. Math. Soc. 79(1973), 1–34.
Voiculescu, D., Free analysis questions I: duality transform for the coalgebra of $\partial_{X:B}$ . Int. Math. Res. Not. IMRN 2004(2004), no. 16, 793–822.
Voiculescu, D.-V., Free analysis questions II: the Grassmannian completion and the series expansions at the origin. J. Reine Angew. Math. 645(2010), 155–236.