
Toeplitz determinants with a one-cut regular potential and Fisher–Hartwig singularities I. Equilibrium measure supported on the unit circle

Published online by Cambridge University Press:  15 August 2023

Elliot Blackstone
Affiliation:
Department of Mathematics, University of Michigan, Ann Arbor, USA ([email protected])
Christophe Charlier
Affiliation:
Centre for Mathematical Sciences, Lund University, 22100 Lund, Sweden ([email protected])
Jonatan Lenells
Affiliation:
Department of Mathematics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden ([email protected])

Abstract

We consider Toeplitz determinants whose symbol has: (i) a one-cut regular potential $V$, (ii) Fisher–Hartwig singularities and (iii) a smooth function in the background. The potential $V$ is associated with an equilibrium measure that is assumed to be supported on the whole unit circle. For constant potentials $V$, the equilibrium measure is the uniform measure on the unit circle and our formulas reduce to well-known results for Toeplitz determinants with Fisher–Hartwig singularities. For non-constant $V$, our results appear to be new even in the case of no Fisher–Hartwig singularities. As applications of our results, we derive various statistical properties of a determinantal point process which generalizes the circular unitary ensemble.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction

In this work, we obtain large $n$ asymptotics of the Toeplitz determinant

(1.1)\begin{equation} D_n(\vec\alpha,\vec\beta,V,W) := \det (f_{j-k})_{j,k=0,\ldots,n-1}, \quad f_k:=\frac{1}{2\pi}\int_0^{2\pi}f(e^{i\theta})e^{{-}ik\theta}{\rm d}\theta,\end{equation}

where $f$ is supported on the unit circle $\mathbb {T}=\{z\in \mathbb {C}:|z|=1\}$ and is of the form

(1.2)\begin{equation} f(z)=e^{{-}nV(z)}e^{W(z)}\omega(z), \quad z\in \mathbb{T}. \end{equation}

We assume that $V$ and $W$ are analytic in a neighbourhood of $\mathbb {T}$ and that the potential $V$ is real-valued on $\mathbb {T}$. The function $\omega (z)=\omega (z;\vec \alpha,\vec \beta )$ in (1.2) contains Fisher–Hartwig singularities and is defined in (1.8) below. Since the functions $V$ and $W$ are analytic on $\mathbb {T}$, there exists an open annulus $U$ containing $\mathbb {T}$ on which they admit Laurent series representations of the form

(1.3)\begin{align} V(z) & = V_{0} + V_+(z) + V_{-}(z),\quad V_+(z) = \sum_{k=1}^{+\infty} V_{k}z^{k}, \quad V_{-}(z) = \sum_{k={-}\infty}^{{-}1} V_{k}z^{k}, \end{align}
(1.4)\begin{align} & W(z) = W_{0} + W_+(z) + W_{-}(z), \quad W_+(z) = \sum_{k=1}^{+\infty} W_{k}z^{k},\quad W_{-}(z) = \sum_{k={-}\infty}^{{-}1} W_{k}z^{k}, \end{align}

where $V_{k}, W_{k}\in \mathbb {C}$ are the Fourier coefficients of $V$ and $W$, i.e. $V_{k} = {1}/{2\pi }\int _0^{2\pi }V(e^{i\theta })e^{-ik\theta }{\rm d}\theta$ and similarly for $W_{k}$. Associated to $V$ there is an equilibrium measure $\mu _{V}$, which is the unique minimizer of the functional

(1.5)\begin{equation} \mu \mapsto \iint \log \frac{1}{|z-s|} {\rm d}\mu(z){\rm d}\mu(s) + \int V(z){\rm d}\mu(z) \end{equation}

among all Borel probability measures $\mu$ on $\mathbb {T}$. In this paper, we make the assumption that $\mu _{V}$ is supported on the whole unit circle. We further assume that $V$ is regular, i.e. that the function $\psi$ given by

(1.6)\begin{equation} \psi(z) = \frac{1}{2\pi} - \frac{1}{2\pi} \sum_{\ell = 1}^{+\infty} \ell ( V_{\ell}z^{\ell} + \overline{V_{\ell}}z^{-\ell}), \quad z \in U, \end{equation}

is strictly positive on $\mathbb {T}$. Under these assumptions, we show in appendix A that

(1.7)\begin{equation} {\rm d}\mu_V(e^{i\theta})=\psi(e^{i\theta}){\rm d}\theta, \qquad \theta\in [0,2\pi). \end{equation}
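For orientation, consider the hypothetical one-parameter potential $V(e^{i\theta }) = 2\gamma \cos \theta$ (so that $V_{1}=V_{-1}=\gamma$ and all other coefficients vanish); then (1.6) gives $\psi (e^{i\theta }) = \frac {1}{2\pi } - \frac {\gamma }{\pi }\cos \theta$, which is strictly positive on $\mathbb {T}$ precisely when $|\gamma | < \frac {1}{2}$. The following Python sketch (our illustration, not part of the paper) evaluates (1.6) for this choice and checks that $\psi$ is a probability density:

```python
import numpy as np

# Hypothetical potential V(e^{i th}) = 2*gamma*cos(th), i.e. V_1 = V_{-1} = gamma.
gamma = 0.3                               # |gamma| < 1/2 keeps psi > 0 (one-cut regular)

def psi(theta, V_coeffs):
    """Density in (1.6): 1/(2 pi) - (1/(2 pi)) sum_{l>=1} l (V_l z^l + conj(V_l) z^{-l})."""
    z = np.exp(1j*theta)
    s = sum(l*(Vl*z**l + np.conj(Vl)*z**(-l)) for l, Vl in V_coeffs.items())
    return 1/(2*np.pi) - s.real/(2*np.pi)

theta = 2*np.pi*np.arange(4096)/4096      # periodic grid (no duplicated endpoint)
density = psi(theta, {1: gamma})
mass = density.mean()*2*np.pi             # periodic rule for int_0^{2pi} psi dtheta

assert abs(mass - 1) < 1e-10              # mu_V is a probability measure
assert density.min() > 0                  # regularity: psi strictly positive on T
```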

The function $\omega$ appearing in (1.2) is defined by

(1.8)\begin{equation} \omega(z) = \prod_{k=0}^{m} \omega_{\alpha_{k}}(z)\omega_{\beta_{k}}(z), \end{equation}

where $\omega _{\alpha _{k}}(z)$ and $\omega _{\beta _{k}}(z)$ are defined for $z=e^{i\theta }$ by

(1.9)\begin{equation} \omega_{\alpha_{k}}(z) \,{=}\, |z-t_k|^{\alpha_{k}}, \enspace \omega_{\beta_{k}}(z) \,{=}\, e^{i(\theta -\theta_{k})\beta_{k}} \,{\times} \left\{ \begin{array}{@{}ll} e^{i\pi\beta_{k}}, & \mbox{ if } 0 \leq \theta < \theta_{k}, \\ e^{{-}i \pi \beta_{k}}, & \mbox{ if } \theta_{k} \leq \theta < 2\pi, \end{array} \right. \enspace \theta \in [0,2\pi), \end{equation}

and

(1.10)\begin{equation} t_k:=e^{i\theta_k}, \quad 0 =\theta_{0} < \theta_{1} < \cdots < \theta_{m} < 2\pi. \end{equation}

At $t_k=e^{i\theta _{k}}$, the functions $\omega _{\alpha _{k}}$ and $\omega _{\beta _{k}}$ have root- and jump-type singularities, respectively. Note that $\omega _{\beta _{k}}$ is continuous at $z=1$ if $k \neq 0$. We allow the parameters $\theta _{1},\ldots,\theta _{m}$ to vary with $n$, but we require them to lie in a compact subset of $(0,2\pi )_{\mathrm {ord}}^{m}:=\{(\theta _{1},\ldots,\theta _{m}): 0 < \theta _{1} < \cdots < \theta _{m} < 2\pi \}$.

To summarize, the $n \times n$ Toeplitz determinant (1.1) depends on $n$, $m$, $V$, $W$, $\vec {t} = (t_{1},\ldots,t_{m})$, $\vec {\alpha }=(\alpha _1,\ldots,\alpha _m)$ and $\vec {\beta } = (\beta _{1},\ldots,\beta _{m})$, but for convenience the dependence on $m$ and $\vec {t}$ is omitted in the notation $D_n(\vec \alpha,\vec \beta,V,W)$. We now state our main result.
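For moderate $n$, the determinant (1.1) can also be evaluated directly: sample the symbol on a uniform grid, recover the Fourier coefficients $f_{k}$ by FFT, and take the determinant of the resulting Toeplitz matrix. As a sanity check (ours, not from the paper), the pure Fisher–Hartwig symbol $f(z)=|z-1|^{2} = 2-2\cos \theta$ (i.e. $m=0$, $\alpha _{0}=2$, $\beta _{0}=0$, $V=W=0$) produces a tridiagonal Toeplitz matrix with $2$ on the diagonal and $-1$ off it, so $D_{n}=n+1$:

```python
import numpy as np

def toeplitz_det(symbol_vals, n):
    """D_n = det(f_{j-k})_{j,k=0..n-1} from N uniform samples of the symbol on T."""
    N = len(symbol_vals)
    fk = np.fft.fft(symbol_vals) / N          # fk[k % N] approximates f_k
    T = np.array([[fk[(j - k) % N] for k in range(n)] for j in range(n)])
    return np.linalg.det(T)

N = 4096
theta = 2*np.pi*np.arange(N)/N
f = np.abs(np.exp(1j*theta) - 1)**2           # FH symbol |z-1|^2: f_0 = 2, f_{+-1} = -1
for n in range(1, 9):
    Dn = toeplitz_det(f, n)
    assert abs(Dn.real - (n + 1)) < 1e-8      # continuant of tridiag(-1, 2, -1)
    assert abs(Dn.imag) < 1e-10               # Hermitian Toeplitz matrix: det is real
```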

Theorem 1.1 Large $n$ asymptotics of $D_{n}(\vec {\alpha },\vec {\beta },V,W)$

Let $m \in \mathbb {N} :=\{0,1,\ldots \}$, and let $t_{k}=e^{i\theta _{k}}$, $\alpha _{k}\in \mathbb {C}$ and $\beta _{k} \in \mathbb {C}$ be such that

\begin{align*} & 0 = \theta_{0} < \theta_{1} < \ldots < \theta_{m} < 2\pi, \quad \mbox{ and}\\ & {\rm Re\,} \alpha_{k} >{-}1, \quad {\rm Re\,} \beta_{k} \in (-\tfrac{1}{2},\tfrac{1}{2}) \quad \mbox{ for } k=0,\ldots,m. \end{align*}

Let $V: \mathbb {T}\to \mathbb {R}$ and $W: \mathbb {T}\to \mathbb {C}$, and suppose $V$ and $W$ can be extended to analytic functions in a neighbourhood of $\mathbb {T}$. Suppose that the equilibrium measure ${\rm d}\mu _V(e^{i\theta })=\psi (e^{i\theta }){\rm d}\theta$ associated to $V$ is supported on $\mathbb {T}$ and that $\psi > 0$ on $\mathbb {T}$. Then, as $n \to \infty$,

(1.11)\begin{equation} D_{n}(\vec{\alpha},\vec{\beta},V,W) = \exp(C_{1} n^{2} + C_{2} n + C_{3} \log n + C_{4} + {\mathcal{O}} (n^{{-}1+2\beta_{\max}})), \end{equation}

with $\beta _{\max } = \max \{ |{\rm Re\,} \beta _{1}|,\ldots,|{\rm Re\,} \beta _{m}| \}$ and

\begin{align*} C_{1} & ={-}\frac{V_{0}}{2}-\frac{1}{2}\int_0^{2\pi}V(e^{i\theta}) {\rm d}\mu_{V}(e^{i\theta}), \\ C_{2} & =\sum_{k=0}^{m} \frac{\alpha_{k}}{2}(V(t_{k})-V_{0}) - \sum_{k=0}^{m} 2i\beta_{k} {\rm Im\,}(V_+(t_{k})) + \int_0^{2\pi}W(e^{i\theta}){\rm d}\mu_V(e^{i\theta}), \\ C_{3} & = \sum_{k=0}^{m} \left( \frac{\alpha_{k}^{2}}{4}-\beta_{k}^{2} \right), \\ C_{4} & =\sum_{\ell = 1}^{+\infty} \ell W_{\ell}W_{-\ell} - \sum_{k=0}^{m} \frac{\alpha_{k}}{2}(W(t_{k})-W_{0}) + \sum_{k=0}^{m}\beta_{k} \left(W_+(t_{k})-W_{-}(t_{k})\right) \\ & \quad+ \sum_{0 \leq j < k \leq m} \Bigg\{ \frac{\alpha_{j} i \beta_{k} - \alpha_{k} i \beta_{j}}{2}(\theta_{k}-\theta_{j }-\pi) + \left( 2\beta_{j}\beta_{k}-\frac{\alpha_{j}\alpha_{k}}{2} \right) \log |t_{j}-t_{k}| \Bigg\} \\ & \quad + \sum_{k=0}^{m} \log \frac{G(1+\frac{\alpha_{k}}{2}+\beta_{k})G(1+\frac{\alpha_{k}}{2}-\beta_{k})}{G(1+\alpha_{k})} + \sum_{k=0}^m\frac{\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4}}{\psi(t_k)}\left(\frac{1}{2\pi}-\psi(t_k)\right), \end{align*}

where $G$ is Barnes’ $G$-function. Furthermore, the above asymptotics are uniform for all $\alpha _{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Re\,} z >-1\}$, for all $\beta _{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Re\,} z \in (-\frac {1}{2},\frac {1}{2})\}$ and for all $(\theta _{1},\ldots,\theta _{m})$ in compact subsets of $(0,2\pi )_{\mathrm {ord}}^{m}$. The above asymptotics can also be differentiated with respect to $\alpha _{0},\ldots,\alpha _{m},\beta _{0},\ldots,\beta _{m}$ as follows: if $k_{0},\ldots,k_{2m+1}\in \mathbb {N}$, $k_{0}+\ldots +k_{2m+1}\geq 1$ and $\partial ^{\vec {k}}:=\partial _{\alpha _{0}}^{k_{0}}\ldots \partial _{\alpha _{m}}^{k_{m}}\partial _{\beta _{0}}^{k_{m+1}}\ldots \partial _{\beta _{m}}^{k_{2m+1}}$, then

(1.12)\begin{align} \partial^{\vec{k}}\left( \log D_{n}(\vec{\alpha},\vec{\beta},V,W) - \log \widehat{D}_{n} \right) = {\mathcal{O}} \left( \frac{(\log n)^{k_{m+1}+\ldots+k_{2m+1}}}{n^{1-2\beta_{\max}}} \right), \qquad \mbox{as } n \to + \infty, \end{align}

where $\widehat {D}_{n}$ denotes the right-hand side of (1.11) without the error term.
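To illustrate theorem 1.1 numerically, take the hypothetical potential $V(e^{i\theta })=2\gamma \cos \theta$ with $|\gamma |<\frac {1}{2}$, $W=0$ and no Fisher–Hartwig singularities ($\vec {\alpha }=\vec {\beta }=\vec {0}$). Then $V_{0}=0$, ${\rm d}\mu _V = (\frac {1}{2\pi }-\frac {\gamma }{\pi }\cos \theta ){\rm d}\theta$ and $\int V {\rm d}\mu _V = -2\gamma ^{2}$, so $C_{1}=\gamma ^{2}$ while $C_{2}=C_{3}=C_{4}=0$, and the theorem predicts $\log D_{n} = \gamma ^{2} n^{2} + {\mathcal {O}}(n^{-1})$. A minimal numerical sketch (ours, with arbitrarily chosen $\gamma$ and $n$):

```python
import numpy as np

gamma, n, N = 0.3, 16, 8192                   # |gamma| < 1/2: one-cut regular case
theta = 2*np.pi*np.arange(N)/N
f = np.exp(-n * 2*gamma*np.cos(theta))        # symbol e^{-nV}, V(e^{i th}) = 2 gamma cos th

fk = np.fft.fft(f) / N                        # Fourier coefficients of the symbol
T = np.array([[fk[(j - k) % N] for k in range(n)] for j in range(n)])
logDn = np.log(np.linalg.det(T).real)         # Hermitian Toeplitz matrix: det > 0

C1 = gamma**2                                 # C1 = -V0/2 - (1/2) int V dmu_V = gamma^2
assert abs(logDn - C1 * n**2) < 0.05          # C2 = C3 = C4 = 0 for this potential
```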

1.1 History and related work

In the case when the potential $V(z)$ in (1.2) vanishes identically, the asymptotic evaluation of Toeplitz determinants of the form (1.1) has a long and distinguished history. The first important result was obtained in 1915 by Szegő, who determined the leading behaviour of $D_{n}(\vec {\alpha },\vec {\beta },V,W)$ in the case when $\vec {\alpha } = \vec {\beta } = \vec {0}$ and $V = 0$, that is, when the symbol $f(z)$ is given by $f(z) = e^{W(z)}$. In our notation, this result, known as the first Szegő limit theorem [Reference Szegő45], can be expressed as

(1.13)\begin{equation} D_{n}(\vec{0},\vec{0},0,W) = \exp\left(\frac{n}{2\pi}\int_0^{2\pi}W(e^{i\theta}){\rm d}\theta + o(n)\right) \qquad \text{as }n \to \infty. \end{equation}

Later, in the 1940s, it became clear from the pioneering work of Kaufman and Onsager that a more detailed understanding of the error term in (1.13) could be used to compute two-point correlation functions in the two-dimensional Ising model in the thermodynamic limit [Reference Kaufman and Onsager39]. This inspired Szegő to seek a stronger version of (1.13). The outcome was the so-called strong Szegő limit theorem [Reference Szegő46], which in our notation states that

(1.14)\begin{align} D_{n}(\vec{0},\vec{0},0,W) = \exp\left(\frac{n}{2\pi}\int_0^{2\pi}W(e^{i\theta}){\rm d}\theta + \sum_{\ell = 1}^{+\infty} \ell W_{\ell}W_{-\ell} + o(1)\right) \qquad \text{as }n \to \infty. \end{align}

We observe that if $V = 0$, then ${\rm d}\mu _V(e^{i\theta }) = \frac {{\rm d}\theta }{2\pi }$; thus, Szegő's theorems are consistent with our main result, theorem 1.1, in the special case when $\vec {\alpha } = \vec {\beta } = \vec {0}$ and $V = 0$. (The strong Szegő theorem actually holds under much weaker assumptions on $W$ than what is assumed in this paper, see e.g. the survey [Reference Basor7].)
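As a concrete check of (1.14) (our illustration, not from the paper), take $W(e^{i\theta })=\cos \theta$, so $W_{0}=0$, $W_{\pm 1}=\frac {1}{2}$ and $\sum _{\ell } \ell W_{\ell }W_{-\ell } = \frac {1}{4}$; for such an entire symbol, $D_{n}$ converges to $e^{1/4}$ extremely fast:

```python
import numpy as np

N = 2048
theta = 2*np.pi*np.arange(N)/N
f = np.exp(np.cos(theta))                 # symbol e^{W}, W = cos(theta), W_{+-1} = 1/2

fk = np.fft.fft(f) / N
n = 24
T = np.array([[fk[(j - k) % N] for k in range(n)] for j in range(n)])
logDn = np.log(np.linalg.det(T).real)

# Strong Szego: log D_n -> n*W_0 + sum_l l W_l W_{-l} = 0 + 1/4
assert abs(logDn - 0.25) < 1e-6
```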

In a groundbreaking paper from 1968, Fisher and Hartwig introduced a class of singular symbols $f(z)$ for which they convincingly conjectured a detailed asymptotic formula for the associated Toeplitz determinant [Reference Fisher and Hartwig32]. The Fisher–Hartwig class consists of symbols $f(z)$ of the form (1.2) with $V = 0$. In our notation, the Fisher–Hartwig conjecture can be formulated as

(1.15)\begin{align} D_{n}(\vec{\alpha},\vec{\beta},0,W) \sim \exp\left(\frac{n}{2\pi}\int_0^{2\pi}W(e^{i\theta}){\rm d}\theta + \sum_{k=0}^{m} \left( \frac{\alpha_{k}^{2}}{4}-\beta_{k}^{2} \right)\log{n} + C_4\right)\nonumber\\ \text{as }n \to \infty, \end{align}

where $C_4$ is a constant to be determined, and the Fisher–Hartwig singularities are encoded in the vectors $\vec {\alpha }$ and $\vec {\beta }$. Symbols with Fisher–Hartwig singularities arise in many applications. For example, in the 1960s, Lenard proved [Reference Lenard41] that no Bose–Einstein condensation exists in the ground state for a one-dimensional system of impenetrable bosons by considering Toeplitz determinants with symbols of the form $f(z) = |z-e^{i\theta _1}| |z - e^{-i\theta _1}|$ with $\theta _1 \in {\mathbb {R}}$. Lenard's proof hinges on an inequality whose proof was provided by Szegő, see [Reference Lenard41, Theorem 2]. We observe that (1.15) is consistent with theorem 1.1 in the special case when $V = 0$.

There are too many works devoted to proofs and generalizations of the Fisher–Hartwig conjecture (1.15) for us to cite them all, but we refer to [Reference Basor4, Reference Böttcher and Silbermann11, Reference Widom47] for some early works, and to [Reference Basor and Morrison5, Reference Basor and Tracy6, Reference Böttcher10, Reference Deift, Its and Krasovsky25] for four reviews. The current state-of-the-art for non-merging singularities and for $\vec {\alpha }$, $\vec {\beta }$ in compact subsets was set by Ehrhardt in his 1997 Ph.D. thesis (see [Reference Ehrhardt29]) and by Deift, Its and Krasovsky in [Reference Deift, Its and Krasovsky24, Reference Deift, Its and Krasovsky26]. Since our proof builds on the results for the case of $V = 0$, we have included a version of the asymptotic formulas of [Reference Deift, Its and Krasovsky24, Reference Deift, Its and Krasovsky26, Reference Ehrhardt29] in theorem 4.1. We also refer to [Reference Claeys and Krasovsky21, Reference Fahs31] for studies of merging Fisher–Hartwig singularities with $V=0$, and to [Reference Charlier and Claeys17] for the case of large discontinuities with $V=0$.

Note that if $V=V_{0}$ is a constant, then $D_{n}(\vec {\alpha },\vec {\beta },V_{0},W)=e^{-n^{2}V_{0}}D_{n}(\vec {\alpha },\vec {\beta },0,W)$.

The novelty of the present work is that we consider symbols that include a non-constant potential $V$; we are not aware of any previous works on the unit circle including such potentials. Our main result is formulated under the assumption that ${\rm Re\,} \beta _{k} \in (-\frac {1}{2},\frac {1}{2})$ for all $k$. The general case where ${\rm Re\,} \beta _{k} \in \mathbb {R}$ was treated in the case of $V=0$ in [Reference Deift, Its and Krasovsky24]. Asymptotic formulas for Hankel determinants with a one-cut regular potential $V$ and Fisher–Hartwig singularities were obtained in [Reference Berestycki, Webb and Wong8, Reference Charlier14, Reference Charlier and Gharakhloo19], and the corresponding multi-cut case was considered in [Reference Charlier, Fahs, Webb and Wong18]. Our proofs draw on some of the techniques developed in these papers.

1.2 Application: a determinantal point process on the unit circle

The Toeplitz determinant (1.1) admits the Heine representation

(1.16)\begin{equation} D_n(\vec\alpha,\vec\beta,V,W) = \frac{1}{n!(2\pi)^n}\int_{[0,2\pi]^{n}} \prod_{1 \leq j < k \leq n} |e^{i\phi_{k}}-e^{i\phi_{j}}|^{2}\prod_{j=1}^{n} f(e^{i\phi_{j}}){\rm d}\phi_{j}. \end{equation}

This suggests that the results of theorem 1.1 can be applied to obtain information about the point process on $\mathbb {T}$ defined by the probability measure

(1.17)\begin{align} \frac{1}{n! (2\pi)^n Z_{n}} \prod_{1 \leq j < k \leq n} |e^{i\phi_{k}}-e^{i\phi_{j}}|^{2}\prod_{j=1}^{n} e^{{-}nV(e^{i\phi_{j}})}{\rm d}\phi_{j}, \qquad \phi_{1},\ldots,\phi_{n}\in[0,2\pi), \end{align}

where $Z_{n} = D_{n}(\vec {0},\vec {0},V,0)$ is the normalization constant (also called the partition function). In what follows, we use theorem 1.1 to obtain smooth statistics, log statistics, counting statistics and rigidity bounds for the point process (1.17). In the case of constant $V$, the point process (1.17) describes the distribution of eigenvalues of matrices drawn from the circular unitary ensemble and has already been widely studied. We are not aware of any earlier work where the process (1.17) is considered explicitly for non-constant $V$. However, the point process (1.17), but with $nV(e^{i\phi })$ replaced by the highly oscillatory potential $V(e^{in\phi })$, is studied in [Reference Baik2, Reference Forrester34]. We also refer to [Reference Bourgade and Falconet12, Reference Byun and Seo13] for other determinantal generalizations of the circular unitary ensemble.
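The Heine representation (1.16) is easy to test numerically for small $n$ (our sketch, not from the paper): for $n=2$ and the smooth symbol $f=e^{\cos \theta }$ (so $V=0$, $W=\cos \theta$, no singularities), the left-hand side is $f_{0}^{2}-f_{1}f_{-1}$, and the double integral can be evaluated with the periodic trapezoid rule:

```python
import numpy as np

N = 512
phi = 2*np.pi*np.arange(N)/N
w = 2*np.pi/N                              # periodic-rule weight (spectrally accurate here)
f = np.exp(np.cos(phi))                    # smooth symbol: V = 0, W = cos(theta)

# Left side of (1.16) for n = 2: D_2 = det (f_{j-k})_{j,k=0,1} = f_0^2 - f_1 f_{-1}
fk = np.fft.fft(f) / N
D2 = (fk[0]*fk[0] - fk[1]*fk[-1]).real

# Right side of (1.16): (1/(2! (2 pi)^2)) int int |e^{i p2} - e^{i p1}|^2 f(p1) f(p2)
P1, P2 = np.meshgrid(phi, phi, indexing='ij')
vandermonde2 = np.abs(np.exp(1j*P2) - np.exp(1j*P1))**2
heine = (vandermonde2 * np.outer(f, f)).sum() * w*w / (2 * (2*np.pi)**2)
assert abs(D2 - heine) < 1e-10
```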

Let $\mathsf {p}_{n}(z):=\prod _{j=1}^{n}(e^{i\phi _{j}}-z)$ be the characteristic polynomial associated to (1.17), and define $\log \mathsf {p}_{n}(z)$ for $z \in \mathbb {T}\setminus \{e^{i\phi _{1}},\ldots,e^{i\phi _{n}}\}$ by

\begin{align*} \log \mathsf{p}_{n}(z) & := \sum_{j=1}^{n} \log (e^{i\phi_{j}}-z), \qquad {\rm Im\,} \log (e^{i\phi_{j}}-z)\\ & := \frac{\phi_{j} + \arg_{0} z}{2} + \begin{cases} \frac{3\pi}{2}, & \mbox{if } 0 \leq \phi_{j} < \arg_{0} z, \\ \frac{\pi}{2}, & \mbox{if } \arg_{0} z < \phi_{j} < 2\pi, \end{cases} \end{align*}

where $\arg _{0} z \in [0,2\pi )$. In particular, if $\theta _{k}\notin \{\phi _{1},\ldots,\phi _{n}\}$,

(1.18)\begin{align} e^{2i \beta_{k}({\rm Im\,} \log \mathsf{p}_{n}(t_{k})-n\theta_{k}-n\pi)} = \prod_{j=1}^{n} \omega_{\beta_{k}}(e^{i\phi_{j}}) = e^{{-}i\beta_{k}(\pi+\theta_{k}) n }e^{2\pi i \beta_{k}N_{n}(\theta_{k})}\prod_{j=1}^{n} e^{i\beta_{k}\phi_{j}}, \end{align}

where $N_{n}(\theta ):=\#\{\phi _{j} \in [0,\theta ]\} \in \{0,1,\ldots,n\}$. Using the first identity in (1.18) and the fact that $\{\theta _{0},\ldots,\theta _{m}\} \cap \{\phi _{1},\ldots,\phi _{n}\} = \emptyset$ with probability one, it is straightforward to see that

(1.19)\begin{align} \mathbb{E}\Bigg[\prod_{j=1}^{n}e^{W(e^{i\phi_{j}})}\prod_{k=0}^{m}e^{\alpha_{k}{\rm Re\,} \log \mathsf{p}_{n}(t_{k})}e^{2i\beta_{k}({\rm Im\,} \log \mathsf{p}_{n}(t_{k})-n\theta_{k}-n\pi)}\Bigg] = \frac{D_{n}(\vec{\alpha},\vec{\beta},V,W)}{D_{n}(\vec{0},\vec{0},V,0)}. \end{align}

Furthermore, if $\beta _{0}=-\beta _{1}-\ldots -\beta _{m}$, then the second identity in (1.18) together with (1.19) implies

(1.20)\begin{equation} \frac{D_{n}(\vec{\alpha},\vec{\beta},V,W)}{D_{n}(\vec{0},\vec{0},V,0)} = \prod_{k=1}^{m} e^{{-}i \beta_{k} \theta_{k} n } \times \mathbb{E}\Bigg[\prod_{j=1}^{n}e^{W(e^{i\phi_{j}})}\prod_{k=0}^{m}|\mathsf{p}_{n}(t_{k})|^{\alpha_{k}}e^{2 \pi i\beta_{k}N_{n}(\theta_{k})}\Bigg]. \end{equation}

Lemma 1.2 For any $z \in \mathbb {T}$, we have

(1.21)\begin{align} & \frac{V(z)-V_{0}}{2} = \int_{0}^{2\pi} \log |e^{i\theta}-z|{\rm d}\mu_{V}(e^{i\theta}), \end{align}
(1.22)\begin{align} & \frac{\arg_{0} z}{2\pi} - \frac{{\rm Im\,} V_+(z) - {\rm Im\,} V_+(1)}{\pi} = \int_{0}^{\arg_{0} z} {\rm d}\mu_{V}(e^{i\theta}). \end{align}

Proof. The equilibrium measure $\mu _{V}$ is uniquely characterized by the Euler–Lagrange variational equality

(1.23)\begin{equation} 2 \int_{0}^{2\pi} \log |z-e^{i\theta}| {\rm d}\mu_{V}(e^{i\theta}) = V(z) - \ell, \quad\text{ for } z \in \mathbb{T}, \end{equation}

where $\ell \in \mathbb {R}$ is a constant, see e.g. [Reference Saff and Totik42]. In particular, the identity (1.21) is equivalent to the statement that $\ell =V_{0}$. The equality $\ell =V_{0}$ can be established by integrating (1.23) over $z=e^{i\phi } \in \mathbb {T}$ and dividing by $2\pi$:

\[ \ell = \int_{0}^{2\pi} \ell \frac{{\rm d}\phi}{2\pi} = \int_{0}^{2\pi} \left( V(z) - 2 \int_{0}^{2\pi} \log|e^{i\phi}-e^{i\theta}|{\rm d}\mu_{V}(e^{i\theta}) \right)\frac{{\rm d}\phi}{2\pi} = V_{0}, \]

where we have used the well-known (see e.g. [Reference Saff and Totik42, Example 0.5.7]) identity $\int _{0}^{2\pi } \log |e^{i\phi }-e^{i\theta }| \frac {{\rm d}\phi }{2\pi } =0$ for $\theta \in [0,2\pi )$. This proves (1.21). Identity (1.22) follows from (1.6) and (1.3).
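For the hypothetical potential $V(e^{i\theta })=2\gamma \cos \theta$ of earlier examples, identity (1.21) reads $\int _{0}^{2\pi } \log |e^{i\theta }-z|\,{\rm d}\mu _{V}(e^{i\theta }) = \gamma \cos (\arg _{0} z)$. The following sketch (ours, not from the paper) verifies this on a midpoint grid, chosen so that the quadrature nodes never hit the logarithmic singularity at $\theta = \arg _{0} z$:

```python
import numpy as np

gamma = 0.3
N = 1 << 16
theta = 2*np.pi*(np.arange(N) + 0.5)/N            # midpoint grid: avoids the node z itself
psi = 1/(2*np.pi) - (gamma/np.pi)*np.cos(theta)   # density (1.6) for V = 2 gamma cos(theta)
phi0 = 0.7                                        # arbitrary test point z = e^{i phi0}
z = np.exp(1j*phi0)

rhs = np.sum(np.log(np.abs(np.exp(1j*theta) - z)) * psi) * 2*np.pi/N
lhs = (2*gamma*np.cos(phi0) - 0.0)/2              # (V(z) - V_0)/2 with V_0 = 0
assert abs(lhs - rhs) < 1e-3                      # log-kernel quadrature error is O(1/N)
```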

Combining (1.20), theorem 1.1 and lemma 1.2, we get the following.

Theorem 1.3 Let $m \in \mathbb {N},$ and let $t_{k}=e^{i\theta _{k}},$ $\alpha _{0},\ldots,\alpha _{m}\in \mathbb {C}$ and $u_{1},\ldots,u_{m} \in \mathbb {C}$ be such that

\[ 0 = \theta_{0} < \theta_{1} < \ldots < \theta_{m} < 2\pi, \quad \mbox{ and } \quad {\rm Re\,} \alpha_{k} >{-}1, \quad {\rm Im\,} u_{k} \in (-\pi,\pi) \quad \mbox{for all } k. \]

Let $V: \mathbb {T}\to \mathbb {R},$ $W: \mathbb {T}\to \mathbb {C}$ and suppose $V,$ $W$ can be extended to analytic functions in a neighbourhood of $\mathbb {T}$. Suppose that the equilibrium measure ${\rm d}\mu _V(e^{i\theta })=\psi (e^{i\theta }){\rm d}\theta$ associated to $V$ is supported on $\mathbb {T}$ and that $\psi > 0$ on $\mathbb {T}$. Then, as $n \to \infty,$ we have

(1.24)\begin{align} & \mathbb{E}\Bigg[\prod_{j=1}^{n}e^{W(e^{i\phi_{j}})}\prod_{k=0}^{m}|\mathsf{p}_{n}(t_{k})|^{\alpha_{k}}\prod_{k=1}^{m}e^{u_{k}N_{n}(\theta_{k})}\Bigg]\nonumber\\ & \quad = \exp\left( \tilde{C}_{1} n + \tilde{C}_{2} \log n + \tilde{C}_{3} + {\mathcal{O}} \left( n^{{-}1+\frac{u_{\max}}{\pi}} \right)\right), \end{align}

with $u_{\max } = \max \{ |{\rm Im\,} u_{1}|,\ldots,|{\rm Im\,} u_{m}| \}$ and

(1.25)\begin{align} \tilde{C}_{1} & = \sum_{k=0}^{m} \alpha_{k}\int_{0}^{2\pi} \log|e^{i\phi}-t_{k}|{\rm d}\mu_V(e^{i\phi}) + \sum_{k=1}^{m} u_{k} \int_{0}^{\theta_{k}} {\rm d}\mu_V(e^{i\phi})\nonumber\\ & \quad+ \int_0^{2\pi} W(e^{i\phi}){\rm d}\mu_V(e^{i\phi}), \end{align}
(1.26)\begin{align} \tilde{C}_{2} & = \sum_{k=0}^{m} \left( \frac{\alpha_{k}^{2}}{4}+\frac{u_{k}^{2}}{4\pi^{2}} \right), \end{align}
(1.27)\begin{align} \tilde{C}_{3} & = \sum_{\ell = 1}^{+\infty} \ell W_{\ell}W_{-\ell} - \sum_{k=0}^{m} \alpha_{k} \frac{W_+(t_{k})+W_{-}(t_{k})}{2} + \sum_{k=0}^{m} \frac{u_{k}}{\pi}\frac{W_+(t_{k}) -W_{-}(t_{k})}{2i} \end{align}
(1.28)\begin{align} & + \sum_{0 \leq j < k \leq m} \Bigg\{ \frac{\alpha_{j} u_{k} - \alpha_{k} u_{j}}{4\pi}(\theta_{k}-\theta_{j }-\pi) - \left( \frac{u_{j}u_{k}}{2\pi^{2}}+\frac{\alpha_{j}\alpha_{k}}{2} \right) \log |t_{j}-t_{k}| \Bigg\} \end{align}
(1.29)\begin{align} & + \sum_{k=0}^{m} \log \frac{G(1+\frac{\alpha_{k}}{2}+\frac{u_{k}}{2\pi i})G(1+\frac{\alpha_{k}}{2}-\frac{u_{k}}{2\pi i})}{G(1+\alpha_{k})} - \sum_{k=0}^m\frac{\frac{u_{k}^{2}}{\pi^{2}}+\alpha_{k}^{2}}{4\psi(t_k)}\left(\frac{1}{2\pi}-\psi(t_k)\right), \end{align}

where $G$ is Barnes’ $G$-function and $u_{0}:=-u_{1}-\ldots -u_{m}$. Furthermore, the above asymptotics are uniform for all $\alpha _{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Re\,} z >-1\}$, for all $u_{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Im\,} z \in (-\pi,\pi )\}$ and for all $(\theta _{1},\ldots,\theta _{m})$ in compact subsets of $(0,2\pi )_{\mathrm {ord}}^{m}$. The above asymptotics can also be differentiated with respect to $\alpha _{0},\ldots,\alpha _{m},u_{1},\ldots,u_{m}$ as follows: if $k_{0},\ldots,k_{2m}\in \mathbb {N}$, $k_{0}+\ldots +k_{2m}\geq 1$ and $\partial ^{\vec {k}}:=\partial _{\alpha _{0}}^{k_{0}}\ldots \partial _{\alpha _{m}}^{k_{m}}\partial _{u_{1}}^{k_{m+1}}\ldots \partial _{u_{m}}^{k_{2m}}$, then as $n \to + \infty$

\begin{align*} & \partial^{\vec{k}}\left( \log \mathbb{E}\Bigg[\prod_{j=1}^{n}e^{W(e^{i\phi_{j}})}\prod_{k=0}^{m}|\mathsf{p}_{n}(t_{k})|^{\alpha_{k}}\prod_{k=1}^{m}e^{u_{k}N_{n}(\theta_{k})}\Bigg] - \log \widehat{E}_{n} \right)\\ & \quad = {\mathcal{O}} \left( \frac{(\log n)^{k_{m+1}+\ldots+k_{2m}}}{n^{1-\frac{u_{\max}}{\pi}}} \right), \end{align*}

where $\widehat {E}_{n}$ denotes the right-hand side of (1.24) without the error term.

Our first corollary is concerned with the smooth linear statistics of (1.17). For $V=0$, the central limit theorem stated in corollary 1.4 was already obtained in [Reference Johansson38].

Corollary 1.4 Smooth statistics

Let $V$ and $W$ be as in theorem 1.3, and assume furthermore that $W:\mathbb {T}\to \mathbb {R}$. Let $\{\kappa _{j}\}_{j=1}^{+\infty }$ be the cumulants of $\sum _{j=1}^{n}W(e^{i\phi _{j}})$, i.e.

(1.30)\begin{equation} \kappa_{j} := \partial_{t}^{j} \log \mathbb{E}[e^{t \sum_{j=1}^{n}W(e^{i\phi_{j}})}]\big|_{t=0}. \end{equation}

As $n \to + \infty$, we have

\begin{align*} & \mathbb{E}\Bigg[\sum_{j=1}^{n}W(e^{i\phi_{j}})\Bigg] = n \int_0^{2\pi} W(e^{i\phi}){\rm d}\mu_V(e^{i\phi}) + {\mathcal{O}} \left( \frac{1}{n} \right), \\ & \mathrm{Var}\Bigg[\sum_{j=1}^{n}W(e^{i\phi_{j}})\Bigg] = 2\sum_{\ell = 1}^{+\infty} \ell W_{\ell}W_{-\ell} + {\mathcal{O}} \left( \frac{1}{n} \right), \\ & \kappa_{j} = {\mathcal{O}} \left( \frac{1}{n} \right), \qquad j \geq 3. \end{align*}

Moreover, if $W$ is non-constant, then

\[ \frac{\sum_{j=1}^{n}W(e^{i\phi_{j}})-n\int_0^{2\pi} W(e^{i\phi}){\rm d}\mu_V(e^{i\phi})}{(2\sum_{k = 1}^{+\infty} kW_{k}W_{{-}k})^{1/2}} \]

converges in distribution to a standard normal random variable.
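For $V=0$ and $W(e^{i\theta })=\cos \theta$, the linear statistic is ${\rm Re\,Tr\,}U$ over the circular unitary ensemble, and the limiting variance $2\sum _{\ell } \ell W_{\ell }W_{-\ell } = \frac {1}{2}$ is in fact already exact for moderate $n$ (a classical CUE moment computation). The cumulant generating function can be evaluated through the Toeplitz determinant, as in (1.19); a numerical sketch (ours, not from the paper) using finite differences:

```python
import numpy as np

def log_toeplitz_det(symbol_vals, n):
    """log D_n for the symbol sampled on a uniform grid of T."""
    N = len(symbol_vals)
    fk = np.fft.fft(symbol_vals) / N
    T = np.array([[fk[(j - k) % N] for k in range(n)] for j in range(n)])
    return np.log(np.linalg.det(T).real)

n, N, h = 20, 1024, 0.05
theta = 2*np.pi*np.arange(N)/N
F = lambda t: log_toeplitz_det(np.exp(t*np.cos(theta)), n)  # log E[e^{t sum W(e^{i phi_j})}]

mean = (F(h) - F(-h)) / (2*h)             # kappa_1 = n*W_0 = 0 here
var = (F(h) - 2*F(0) + F(-h)) / h**2      # kappa_2 -> 2 sum_l l W_l W_{-l} = 1/2
assert abs(mean) < 1e-8
assert abs(var - 0.5) < 1e-4
```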

Our second corollary considers linear statistics for a test function with a $\log$-singularity at $t$. We let $\gamma _{\mathrm {E}}\approx 0.5772$ denote Euler's constant.

Corollary 1.5 $\log |\cdot |$-statistics

Let $t=e^{i\theta } \in \mathbb {T}$ with $\theta \in [0,2\pi ),$ and let $\{\kappa _{j}\}_{j=1}^{+\infty }$ be the cumulants of $\log |\mathsf {p}_{n}(t)|,$ i.e.

(1.31)\begin{equation} \kappa_{j} := \partial_{\alpha}^{j} \log \mathbb{E}[e^{\alpha \log |\mathsf{p}_{n}(t)|}]\big|_{\alpha=0}. \end{equation}

As $n \to + \infty$, we have

\begin{align*} & \mathbb{E}[\log |\mathsf{p}_{n}(t)|] = n \int_{0}^{2\pi} \log|e^{i\phi}-t|{\rm d}\mu_V(e^{i\phi}) + {\mathcal{O}} \left( \frac{1}{n} \right), \\ & \mathrm{Var}[\log |\mathsf{p}_{n}(t)|] = \frac{\log n}{2} + \frac{1+\gamma_{\mathrm{E}}}{2} - \frac{\frac{1}{2\pi}-\psi(t)}{2\psi(t)} + {\mathcal{O}} \left( \frac{1}{n} \right), \\ & \kappa_{j} = ({-}1+2^{1-j}) \; (\log G)^{(j)}(1) + {\mathcal{O}} \left( \frac{1}{n} \right), \qquad j \geq 3, \end{align*}

and

\[ \frac{\log |\mathsf{p}_{n}(t)|-n\int_{0}^{2\pi} \log|e^{i\phi}-t|{\rm d}\mu_V(e^{i\phi})}{\sqrt{\log n}/\sqrt{2}} \]

converges in distribution to a standard normal random variable.
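Corollary 1.5 can be probed numerically in the CUE case $V=0$, $t=1$ (our sketch, not from the paper): by (1.19), $\mathbb {E}[|\mathsf {p}_{n}(1)|^{\alpha }]$ is the Toeplitz determinant with symbol $|z-1|^{\alpha }$, whose Fourier coefficients are $f_{k} = (-1)^{k}\Gamma (1+\alpha )/(\Gamma (1+\frac {\alpha }{2}+k)\Gamma (1+\frac {\alpha }{2}-k))$ (a standard Fisher–Hartwig moment formula), and the first two cumulants follow by finite differences in $\alpha$:

```python
import math
import numpy as np

def fh_coeff(alpha, k):
    """Fourier coefficient f_k of |z-1|^alpha = (2 sin(theta/2))^alpha:
    f_k = (-1)^k Gamma(1+alpha) / (Gamma(1+alpha/2+k) Gamma(1+alpha/2-k)),
    written with the reflection formula so all gamma arguments stay positive."""
    k = abs(k)                               # the coefficients are symmetric in k
    if k == 0:
        return math.gamma(1 + alpha) / math.gamma(1 + alpha/2)**2
    # 1/Gamma(1+alpha/2-k) = Gamma(k-alpha/2) sin(pi (1+alpha/2-k)) / pi
    return ((-1)**k * math.gamma(1 + alpha) * math.gamma(k - alpha/2)
            * math.sin(math.pi*(1 + alpha/2 - k)) / (math.pi * math.gamma(1 + alpha/2 + k)))

def F(alpha, n):
    """log E[|p_n(1)|^alpha] for the CUE, as a Toeplitz determinant."""
    T = np.array([[fh_coeff(alpha, j - l) for l in range(n)] for j in range(n)])
    return np.log(np.linalg.det(T))

n, h = 40, 0.05
k1 = (F(h, n) - F(-h, n)) / (2*h)            # E[log|p_n(1)|] = 0 when V = 0
k2 = (F(h, n) - 2*F(0, n) + F(-h, n)) / h**2

gammaE = 0.5772156649
pred = np.log(n)/2 + (1 + gammaE)/2          # Var ~ (log n)/2 + (1+gamma_E)/2, psi = 1/2pi
assert abs(k1) < 0.01
assert abs(k2 - pred) < 0.05
```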

Counting statistics of determinantal point processes have been widely studied over the years [Reference Costin and Lebowitz22, Reference Soshnikov44] and remain a subject of active research, see e.g. the recent works [Reference Charlier16, Reference Dai, Xu and Zhang23, Reference Smith, Le Doussal, Majumdar and Schehr43]. Our third corollary establishes various results on the counting statistics of (1.17).

Corollary 1.6 Counting statistics

Let $t=e^{i\theta } \in \mathbb {T}$ be bounded away from $t_{0}:=1$, with $\theta \in (0,2\pi )$, and let $\{\kappa _{j}\}_{j=1}^{+\infty }$ be the cumulants of $N_{n}(\theta )$, i.e.

(1.32)\begin{equation} \kappa_{j} := \partial_{u}^{j} \log \mathbb{E}[e^{u N_{n}(\theta)}]\big|_{u=0}. \end{equation}

As $n \to + \infty$, we have

\begin{align*} & \mathbb{E}[N_{n}(\theta)] = n \int_{0}^{\theta} {\rm d}\mu_V(e^{i\phi}) + {\mathcal{O}} \left( \frac{\log n}{n} \right), \\ & \mathrm{Var}[N_{n}(\theta)] = \frac{\log n}{\pi^{2}} + \frac{1+\gamma_{\mathrm{E}}+\log|t-1|}{\pi^{2}} - \frac{\frac{1}{2\pi}-\psi(1)}{2\pi^{2}\psi(1)} - \frac{\frac{1}{2\pi}-\psi(t)}{2\pi^{2}\psi(t)}\\ & \quad+ {\mathcal{O}} \left( \frac{(\log n)^{2}}{n} \right), \\ & \kappa_{2j+1} = {\mathcal{O}} \left( \frac{(\log n)^{2j+1}}{n} \right), \qquad j \geq 1, \\ & \kappa_{2j+2} = \frac{({-}1)^{j+1}}{2^{2j}\pi^{2j+2}} \; (\log G)^{(2j+2)}(1) + {\mathcal{O}} \left( \frac{(\log n)^{2j+2}}{n} \right), \qquad j \geq 1, \end{align*}

and $\frac {N_{n}(\theta )-n\int _{0}^{\theta } {\rm d}\mu _V(e^{i\phi })}{\sqrt {\log n}/\pi }$ converges in distribution to a standard normal random variable.
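In the CUE case $V=0$, the moment generating function $\mathbb {E}[e^{uN_{n}(\theta )}]$ is the Toeplitz determinant with the piecewise-constant symbol equal to $e^{u}$ on the arc $[0,\theta )$ and $1$ elsewhere, whose Fourier coefficients are explicit. The following sketch (ours, not from the paper) checks the mean and the variance asymptotics of corollary 1.6 at $\theta =\pi$, where $\psi =\frac {1}{2\pi }$ and the $\psi$-dependent terms drop out:

```python
import numpy as np

def indicator_symbol_coeffs(u, theta0, kmax):
    """Fourier coefficients of f(e^{i phi}) = e^u on [0, theta0), 1 elsewhere."""
    k = np.arange(-kmax, kmax + 1)
    fk = np.zeros(2*kmax + 1, dtype=complex)
    fk[kmax] = 1 + (np.exp(u) - 1) * theta0/(2*np.pi)            # k = 0 coefficient
    nz = k != 0
    fk[nz] = (np.exp(u) - 1) * (1 - np.exp(-1j*k[nz]*theta0)) / (2j*np.pi*k[nz])
    return fk

def log_mgf(u, theta0, n):
    """log E[e^{u N_n(theta0)}] for the CUE (V = 0) via the Toeplitz determinant."""
    fk = indicator_symbol_coeffs(u, theta0, n)                   # indices shifted by n
    T = np.array([[fk[n + (j - l)] for l in range(n)] for j in range(n)])
    return np.log(np.linalg.det(T).real)

n, theta0, h = 50, np.pi, 0.05
k1 = (log_mgf(h, theta0, n) - log_mgf(-h, theta0, n)) / (2*h)
k2 = (log_mgf(h, theta0, n) - 2*log_mgf(0, theta0, n) + log_mgf(-h, theta0, n)) / h**2

assert abs(k1 - n*theta0/(2*np.pi)) < 1e-3        # E[N_n(pi)] = n/2 by rotation invariance
gammaE = 0.5772156649
pred = (np.log(n) + 1 + gammaE + np.log(2)) / np.pi**2   # log|t-1| = log 2 at theta = pi
assert abs(k2 - pred) < 0.05
```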

Remark 1.7 There are several differences between smooth, $\log$- and counting statistics that are worth pointing out:

  • The variance of the smooth statistics is of order $1$, while the variances of the $\log$- and counting statistics are of order $\log n$.

  • The third and higher order cumulants of the smooth statistics are all ${\mathcal {O}}(n^{-1})$, while for the $\log$-statistics the corresponding cumulants are all of order $1$. On the other hand, the third and higher order cumulants of the counting statistics are as follows: the odd cumulants are $o(1)$, while the even cumulants are of order $1$. This phenomenon for the counting statistics was already noticed in [Reference Smith, Le Doussal, Majumdar and Schehr43, eq (29)] for a class of determinantal point processes.

Another consequence of theorem 1.3 is the following result about the individual fluctuations of the ordered angles. Corollary 1.8 is an analogue for (1.17) of Gustavsson's well-known result [Reference Gustavsson36, Theorem 1.2] for the Gaussian unitary ensemble.

Corollary 1.8 Ordered statistics

Let $\xi _{1}\leq \xi _{2} \leq \ldots \leq \xi _{n}$ denote the ordered angles,

(1.33)\begin{equation} \xi_{1}=\min\{\phi_{1},\ldots,\phi_{n}\}, \quad \xi_{j} = \inf_{\theta\in [0,2\pi)}\{\theta:N_{n}(\theta)=j\}, \quad j=1,\ldots,n, \end{equation}

and let $\eta _{k}$ be the classical location of the $k$-th smallest angle $\xi _{k}$,

(1.34)\begin{equation} \int_{0}^{\eta_{k}}{\rm d}\mu_V(e^{i\phi}) = \frac{k}{n}, \qquad k=1,\ldots,n. \end{equation}

Let $t=e^{i\theta } \in \mathbb {T}$ with $\theta \in (0,2\pi )$. Let $k_{\theta }=[n \int _{0}^{\theta }{\rm d}\mu _V(e^{i\phi })],$ where $[x]:= \lfloor x + \frac {1}{2}\rfloor$ is the closest integer to $x$. As $n \to + \infty,$ $\frac {n\psi (e^{i\eta _{k_{\theta }}})}{\sqrt {\log n}/\pi }(\xi _{k_{\theta }}-\eta _{k_{\theta }})$ converges in distribution to a standard normal random variable.

There has been a lot of progress in recent years towards understanding the global rigidity of various point processes, see e.g. [Reference Arguin, Belius and Bourgade1, Reference Claeys, Fahs, Lambert and Webb20, Reference Erdős, Yau and Yin30]. Our next corollary is a contribution in this direction: it establishes global rigidity upper bounds for (i) the counting statistics of (1.17) and (ii) the ordered statistics of (1.17).

Corollary 1.9 Rigidity

For each $\epsilon >0$ sufficiently small, there exist $c>0$ and $n_{0}>0$ such that

(1.35)\begin{align} & \mathbb{P}\left(\sup_{0 \leq \theta < 2\pi}\Bigg|N_{n}(\theta)- n\int_{0}^{\theta}{\rm d}\mu_V(e^{i\phi}) \Bigg|\leq (1+\epsilon)\frac{1}{\pi}\log n \right) \geq 1-\frac{c}{\log n}, \end{align}
(1.36)\begin{align} & \mathbb{P}\left( \max_{1 \leq k \leq n} \psi(e^{i\eta_{k}})|\xi_{k}-\eta_{k}| \leq (1+\epsilon)\frac{1}{\pi} \frac{\log n}{n} \right) \geq 1-\frac{c}{\log n}, \end{align}

for all $n \geq n_{0}$.

Remark 1.10 It follows from (1.36) that $\lim _{n\to \infty }\mathbb {P}( \max _{1 \leq k \leq n} \psi (e^{i\eta _{k}})|\xi _{k}-\eta _{k}| \leq (1+\epsilon )\frac {1}{\pi }\frac {\log n}{n} ) = 1$. We believe that the upper bound $(1+\epsilon )\frac {1}{\pi }$ is sharp, in the sense that we expect the following to hold true:

(1.37)\begin{equation} \lim_{n\to +\infty}\mathbb{P}\left( (1-\epsilon)\frac{1}{\pi}\frac{\log n}{n} \leq \max_{1 \leq k \leq n} \psi(e^{i\eta_{k}})|\xi_{k}-\eta_{k}| \leq (1+\epsilon)\frac{1}{\pi} \frac{\log n}{n} \right) = 1. \end{equation}

Our belief is supported by the fact that (1.37) was proved in [Reference Arguin, Belius and Bourgade1, Theorem 1.5] for $V=0$, $\psi (e^{i\theta })=\frac {1}{2\pi }$.

2. Differential identity for $D_n$

Our general strategy to prove theorem 1.1 is inspired by the earlier works [Reference Berestycki, Webb and Wong8, Reference Charlier14, Reference Deift, Its and Krasovsky24, Reference Krasovsky40]. The first step consists of establishing a differential identity which expresses derivatives of $\log D_n(\vec \alpha,\vec \beta,V,W)$ in terms of the solution $Y$ to a Riemann–Hilbert (RH) problem (see proposition 2.2). Throughout the paper, $\mathbb {T}$ is oriented in the counterclockwise direction. We first state the RH problem for $Y$.

RH problem for $Y(\cdot ) = Y_n(\cdot ;\vec \alpha,\vec \beta,V,W)$

  1. (a) $Y : {\mathbb {C}} \setminus \mathbb {T} \to \mathbb {C}^{2 \times 2}$ is analytic.

  2. (b) For each $z \in \mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$, the boundary values $\lim _{z' \to z}Y(z')$ from the interior and exterior of $\mathbb {T}$ exist, and are denoted by $Y_+(z)$ and $Y_{-}(z)$ respectively. Furthermore, $Y_+$ and $Y_{-}$ are continuous on $\mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$, and are related by the jump condition

    (2.1)\begin{equation} Y_+(z) = Y_-(z)\begin{pmatrix}1 & z^{{-}n}f(z) \\ 0 & 1\end{pmatrix}, \qquad z \in \mathbb{T}\setminus \{t_{0},\ldots,t_{m}\}, \end{equation}
    where $f$ is given by (1.2).
  3. (c) $Y$ has the following asymptotic behaviour at infinity:

    \[ Y(z) = (1+{\mathcal{O}}(z^{{-}1}))z^{n \sigma_{3}}, \qquad \mbox{as } z \to \infty, \]
    where $\sigma _{3} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$.
  4. (d) As $z \rightarrow t_k$, $k=0, \ldots, m$, $z \in {\mathbb {C}} \setminus \mathbb {T}$,

    \[ Y(z) = \begin{cases} \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(1) + {\mathcal{O}}({|z-t_k|}^{\alpha_{k}}) \\ {\mathcal{O}}(1) & {\mathcal{O}}(1) + {\mathcal{O}}({|z-t_k|}^{\alpha_{k}}) \end{pmatrix}, & \mbox{if } {\rm Re\,} \alpha_k \ne 0, \\ \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(\log{|z-t_k|}) \\ {\mathcal{O}}(1) & {\mathcal{O}}(\log{|z-t_k|}) \end{pmatrix}, & \mbox{if } {\rm Re\,} \alpha_k = 0. \end{cases} \]

Suppose $\{p_k(z) = \kappa _{k} z^{k}+\ldots \}_{k\geq 0}$ and $\{\hat {p}_k(z)=\kappa _{k} z^{k}+\ldots \}_{k\geq 0}$ are two families of polynomials satisfying the orthogonality conditions

(2.2)\begin{equation} \begin{cases} \displaystyle\frac{1}{2\pi}\int_0^{2\pi} p_k(z)z^{{-}j}f(z){\rm d}\theta= \kappa_k^{{-}1}\delta_{jk}, \\ \displaystyle\frac{1}{2\pi}\int_0^{2\pi} \hat{p}_k(z^{{-}1})z^{j}f(z){\rm d}\theta= \kappa_k^{{-}1}\delta_{jk}, \end{cases} \quad z=e^{i\theta}, \quad j= 0,\ldots,k. \end{equation}

Then the function $Y(z)$ defined by

(2.3)\begin{equation} Y(z) = \begin{pmatrix} \kappa_n^{-1}p_n(z) & \displaystyle \kappa_n^{-1}\int_{\mathbb{T}} \frac{p_n(s)f(s)}{2\pi i s^n(s-z)}{\rm d}s \\ -\kappa_{n-1}z^{n-1}\hat{p}_{n-1}(z^{-1}) & \displaystyle -\kappa_{n-1}\int_{\mathbb{T}} \frac{\hat{p}_{n-1}(s^{-1})f(s)}{2\pi i s(s-z)}{\rm d}s \end{pmatrix} \end{equation}

solves the RH problem for $Y$. It was first noticed by Fokas, Its and Kitaev [Reference Fokas, Its and Kitaev33] that orthogonal polynomials can be characterized by RH problems (for a contour on the real line). The above RH problem for $Y$, whose jumps lie on the unit circle, was already considered in e.g. [Reference Baik, Deift and Johansson3, eq. (1.26)] and [Reference Deift, Its and Krasovsky24, eq. (3.1)] for more specific $f$.

The monic orthogonal polynomials $\kappa _{n}^{-1}p_{n}, \kappa _{n}^{-1}\hat {p}_{n}$, and also $Y$, are unique (if they exist). The orthogonal polynomials exist if $f$ is strictly positive almost everywhere on $\mathbb {T}$ (this is the case if $W$ is real-valued, $\alpha _{k}>-1$ and $i\beta _{k} \in (-\frac {1}{2},\frac {1}{2})$). More generally, a sufficient condition to ensure existence of $p_{n}, \hat {p}_{n}$ (and therefore of $Y$) is that $D_{n}^{(n)} \neq 0 \neq D_{n+1}^{(n)}$, where $D_{l}^{(n)} := \det (f_{j-k})_{j,k=0,\ldots,l-1}$, $l\geq 1$ (note that $D_{n}^{(n)}=D_{n}(\vec {\alpha },\vec {\beta },V,W)$), see e.g. [Reference Claeys and Krasovsky21, Section 2.1]. In fact,

(2.4)\begin{equation} p_{k}(z) = \frac{\begin{vmatrix} f_{0} & f_{-1} & \ldots & f_{-k}\\ \vdots & \vdots & \ddots & \vdots\\ f_{k-1} & f_{k-2} & \ldots & f_{-1}\\ 1 & z & \ldots & z^k \end{vmatrix}}{\sqrt{D_k^{(n)}}\sqrt{D_{k+1}^{(n)}}}, \quad \hat{p}_k(z) = \frac{\begin{vmatrix} f_{0} & f_{-1} & \ldots & f_{-k+1} & 1 \\ f_{1} & f_{0} & \ldots & f_{-k+2} & z \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ f_{k} & f_{k-1} & \ldots & f_{1} & z^{k} \end{vmatrix}}{\sqrt{D_k^{(n)}}\sqrt{D_{k+1}^{(n)}}}, \end{equation}

and $\kappa _k= (D_{k}^{(n)})^{1/2}/(D_{k+1}^{(n)})^{1/2}$. (Note that $p_{k}$, $\hat {p}_{k}$ and $\kappa _{k}$ are unique only up to multiplicative factors of $-1$. This can be fixed with a choice of the branch for the above roots. However, since $Y$ only involves $\kappa _{n}^{-1}p_{n}$ and $\kappa _{n-1}\hat {p}_{n-1}$, which are unique, this choice for the branch is unimportant for us.) If $D_{k}^{(n)}\neq 0$ for $k=0,1,\ldots,n+1$, it follows that

(2.5)\begin{equation} D_n(\vec{\alpha},\vec{\beta},V,W) = \prod_{j=0}^{n-1} \kappa_j^{{-}2}. \end{equation}
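The relations (2.4)–(2.5) lend themselves to a quick numerical sanity check. The sketch below is illustrative only: the positive weight $f(e^{i\theta})=e^{\cos\theta}$ and all sizes are arbitrary choices. It computes $D_n$ directly from the Fourier coefficients of the weight and compares it with $\prod_{j=0}^{n-1}\kappa_j^{-2}$, with the $\kappa_j$ obtained by Gram–Schmidt (a QR factorization of a weighted Vandermonde matrix).

```python
import numpy as np

# Sanity check of (2.5): D_n = prod_{j=0}^{n-1} kappa_j^{-2}, here for the
# illustrative positive weight f(e^{i*theta}) = exp(cos(theta)).
N, n = 512, 6                                # quadrature points, determinant size
theta = 2 * np.pi * np.arange(N) / N
fvals = np.exp(np.cos(theta))
z = np.exp(1j * theta)

# Fourier coefficients f_j = (1/2pi) int_0^{2pi} f(e^{it}) e^{-ijt} dt via FFT.
c = np.fft.fft(fvals) / N
T = np.array([[c[(j - k) % N] for k in range(n)] for j in range(n)])
Dn = np.linalg.det(T).real                   # Toeplitz determinant D_n

# Column k of A samples sqrt(w) z^k with w = f/N, so A^* A is the Toeplitz
# moment matrix and the QR factorization performs Gram-Schmidt: the leading
# coefficient of the k-th orthonormal polynomial is kappa_k = 1/|R_{kk}|.
A = np.sqrt(fvals / N)[:, None] * z[:, None] ** np.arange(n)
R = np.linalg.qr(A, mode='r')
kappa = 1.0 / np.abs(np.diag(R))

print(Dn, np.prod(kappa ** -2.0))            # the two values agree
```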

Lemma 2.1 Let $n \in \mathbb {N}$ be fixed, and assume that $D_k^{(n)}(f)\neq 0$, $k = 0,1,\ldots,n+1$. For any $z\ne 0$, we have

(2.6)\begin{equation} [Y^{{-}1}(z)Y'(z)]_{21}z^{{-}n+1} = \sum_{k = 0}^{n-1}\hat{p}_k(z^{{-}1})p_k(z), \end{equation}

where $Y(\cdot ) = Y_n(\cdot ;\vec \alpha,\vec \beta,V,W)$.

Proof. The assumptions imply that $\kappa _k= (D_k^{(n)})^{1/2}/(D_{k+1}^{(n)})^{1/2}$ is finite and nonzero and that $p_k, \hat {p}_k$ exist for all $k \in \{0,\ldots,n\}$. Note that (a) $\det Y: {\mathbb {C}} \setminus {\mathbb {T}} \to {\mathbb {C}}$ is analytic, (b) $(\det Y)_+(z) = (\det Y)_{-}(z)$ for $z \in {\mathbb {T}}\setminus \{t_{0},\ldots,t_{m}\}$, (c) $\det Y(z) = o(|z-t_{k}|^{-1})$ as $z \to t_{k}$ and (d) $\det Y(z) = 1+o(1)$ as $z\to \infty$. Hence, using successively Morera's theorem, Riemann's removable singularities theorem and Liouville's theorem, we conclude that $\det Y \equiv 1$. Using (2.3) and the fact that $\det Y \equiv 1$, we obtain

\begin{align*} \left[Y^{{-}1}(z)Y'(z)\right]_{21}& =\frac{z^n}{\kappa_n}\cdot\frac{\kappa_{n-1}}{z}\hat{p}_{n-1}(z^{{-}1})\frac{d}{dz}p_n(z)\\& \quad -\kappa_n^{{-}1}p_n(z)\frac{d}{dz}\left[z^n\cdot\frac{\kappa_{n-1}}{z}\hat{p}_{n-1}(z^{{-}1})\right]. \end{align*}

Using the recurrence relation (see [Reference Deift, Its and Krasovsky24, Lemma 2.2])

\[ \frac{\kappa_{n-1}}{z}\hat{p}_{n-1}(z^{{-}1})= \kappa_{n}\hat{p}_{n}(z^{{-}1})-\hat{p}_{n}(0)z^{{-}n}p_{n}(z), \]

we then find

\begin{align*} & \left[Y^{{-}1}(z)Y'(z)\right]_{21}\\ & \quad = z^{n-1}\left({-}np_n(z)\hat{p}_n(z^{{-}1})+z\left(\hat{p}_n(z^{{-}1})\frac{d}{dz}p_n(z)-p_n(z)\frac{d}{dz}\hat{p}_n(z^{{-}1})\right)\right). \end{align*}

The claim now directly follows from the Christoffel–Darboux formula [Reference Deift, Its and Krasovsky24, Lemma 2.3].
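For a real, even weight one has $\hat{p}_k = p_k$ with real coefficients, and the Christoffel–Darboux identity invoked above can then be tested numerically. In the sketch below, the weight $e^{\cos\theta}$ and the evaluation point are illustrative choices; the orthonormal polynomials are built from a Cholesky factorization of the Toeplitz moment matrix.

```python
import numpy as np

# Check of the Christoffel-Darboux identity behind lemma 2.1, for the
# illustrative real even weight f = exp(cos(theta)), where hat{p}_k = p_k
# with real coefficients:
#   sum_{j<n} p_j(1/z) p_j(z)
#     = -n p_n(z) p_n(1/z) + z ( p_n(1/z) p_n'(z) + p_n(z) p_n'(1/z)/z^2 ).
N, n = 512, 5
theta = 2 * np.pi * np.arange(N) / N
c = np.real(np.fft.fft(np.exp(np.cos(theta))) / N)       # moments f_j = f_{-j}
T = np.array([[c[abs(j - k)] for k in range(n + 1)] for j in range(n + 1)])
L = np.linalg.cholesky(T)                 # T = L L^T, kappa_k = 1/L_{kk}
C = np.linalg.inv(L)                      # row k of C: coefficients of p_k

def p(k, z):
    return sum(C[k, m] * z ** m for m in range(k + 1))

def dp(k, z):
    return sum(m * C[k, m] * z ** (m - 1) for m in range(1, k + 1))

z = 0.8 + 0.5j                            # any nonzero point works
lhs = sum(p(j, 1 / z) * p(j, z) for j in range(n))
rhs = (-n * p(n, z) * p(n, 1 / z)
       + z * (p(n, 1 / z) * dp(n, z) + p(n, z) * dp(n, 1 / z) / z ** 2))
print(lhs, rhs)                           # the two values agree
```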

Proposition 2.2 Let $n \in \mathbb {N}_{\geq 1}:=\{1,2,\ldots \}$ be fixed and suppose that $f$ depends smoothly on a parameter $\gamma$. If $D_k^{(n)}(f)\neq 0$ for $k = n-1,n,n+1$, then the following differential identity holds

(2.7)\begin{equation} \partial_{\gamma} \log D_n(\vec{\alpha},\vec{\beta},V,W) = \frac{1}{2\pi}\int_{0}^{2\pi}[Y^{{-}1}(z)Y'(z)]_{21}z^{{-}n+1}\partial_{\gamma}f(z){\rm d}\theta, \qquad z=e^{i\theta}. \end{equation}

Remark 2.3 Identity (2.7) will be used (with a particular choice of $\gamma$) in the proof of proposition 4.4 to deform the potential, see (4.8).

Proof. We first prove the claim under the stronger assumption that $D_k^{(n)}(f)\neq 0$ for $k = 0,1,\ldots,n+1$. In this case, $\kappa _k= (D_k^{(n)})^{1/2}/(D_{k+1}^{(n)})^{1/2}$ is finite and nonzero and $p_k, \hat {p}_k$ exist for all $k = 0,1,\ldots,n$. Replacing $z^{-j}$ with $\hat {p}_{j}(z^{-1})\kappa _j^{-1}$ in the first orthogonality condition in (2.2) (with $k=j$), and differentiating with respect to $\gamma$, we obtain, for $j = 0, \ldots, n-1$,

(2.8)\begin{align} -\frac{\partial_\gamma[\kappa_j]}{\kappa_j}& =\frac{\kappa_j}{2\pi}\partial_\gamma\left[\int_0^{2\pi}p_j(z)\hat{p}_j(z^{{-}1})\kappa_j^{{-}1}f(z){\rm d}\theta\right] \nonumber\\ & =\frac{1}{2\pi}\int_0^{2\pi}p_j(z)\hat{p}_j(z^{{-}1})\partial_\gamma[f(z)]{\rm d}\theta+\frac{\kappa_j}{2\pi}\int_0^{2\pi}\partial_\gamma\left[p_j(z)\hat{p}_j(z^{{-}1})\kappa_j^{{-}1}\right]f(z){\rm d}\theta. \end{align}

The second term on the right-hand side can be simplified as follows:

(2.9)\begin{align} & \frac{\kappa_j}{2\pi}\int_0^{2\pi}\partial_\gamma\left[p_j(z)\hat{p}_j(z^{{-}1})\kappa_j^{{-}1}\right]f(z){\rm d}\theta\nonumber\\& \quad = \frac{\kappa_j}{2\pi}\int_0^{2\pi}\partial_\gamma[p_j(z)]\hat{p}_j(z^{{-}1})\kappa_j^{{-}1}f(z){\rm d}\theta = \frac{\partial_\gamma[\kappa_j]}{\kappa_j}, \end{align}

where the first and second equalities use the first and second relations in (2.2), respectively. Combining (2.8) and (2.9), we find

(2.10)\begin{equation} -2\frac{\partial_\gamma[\kappa_j]}{\kappa_j}=\frac{1}{2\pi}\int_0^{2\pi}p_j(z)\hat{p}_j(z^{{-}1})\partial_\gamma[f(z)]{\rm d}\theta.\end{equation}

Taking the log of both sides of (2.5) and differentiating with respect to $\gamma$, we get

(2.11)\begin{align} \partial_{\gamma}\log D_{n}(\vec{\alpha},\vec{\beta},V,W)={-}2\sum_{j=0}^{n-1}\frac{\partial_\gamma[\kappa_j]}{\kappa_j}=\frac{1}{2\pi}\int_0^{2\pi}\left(\sum_{j=0}^{n-1}p_j(z)\hat{p}_j(z^{{-}1})\right)\partial_\gamma[f(z)]{\rm d}\theta.\end{align}

An application of lemma 2.1 completes the proof under the assumption that $D_k^{(n)}(f)\neq 0$, $k = 0,1,\ldots,n+1$. Since the existence of $Y$ only relies on the weaker assumption $D_k^{(n)}(f)\neq 0$, $k = n-1,n,n+1$, the claim follows from a simple continuity argument.
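The identity (2.11) is also easy to test numerically. The following sketch uses the illustrative one-parameter family $f_\gamma(e^{i\theta})=e^{\gamma\cos\theta}$ (real and even, so $\hat{p}_j(z^{-1})=\overline{p_j(z)}$ on the circle) and compares a finite-difference approximation of $\partial_\gamma \log D_n$ with the weighted integral of the Christoffel–Darboux sum.

```python
import numpy as np

# Sanity check of the differential identity (2.11) for the illustrative
# one-parameter family f_gamma(e^{i*theta}) = exp(gamma*cos(theta)).
N, n, gamma, h = 512, 5, 0.7, 1e-5
theta = 2 * np.pi * np.arange(N) / N
z = np.exp(1j * theta)

def log_Dn(g):
    c = np.fft.fft(np.exp(g * np.cos(theta))) / N
    T = np.array([[c[(j - k) % N] for k in range(n)] for j in range(n)])
    return np.log(np.linalg.det(T).real)

lhs = (log_Dn(gamma + h) - log_Dn(gamma - h)) / (2 * h)  # d/dgamma log D_n

# Orthonormal polynomials via QR (Gram-Schmidt); for this real even weight
# hat{p}_j(z^{-1}) = conj(p_j(z)) on |z| = 1, so the Christoffel-Darboux sum
# in (2.11) is sum_j |p_j|^2. Since d_gamma f = cos(theta)*f, the weighted
# integral collapses to a plain sum over the quadrature grid.
w = np.exp(gamma * np.cos(theta)) / N
Q, _ = np.linalg.qr(np.sqrt(w)[:, None] * z[:, None] ** np.arange(n))
rhs = np.sum(np.abs(Q) ** 2 * np.cos(theta)[:, None])

print(lhs, rhs)                                          # the two values agree
```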

3. Steepest descent analysis

In this section, we use the Deift–Zhou [Reference Deift and Zhou28] steepest descent method to obtain large $n$ asymptotics for $Y$.

3.1 Equilibrium measure and $g$-function

The first step of the method is to normalize the RH problem at $\infty$ by means of a so-called $g$-function built in terms of the equilibrium measure (1.7). Recall from (1.3), (1.4) and (1.6) that $U$ is an open annulus containing $\mathbb {T}$ in which $V$, $W$ and $\psi$ are analytic.

Define the function $g:\mathbb {C}\setminus ((-\infty,-1]\cup \mathbb {T} )\to \mathbb {C}$ by

(3.1)\begin{equation} g(z) = \int_{\mathbb{T}} \log_{s} (z-s) \psi(s) \frac{{\rm d}s}{is}, \end{equation}

where for $s = e^{i \theta } \in \mathbb {T}$ and $\theta \in [-\pi,\pi )$, the function $z \mapsto \log _{s} (z-s)$ is analytic in $\mathbb {C}\setminus ((-\infty,-1]\cup \{e^{i \theta '}: -\pi \leq \theta ' \leq \theta \})$ and such that $\log _{s} (2)=\log |2|$.

Lemma 3.1 The function $g$ defined in (3.1) is analytic in $\mathbb {C}\setminus ((-\infty,-1]\cup \mathbb {T} )$, satisfies $g(z) = \log z + {\mathcal {O}}(z^{-1})$ as $z \to \infty$ and possesses the following properties:

(3.2)\begin{align} & g_+(z) + g_{-}(z) = 2 \int_{\mathbb{T}} \log |z-s| \psi(s)\frac{{\rm d}s}{is} + i(\pi + \hat{c} + \arg z), \quad z \in \mathbb{T}, \end{align}
(3.3)\begin{align} & g_+(z) - g_{-}(z) = 2 \pi i \int_{\arg z}^{\pi} \psi(e^{i\theta}){\rm d}\theta, \quad z \in \mathbb{T}, \end{align}
(3.4)\begin{align} & g_+(z) - g_{-}(z) = 2 \pi i,\quad z \in (-\infty,-1), \end{align}

where $\hat {c}=\int _{-\pi }^{\pi } \theta \psi (e^{i\theta }) {\rm d}\theta$ and $\arg z \in (-\pi,\pi )$.

Proof. In the case where the equilibrium measure satisfies the symmetry $\psi (e^{i\theta })=\psi (e^{-i\theta })$, we have $\hat {c}=0$ and in this case (3.2)–(3.4) follow from [Reference Baik, Deift and Johansson3, Lemma 4.2]. In the more general setting of a non-symmetric equilibrium measure, (3.2)–(3.4) can be proved along the same lines as [Reference Baik, Deift and Johansson3, proof of Lemma 4.2] (the main difference is that $F(\pi )=\pi$ in [Reference Baik, Deift and Johansson3, proof of Lemma 4.2] should here be replaced by $F(\pi )=\pi +\hat {c}$).
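For the uniform density $\psi=\frac{1}{2\pi}$, the real part of (3.1) reduces to the logarithmic potential of the unit circle, and the normalization $g(z)=\log z+{\mathcal{O}}(z^{-1})$ can be seen directly from the mean-value property. The following minimal quadrature check is illustrative only.

```python
import numpy as np

# For psi = 1/(2*pi), Re g is the logarithmic potential of the uniform
# measure on the unit circle; by the mean-value property it equals log|z|
# for |z| > 1 and 0 for |z| < 1, consistent with g(z) = log z + O(1/z).
N = 4000
t = 2 * np.pi * (np.arange(N) + 0.5) / N      # midpoint rule
s = np.exp(1j * t)

def pot(z):
    return np.mean(np.log(np.abs(z - s)))

print(pot(1.5 + 0.3j), np.log(abs(1.5 + 0.3j)))   # agree (|z| > 1)
print(pot(0.2 - 0.4j))                            # ~ 0    (|z| < 1)
```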

It follows from (3.3) that

(3.5)\begin{equation} g'_+(z)-g'_-(z)={-}\frac{2\pi}{z}\psi(z), \qquad z\in \mathbb{T}. \end{equation}

Substituting (3.2) into the Euler–Lagrange equality (1.23) and recalling that ${\rm d}\mu _V(s) = \psi (s) \frac {{\rm d}s}{is}$, we get

(3.6)\begin{equation} V(z) = g_+(z) + g_{-}(z) + \ell - \log z - i(\pi+ \hat{c}), \qquad z \in \mathbb{T}, \end{equation}

where the principal branch is taken for the logarithm. Consider the function

(3.7)\begin{equation} \xi(z) = \begin{cases} \displaystyle -i \pi \int_{{-}1}^{z} \psi(s) \frac{{\rm d}s}{is}, & \mbox{if } |z|<1, \; z \in U, \\ \displaystyle i \pi \int_{{-}1}^{z} \psi(s) \frac{{\rm d}s}{is}, & \mbox{if } |z|>1, \; z \in U, \end{cases} \end{equation}

where the contour of integration (except for the starting point $-1$) lies in $U \setminus ((-\infty,0]\cup \mathbb {T})$ and the first part of the contour lies in $\{z : {\rm Im\,} z \geq 0\}$. Since $\psi$ is real-valued on $\mathbb {T}$, we have ${\rm Re\,} \xi (z)=0$ for $z \in \mathbb {T}$. Using the Cauchy–Riemann equations in polar coordinates and the compactness of the unit circle, we verify that there exists an open annulus $U' \subseteq U$ containing $\mathbb {T}$ such that ${\rm Re\,} \xi (z) > 0$ for $z \in U'\setminus \mathbb {T}$. Redefining $U$ if necessary, we can (and do) assume that $U'=U$. Furthermore, for $z = e^{i\theta } \in \mathbb {T}$, $\theta \in (-\pi,\pi )$, we have

(3.8)\begin{align} & \xi_+(z)-\xi_{-}(z) = 2\xi_+(z) ={-}2 \pi i \int_{{-}1}^{z} \psi(s) \frac{{\rm d}s}{is} = 2 \pi i \int_{\theta}^{\pi} \psi(e^{i\theta'}) {\rm d}\theta' = g_+(z)-g_{-}(z), \end{align}
(3.9)\begin{align} & 2\xi_{{\pm}}(z) - 2g_{{\pm}}(z) = \ell - V(z)-\log z-i\pi -i \hat{c}. \end{align}

Analytically continuing $\xi (z)-g(z)$ in (3.9), we obtain

(3.10)\begin{equation} \xi(z) = g(z) + \frac{1}{2}\left(\ell - V(z)-\log z-i\pi -i\hat{c}\right), \quad \mbox{for all } z \in U\setminus \left( (-\infty,0]\cup \mathbb{T} \right). \end{equation}

Note also that

(3.11)\begin{align} & \xi_+(x) - \xi_{-}(x) = \pi i, & & x \in U\cap (-\infty,-1), \end{align}
(3.12)\begin{align} & \xi_+(x) - \xi_{-}(x) ={-}\pi i, & & x \in U\cap ({-}1,0), \end{align}

where $\xi _{\pm }(x) := \lim _{\epsilon \to 0^+}\xi (z\pm i\epsilon )$ for $x \in U\cap ((-\infty,-1)\cup (-1,0))$.

3.2 Transformations $Y\rightarrow T\rightarrow S$

The first transformation $Y\rightarrow T$ is defined by

(3.13)\begin{equation} T(z) = e^{-\frac{n(\pi+\hat{c}) i}{2}\sigma_{3}}e^{\frac{n\ell}{2}\sigma_3}Y(z)e^{{-}ng(z)\sigma_3}e^{-\frac{n\ell}{2}\sigma_3}e^{\frac{n(\pi+\hat{c}) i}{2}\sigma_{3}}. \end{equation}

For $z \in \mathbb {T} \setminus \{t_{0},\ldots,t_{m}\}$, the function $T$ satisfies the jump relation $T_+ = T_{-}J_T$ where the jump matrix $J_T$ is given by

\[ J_T(z) = \begin{pmatrix} e^{{-}n(g_+(z)-g_{-}(z))} & z^{{-}n}e^{{-}n[V(z)-g_+(z)-g_{-}(z)-\ell+i\pi+i\hat{c}]}e^{W(z)}\omega(z) \\ 0 & e^{n(g_+(z)-g_{-}(z))} \end{pmatrix}. \]

Combining the above with (3.4), (3.6) and (3.8), we conclude that $T$ satisfies the following RH problem.

RH problem for $T$

  1. (a) $T: {\mathbb {C}} \setminus \mathbb {T} \rightarrow {\mathbb {C}}^{2\times 2}$ is analytic.

  2. (b) The boundary values $T_+$ and $T_{-}$ are continuous on $\mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$ and are related by

    \[ T_+(z) = T_-(z)\begin{pmatrix} e^{{-}2n\xi_+(z)} & e^{W(z)} \omega(z) \\ 0 & e^{{-}2n\xi_{-}(z)} \end{pmatrix}, \qquad z \in \mathbb{T} \setminus \{t_{0},\ldots,t_{m}\}. \]
  3. (c) As $z \to \infty$, $T(z) = I + {\mathcal {O}}(z^{-1})$.

  4. (d) As $z \rightarrow t_k$, $k = 0, \ldots, m$, $z \in {\mathbb {C}} \setminus \mathbb {T}$,

    \[ T(z) = \begin{cases} \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(1) + {\mathcal{O}}({|z-t_k|}^{\alpha_{k}}) \\ {\mathcal{O}}(1) & {\mathcal{O}}(1) + {\mathcal{O}}({|z-t_k|}^{\alpha_{k}}) \end{pmatrix}, & \mbox{if } {\rm Re\,} \alpha_k \ne 0, \\ \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(\log{|z-t_k|}) \\ {\mathcal{O}}(1) & {\mathcal{O}}(\log{|z-t_k|}) \end{pmatrix}, & \mbox{if } {\rm Re\,} \alpha_k = 0. \end{cases} \]

The jumps of $T$ for $z \in \mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$ can be factorized as

\begin{align*} & \begin{pmatrix} e^{-2n\xi_+(z)} & e^{W(z)} \omega(z) \\ 0 & e^{-2n\xi_{-}(z)} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ e^{-W(z)}\omega(z)^{-1}e^{-2n \xi_{-}(z)} & 1 \end{pmatrix} \\ & \quad\times \begin{pmatrix} 0 & e^{W(z)}\omega(z) \\ -e^{-W(z)}\omega(z)^{-1} & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ e^{-W(z)}\omega(z)^{-1}e^{-2n \xi_+(z)} & 1 \end{pmatrix}. \end{align*}

Before proceeding to the second transformation, we first describe the analytic continuations of the functions appearing in the above factorization. The functions $\omega _{\beta _{k}}$, $k =0,\ldots,m$, have a straightforward analytic continuation from $\mathbb {T} \setminus \{t_{k}\}$ to $\mathbb {C}\setminus \{\lambda t_{k}: \lambda \geq 0\}$, which is given by

(3.14)\begin{align} \omega_{\beta_{k}}(z) & = z^{\beta_{k}}t_{k}^{-\beta_{k}}\nonumber\\& \quad \times \begin{cases} e^{i \pi \beta_{k}}, & 0 \leq \arg_{0} z < \theta_{k}, \\ e^{{-}i \pi \beta_{k}}, & \theta_{k} \leq \arg_{0} z < 2 \pi, \end{cases}\enspace z \in \mathbb{C}\setminus \{\lambda t_{k}: \lambda \geq 0\}, \quad k =0,\ldots,m, \end{align}

where $\arg _{0} z \in [0,2\pi )$, $t_{k}^{-\beta _{k}} := e^{-i\beta _{k} \theta _{k}}$ and $z^{\beta _{k}} := |z|^{\beta _{k}}e^{i\beta _{k} \arg _{0} z}$. For the root-type singularities, we follow [Reference Deift, Its and Krasovsky24] and analytically continue $\omega _{\alpha _{k}}$ from $\mathbb {T}\setminus \{t_{k}\}$ to $\mathbb {C}\setminus \{\lambda t_{k}: \lambda \geq 0\}$ as follows

\begin{align*} \omega_{\alpha_{k}}(z) = \frac{(z-t_{k})^{\alpha_{k}}}{(z t_{k} e^{i \ell_{k}(z)})^{\alpha_{k}/2}} := \frac{e^{\alpha_{k}(\log |z-t_{k}|+i \hat{\arg}_{k}(z-t_{k}))}}{e^{\frac{\alpha_{k}}{2}(\log |z| + i \arg_{0}(z) + i \theta_{k} + i \ell_{k}(z))}},\quad z \in \mathbb{C}\setminus \{\lambda t_{k}: \lambda \geq 0\},\\ k=0,\ldots,m, \end{align*}

where $\hat {\arg }_{k} z \in (\theta _{k},\theta _{k}+2\pi )$, and

\[ \ell_{k}(z) = \begin{cases} 3 \pi, & 0 \leq \arg_{0} z < \theta_{k}, \\ \pi, & \theta_{k} \leq \arg_{0} z < 2 \pi. \end{cases} \]

Now, we open lenses around $\mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$ as shown in figure 1. The part of the lens-shaped contour lying in $\{|z|<1\}$ is denoted $\gamma _+$, and the part lying in $\{|z|>1\}$ is denoted $\gamma _{-}$. We require that $\gamma _+,\gamma _{-} \subset U$. The transformation $T \mapsto S$ is defined by

(3.15)\begin{align} S(z) & = T(z)\nonumber\\ & \quad\times \begin{cases} I, & \mbox{if } z \mbox{ is outside the lenses,} \\ \begin{pmatrix} 1 & 0 \\ e^{-W(z)}\omega(z)^{-1}e^{-2n\xi(z)} & 1 \end{pmatrix}, & \mbox{if } |z| > 1 \mbox{ and inside the lenses,} \\ \begin{pmatrix} 1 & 0 \\ -e^{-W(z)}\omega(z)^{-1}e^{-2n\xi(z)} & 1 \end{pmatrix}, & \mbox{if } |z| < 1 \mbox{ and inside the lenses.} \end{cases} \end{align}

Note from (3.11)–(3.12) that $e^{-2n\xi (z)}$ is analytic in $U \cap ((-\infty,-1)\cup (-1,0))$. It can be verified using the RH problem for $T$ and (3.15) that $S$ satisfies the following RH problem.

Figure 1. The jump contour for $S$ with $m=2$.

RH problem for $S$

  1. (a) $S : {\mathbb {C}} \setminus (\gamma _+ \cup \gamma _{-} \cup \mathbb {T}) \to \mathbb {C}^{2 \times 2}$ is analytic, where $\gamma _+, \gamma _-$ are the contours in figure 1 lying inside and outside $\mathbb {T}$, respectively.

  2. (b) The jumps for $S$ are as follows.

    \begin{align*} & S_+(z) = S_-(z)\begin{pmatrix} 0 & e^{W(z)}\omega(z) \\ -e^{-W(z)}\omega(z)^{-1} & 0 \end{pmatrix}, & & z \in \mathbb{T}\setminus \{t_{0},\ldots,t_{m}\}, \\ & S_+(z) = S_-(z)\begin{pmatrix} 1 & 0 \\ e^{-W(z)}\omega(z)^{-1}e^{-2n\xi(z)} & 1 \end{pmatrix}, & & z \in \gamma_+\cup \gamma_{-}. \end{align*}
  3. (c) As $z \rightarrow \infty$, $S(z) = I + {\mathcal {O}}(z^{-1})$.

  4. (d) As $z \to t_{k}$, $k = 0,\ldots,m$, we have

    \begin{align*} & S(z) = \begin{cases} \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(\log (z-t_{k})) \\ {\mathcal{O}}(1) & {\mathcal{O}}(\log (z-t_{k})) \end{pmatrix}, & \mbox{if } z \mbox{ is outside the lenses}, \\ \begin{pmatrix} {\mathcal{O}}(\log (z-t_{k})) & {\mathcal{O}}(\log (z-t_{k})) \\ {\mathcal{O}}(\log (z-t_{k})) & {\mathcal{O}}(\log (z-t_{k})) \end{pmatrix}, & \mbox{if } z \mbox{ is inside the lenses}, \end{cases} & & \mbox{if } {\rm Re\,} \alpha_{k} = 0, \\ & S(z) = \begin{cases} \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(1) \\ {\mathcal{O}}(1) & {\mathcal{O}}(1) \end{pmatrix}, & \mbox{if } z \mbox{ is outside the lenses}, \\ \begin{pmatrix} {\mathcal{O}}((z-t_{k})^{-\alpha_{k}}) & {\mathcal{O}}(1) \\ {\mathcal{O}}((z-t_{k})^{-\alpha_{k}}) & {\mathcal{O}}(1) \end{pmatrix}, & \mbox{if } z \mbox{ is inside the lenses}, \end{cases} & & \mbox{if } {\rm Re\,} \alpha_{k} > 0, \\ & S(z) = \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}((z-t_{k})^{\alpha_{k}}) \\ {\mathcal{O}}(1) & {\mathcal{O}}((z-t_{k})^{\alpha_{k}}) \end{pmatrix}, & & \mbox{if } {\rm Re\,} \alpha_{k} < 0. \end{align*}

Since $\gamma _+,\gamma _{-} \subset U$ and ${\rm Re\,} \xi (z) > 0$ for $z \in U\setminus \mathbb {T}$ (recall the discussion below (3.7)), the jump matrices $S_{-}(z)^{-1}S_+(z)$ on $\gamma _+\cup \gamma _{-}$ are exponentially close to $I$ as $n \to + \infty$, and this convergence is uniform outside fixed neighbourhoods of $t_{0},\ldots,t_{m}$.

Our next task is to find suitable approximations (called ‘parametrices’) for $S$ in different regions of the complex plane.

3.3 Global parametrix $P^{(\infty )}$

In this subsection, we will construct a global parametrix $P^{(\infty )}$ that is defined as the solution to the following RH problem. We will show in subsection 3.5 below that $P^{(\infty )}$ is a good approximation of $S$ outside fixed neighbourhoods of $t_{0},\ldots,t_{m}$.

RH problem for $P^{(\infty )}$

  1. (a) $P^{(\infty )}:\mathbb {C}\setminus \mathbb {T} \to \mathbb {C}^{2\times 2}$ is analytic.

  2. (b) The jumps are given by

    (3.16)\begin{align} P^{(\infty)}_+(z) = P^{(\infty)}_-(z) \begin{pmatrix} 0 & e^{W(z)}\omega(z) \\ -e^{-W(z)}\omega(z)^{-1} & 0 \end{pmatrix},\nonumber\\ z \in \mathbb{T}\setminus \{t_{0},\ldots,t_{m}\}. \end{align}
  3. (c) As $z \to \infty$, we have $P^{(\infty )}(z) = I + {\mathcal {O}}(z^{-1})$.

  4. (d) As $z \to t_{k}$ from $|z| \lessgtr 1$, $k\in \{0,\ldots,m\}$, we have $P^{(\infty )}(z) = {\mathcal {O}}(1)(z-t_{k})^{-(\frac {\alpha _{k}}{2}\pm \beta _{k})\sigma _{3}}$.

The unique solution to the above RH problem is given by

(3.17)\begin{equation} P^{(\infty)}(z) = \begin{cases} D(z)^{\sigma_3}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, & \mbox{ if } |z| < 1, \\ D(z)^{\sigma_3}, & \mbox{ if } |z| > 1, \end{cases} \end{equation}

where $D(z)$ is the Szegő function defined by

(3.18)\begin{align} & D(z) = D_{W}(z) \prod_{k=0}^{m}D_{\alpha_{k}}(z)D_{\beta_{k}}(z), & & D_{W}(z)=\exp{\left(\frac{1}{2\pi i}\int_{\mathbb{T}} \frac{W(s)}{s-z}{\rm d}s\right)}, \end{align}
(3.19)\begin{align} & D_{\alpha_{k}}(z) = \exp{\left(\frac{1}{2\pi i}\int_{\mathbb{T}} \frac{\log \omega_{\alpha_{k}}(s)}{s-z}{\rm d}s\right)}, & & D_{\beta_{k}}(z) = \exp{\left(\frac{1}{2\pi i}\int_{\mathbb{T}} \frac{\log \omega_{\beta_{k}}(s)}{s-z}{\rm d}s\right)}. \end{align}

The branches of the logarithms in (3.19) can be arbitrarily chosen as long as $\log \omega _{\alpha _k}(s)$ and $\log \omega _{\beta _k}(s)$ are continuous on $\mathbb {T} \setminus \{t_k\}$. The function $D$ is analytic on $\mathbb {C}\setminus \mathbb {T}$ and satisfies the jump condition $D_+(z) = D_-(z)e^{W(z)}\omega (z)$ on $\mathbb {T} \setminus \{t_0, \ldots, t_m\}$. The expressions for $D_{\alpha _{k}}$ and $D_{\beta _{k}}$ can be simplified as in [Reference Deift, Its and Krasovsky24, eqs. (4.9)–(4.10)]; we have

(3.20)\begin{equation} D_{\alpha_{k}}(z) D_{\beta_{k}}(z) = \begin{cases} \displaystyle \left( \frac{z-t_{k}}{t_{k}e^{i\pi}} \right)^{\frac{\alpha_{k}}{2}+\beta_{k}} = \frac{e^{(\frac{\alpha_{k}}{2}+\beta_{k})(\log |z-t_{k}| + i \hat{\arg}_{k}(z-t_{k}))}}{e^{(\frac{\alpha_{k}}{2}+\beta_{k})(i\theta_{k}+i\pi)}}, & \mbox{if } |z|<1, \\ \displaystyle \left( \frac{z-t_{k}}{z} \right)^{-\frac{\alpha_{k}}{2}+\beta_{k}} = \frac{e^{(\beta_{k}-\frac{\alpha_{k}}{2})(\log |z-t_{k}| + i \hat{\arg}_{k}(z-t_{k}))}}{e^{(\beta_{k}-\frac{\alpha_{k}}{2})(\log |z| + i \hat{\arg}_{k} z)}}, & \mbox{if } |z|>1, \end{cases} \end{equation}

where $\hat {\arg }_{k}$ was defined below (3.14). Using (1.4), we can also simplify $D_{W}$ as

(3.21)\begin{equation} D_{W}(z) = \begin{cases} e^{W_{0}+W_+(z)}, & |z|<1, \\ e^{{-}W_{-}(z)}, & |z|>1. \end{cases} \end{equation}
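The simplification (3.21) is easy to test numerically for a concrete $W$. In the sketch below, the choice $W(z)=(z+z^{-1})/2$ is illustrative; for it, $W_0=0$, $W_+(z)=z/2$ and $W_-(z)=1/(2z)$, so (3.21) predicts $D_W(z)=e^{z/2}$ inside the unit circle and $D_W(z)=e^{-1/(2z)}$ outside. The Cauchy transform in (3.18) is evaluated by the trapezoid rule.

```python
import numpy as np

# Check of (3.21) for the illustrative choice W(z) = (z + 1/z)/2, for which
# W_0 = 0, W_+(z) = z/2, W_-(z) = 1/(2z): by (3.21), D_W(z) = exp(z/2)
# inside the unit circle and D_W(z) = exp(-1/(2z)) outside.
N = 2048
t = 2 * np.pi * np.arange(N) / N
s = np.exp(1j * t)
W = (s + 1 / s) / 2

def D_W(z):
    # Cauchy transform in (3.18): (1/2pi i) int_T W(s)/(s-z) ds, ds = i*s*dt,
    # evaluated by the (spectrally accurate) trapezoid rule.
    return np.exp(np.mean(W * s / (s - z)))

zi, zo = 0.4 + 0.2j, 1.6 - 0.5j
print(D_W(zi), np.exp(zi / 2))          # inside:  exp(W_0 + W_+(z))
print(D_W(zo), np.exp(-1 / (2 * zo)))   # outside: exp(-W_-(z))
```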

3.4 Local parametrices $P^{(t_k)}$

In this subsection, we build parametrices $P^{(t_k)}(z)$ in small open disks $\mathcal {D}_{t_k}$ of $t_k$, $k=0,\ldots,m$. The disks $\mathcal {D}_{t_k}$ are taken sufficiently small such that $\mathcal {D}_{t_k} \subset U$ and $\mathcal {D}_{t_k} \cap \mathcal {D}_{t_j} = \emptyset$ for $j \neq k$. Since we assume that the $t_{k}$'s remain bounded away from each other, we can (and do) choose the radii of the disks to be fixed. The parametrices $P^{(t_k)}(z)$ are defined as the solution to the following RH problem. We will show in subsection 3.5 below that $P^{(t_k)}$ is a good approximation for $S$ in $\mathcal {D}_{t_k}$.

RH problem for $P^{(t_{k})}$

  1. (a) $P^{(t_k)}: \mathcal {D}_{t_{k}}\setminus (\mathbb {T} \cup \gamma _+ \cup \gamma _{-}) \to \mathbb {C}^{2\times 2}$ is analytic.

  2. (b) For $z\in (\mathbb {T} \cup \gamma _+ \cup \gamma _{-})\cap \mathcal {D}_{t_{k}}$, $P_{-}^{(t_k)}(z)^{-1}P_+^{(t_k)}(z)=S_{-}(z)^{-1}S_+(z)$.

  3. (c) As $n\to +\infty$, $P^{(t_k)}(z)=(I+{\mathcal {O}}(n^{-1+2|{\rm Re\,} \beta _{k}|}))P^{(\infty )}(z)$ uniformly for $z\in \partial \mathcal {D}_{t_k}$.

  4. (d) As $z\to t_k$, $S(z)P^{(t_k)}(z)^{-1}={\mathcal {O}}(1)$.

A solution to the above RH problem can be constructed using hypergeometric functions as in [Reference Deift, Its and Krasovsky24, Reference Foulquié Moreno, Martinez-Finkelshtein and Sousa35]. Consider the function

\[ f_{t_{k}}(z):= 2\pi i \int_{t_{k}}^{z} \psi(s) \frac{{\rm d}s}{is}, \qquad z \in \mathcal{D}_{t_{k}}, \]

where the path is a straight line segment from $t_{k}$ to $z$. This is a conformal map from $\mathcal {D}_{t_k}$ to a neighbourhood of $0$, which satisfies

(3.22)\begin{equation} f_{t_{k}}(z) = 2\pi t_{k}^{{-}1} \psi(t_{k}) (z-t_{k}) \left( 1+{\mathcal{O}}(z-t_{k}) \right), \qquad \mbox{ as } z \to t_{k}. \end{equation}
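The local behaviour (3.22) can be checked numerically for a sample analytic density. In the sketch below, $\psi(e^{i\theta})=(1+\frac{1}{2}\cos\theta)/(2\pi)$ and $t_k=e^{i\pi/3}$ are illustrative choices; the analytic extension $\psi(s)=(1+\frac{1}{4}(s+s^{-1}))/(2\pi)$ is integrated along the straight segment from $t_k$ to $z$.

```python
import numpy as np

# Check of the expansion (3.22) for the illustrative analytic density
# psi(e^{i*theta}) = (1 + a*cos(theta))/(2*pi) with a = 1/2, extended off the
# circle by psi(s) = (1 + a*(s + 1/s)/2)/(2*pi), at the point t_k = e^{i*pi/3}.
a = 0.5
tk = np.exp(1j * np.pi / 3)

def psi(s):
    return (1 + a * (s + 1 / s) / 2) / (2 * np.pi)

def f_tk(z, M=20000):
    # 2*pi*i int_{t_k}^{z} psi(s) ds/(i*s) along the straight segment,
    # midpoint rule with M panels.
    u = (np.arange(M) + 0.5) / M
    s = tk + u * (z - tk)
    return 2 * np.pi * np.mean(psi(s) / s) * (z - tk)

for eps in [1e-1, 1e-2, 1e-3]:
    z = tk * (1 + eps)
    print(eps, f_tk(z) / (2 * np.pi * psi(tk) / tk * (z - tk)))  # ratio -> 1
```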

If $\mathcal {D}_{t_{k}} \cap (-\infty,0] = \emptyset$, $f_{t_{k}}$ can also be expressed as

\[ f_{t_{k}}(z) ={-} 2 \times \begin{cases} \xi(z)-\xi_+(t_{k}), & |z|<1,\\ - (\xi(z)-\xi_{-}(t_{k})), & |z|>1. \end{cases} \]

If $\mathcal {D}_{t_{k}} \cap (-\infty,0] \neq \emptyset$, then instead we have

(3.23)\begin{align} & f_{t_{k}}(z) ={-} 2 \times \begin{cases} \xi(z)-\xi_+(t_{k}), & |z|<1, \; {\rm Im\,} z >0, \\ \xi(z)-\xi_+(t_{k})-\pi i, & |z|<1, \; {\rm Im\,} z <0,\\ - (\xi(z)-\xi_{-}(t_{k})), & |z|>1, \; {\rm Im\,} z >0,\\ - (\xi(z)-\xi_{-}(t_{k})+\pi i), & |z|>1, \; {\rm Im\,} z <0, \end{cases} \quad \mbox{if } {\rm Im\,} t_{k}>0, \\ & f_{t_{k}}(z) ={-} 2 \times \begin{cases} \xi(z)-\xi_+(t_{k})+\pi i, & |z|<1, \; {\rm Im\,} z >0, \\ \xi(z)-\xi_+(t_{k}), & |z|<1, \; {\rm Im\,} z <0,\\ -(\xi(z)-\xi_{-}(t_{k})-\pi i), & |z|>1, \; {\rm Im\,} z >0,\\ -(\xi(z)-\xi_{-}(t_{k})), & |z|>1, \; {\rm Im\,} z <0, \end{cases} \quad \mbox{if } {\rm Im\,} t_{k}<0. \nonumber \end{align}

If $t_{k}=-1$, (3.23) also holds with $\xi _{\pm }(t_k) := \lim _{\epsilon \to 0^+}\xi _{\pm }(e^{(\pi - \epsilon )i})=0$. We define $\omega _{k}$ and $\widetilde {W}_{k}$ by

\begin{align*} \omega_{k}(z)& = e^{{-}2\pi i \beta_{k}\hat{\theta}(z;k)}z^{\beta_{k}}t_{k}^{-\beta_{k}} \prod_{j\neq k} \omega_{\alpha_{j}}(z)\omega_{\beta_{j}}(z),\\ \widetilde{W}_{k}(z) & = \check{\omega}_{\alpha_{k}}(z)^{\frac{1}{2}} \times \begin{cases} e^{- \frac{i\pi\alpha_{k}}{2}}, & z \in Q_{+,k}^{R}\cup Q_{-,k}^{L}, \\ e^{ \frac{i\pi\alpha_{k}}{2}}, & z \in Q_{-,k}^{R} \cup Q_{+,k}^{L}, \end{cases} \end{align*}

where $\hat {\theta }(z;k)=1$ if ${\rm Im\,} z <0$ and $k=0$ and $\hat {\theta }(z;k)=0$ otherwise, $z^{\beta _{k}} := |z|^{\beta _{k}}e^{i\beta _{k} \arg _{0} z}$,

(3.24)\begin{equation} \check{\omega}_{\alpha_{k}}(z)^{1/2} := \frac{(z-t_{k})^{\frac{\alpha_{k}}{2}}}{(z t_{k} e^{i \ell_{k}(z)})^{\alpha_{k}/4}} := \frac{e^{\frac{\alpha_{k}}{2}(\log |z-t_{k}|+i \check{\arg}_{k}(z-t_{k}))}}{e^{\frac{\alpha_{k}}{4}(\log |z| + i \arg_{0}(z) + i \theta_{k} + i \ell_{k}(z))}}, \end{equation}

and (see figure 2)

\begin{align*} Q_{{\pm},k}^{R}& = \{ z \in \mathcal{D}_{t_{k}}: \mp {\rm Re\,} f_{t_{k}}(z) > 0 \mbox{, } {\rm Im\,} f_{t_{k}}(z) >0 \}, \\ Q_{{\pm},k}^{L}& = \{ z \in \mathcal{D}_{t_{k}}: \mp {\rm Re\,} f_{t_{k}}(z) > 0 \mbox{, } {\rm Im\,} f_{t_{k}}(z) <0 \}. \end{align*}

Figure 2. The four quadrants $Q_{\pm,k}^{R}$, $Q_{\pm,k}^{L}$ near $t_k$ and their images under the map $f_{t_k}$.

The argument $\check {\arg }_{k}(z-t_{k})$ in (3.24) is defined to have a discontinuity for $z \in (\overline {Q_{-,k}^{L}} \cap \overline {Q_{-,k}^{R}}) \cup [z_{\star,k},t_{k}\infty )$, $z_{\star,k}:=\overline {Q_{-,k}^{L}} \cap \overline {Q_{-,k}^{R}}\cap \partial \mathcal {D}_{t_{k}}$, and such that $\check {\arg }_{k}((1-0_+)t_{k}-t_{k})=\theta _{k}+\pi$. Note that $\check {\arg }_{k}(z-t_{k})$ is merely a small deformation of the argument $\hat {\arg }_{k}(z-t_{k})$ defined below (3.14). This small deformation is needed to ensure that $E_{t_{k}}$ in (3.26) below is analytic in $\mathcal {D}_{t_{k}}$.

Note that $\omega _{k}$ is analytic in $\mathcal {D}_{t_{k}}$. We now use the confluent hypergeometric model RH problem, whose solution is denoted $\Phi _{\mathrm {HG}}(z;\alpha _{k},\beta _{k})$ (see appendix B for the definition and properties of $\Phi _{\mathrm {HG}}$). If $k \neq 0$ and $\mathcal {D}_{t_{k}} \cap (-\infty,0] = \emptyset$, we define

(3.25)\begin{equation} P^{(t_{k})}(z) = E_{t_{k}}(z)\Phi_{\mathrm{HG}}(nf_{t_{k}}(z);\alpha_{k},\beta_{k})\widetilde{W}_{k}(z)^{-\sigma_{3}}e^{{-}n\xi(z)\sigma_{3}}e^{-\frac{W(z)}{2}\sigma_{3}}\omega_{k}(z)^{-\frac{\sigma_{3}}{2}}, \end{equation}

where $E_{t_{k}}$ is given by

(3.26)\begin{align} E_{t_{k}}(z) & = P^{(\infty)}(z) \omega_{k}(z)^{\frac{\sigma_{3}}{2}}e^{\frac{W(z)}{2}\sigma_{3}} \widetilde{W}_{k}(z)^{\sigma_{3}}\nonumber\\ & \quad \times \left\{ \begin{array}{ll} e^{\frac{i\pi\alpha_{k}}{4}\sigma_{3}}e^{-i\pi\beta_{k} \sigma_{3}}, & z \in Q_{+,k}^{R}, \\ e^{-\frac{i\pi\alpha_{k}}{4}\sigma_{3}}e^{-i\pi\beta_{k}\sigma_{3}}, & z \in Q_{+,k}^{L}, \\ e^{\frac{i\pi\alpha_{k}}{4}\sigma_{3}}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, & z \in Q_{-,k}^{L}, \\ e^{-\frac{i\pi\alpha_{k}}{4}\sigma_{3}}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, & z \in Q_{-,k}^{R}, \end{array}\right\} \; e^{n\xi_+(t_{k})\sigma_{3}} (nf_{t_{k}}(z))^{\beta_{k}\sigma_{3}}. \end{align}

Here the branch of $f_{t_k}(z)^{\beta _k}$ is such that $f_{t_k}(z)^{\beta _k} = |f_{t_{k}}(z)|^{\beta _k} e^{\beta _k i\arg f_{t_{k}}(z)}$ with $\arg f_{t_k}(z) \in (-\frac {\pi }{2}, \frac {3\pi }{2})$, and the branch for the square root of $\omega _{k}(z)$ can be chosen arbitrarily as long as $\omega _{k}(z)^{1/2}$ is analytic in $\mathcal {D}_{t_{k}}$ (note that $P^{(t_{k})}(z)$ is invariant under a sign change of $\omega _{k}(z)^{1/2}$). If $k \neq 0$, $\mathcal {D}_{t_{k}} \cap (-\infty,0] \neq \emptyset$ and ${\rm Im\,} t_{k} \geq 0$ (resp. ${\rm Im\,} t_{k} < 0$), then we define $P^{(t_{k})}(z)$ as in (3.25) but with $\xi (z)$ replaced by $\xi (z)+\pi i \theta _{-}(z)$ (resp. $\xi (z)+\pi i \theta _+(z)$), where

\[ \theta_{-}(z):= \begin{cases} 1, & \mbox{if } {\rm Im\,} z <0, \; |z|>1, \\ - 1, & \mbox{if } {\rm Im\,} z <0, \; |z|<1, \\ 0, & \mbox{otherwise,} \end{cases} \quad \theta_+(z):= \begin{cases} -1, & \mbox{if } {\rm Im\,} z >0, \; |z|>1, \\ 1, & \mbox{if } {\rm Im\,} z >0, \; |z|<1, \\ 0, & \mbox{otherwise.} \end{cases} \]

Using the definition of $\widetilde {W}_{k}$ and the jumps (3.16) of $P^{(\infty )}$, we verify that $E_{t_{k}}$ has no jumps in $\mathcal {D}_{t_{k}}$. Moreover, since $P^{(\infty )}(z) = {\mathcal {O}}(1)(z-t_{k})^{-(\frac {\alpha _{k}}{2}\pm \beta _{k})\sigma _{3}}$ as $z\to t_{k}$, $\pm (1-|z|) > 0$, we infer from (3.26) that $E_{t_{k}}(z) = {\mathcal {O}}(1)$ as $z \to t_{k}$, and therefore $E_{t_{k}}$ is analytic in $\mathcal {D}_{t_{k}}$. Using (3.26), we see that $E_{t_{k}}(z) = {\mathcal {O}}(1)n^{\beta _{k}\sigma _{3}}$ as $n \to \infty$, uniformly for $z \in \mathcal {D}_{t_{k}}$. Since $P^{(t_{k})}$ and $S$ have the same jumps on $(\mathbb {T}\cup \gamma _+\cup \gamma _{-})\cap \mathcal {D}_{t_{k}}$, $S(z)P^{(t_{k})}(z)^{-1}$ is analytic in $\mathcal {D}_{t_{k}} \setminus \{t_{k}\}$. Furthermore, by (B.5) and condition (d) in the RH problem for $S$, as $z \to t_{k}$ from outside the lenses we have that $S(z)P^{(t_{k})}(z)^{-1}$ is ${\mathcal {O}}(\log (z-t_{k}))$ if ${\rm Re\,} \alpha _{k} = 0$, is ${\mathcal {O}}(1)$ if ${\rm Re\,} \alpha _{k} > 0$, and is ${\mathcal {O}}((z-t_{k})^{\alpha _{k}})$ if ${\rm Re\,} \alpha _{k} < 0$. In all cases, the singularity of $S(z)P^{(t_{k})}(z)^{-1}$ at $z = t_{k}$ is removable and therefore $P^{(t_{k})}$ in (3.25) satisfies condition (d) of the RH problem for $P^{(t_{k})}$.

The value of $E_{t_{k}}(t_{k})$ can be obtained by taking the limit $z \to t_{k}$ in (3.26) (e.g. from the quadrant $Q_{+,k}^{R}$). Using (3.17), (3.18), (3.20), (3.22) and (3.26), we obtain

(3.27)\begin{equation} E_{t_{k}}(t_{k}) = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \Lambda_{k}^{\sigma_{3}}, \end{equation}

where

(3.28)\begin{align} \Lambda_{k} & = e^{\frac{W(t_{k})}{2}}D_{W,+}(t_{k})^{{-}1}\nonumber\\& \quad \times\Bigg[\prod_{j \neq k}D_{\alpha_{j},+}(t_{k})^{{-}1}D_{\beta_{j},+}(t_{k})^{{-}1} \omega_{\alpha_{j}}^{\frac{1}{2}}(t_{k})\omega_{\beta_{j}}^{\frac{1}{2}}(t_{k})\Bigg](2\pi \psi(t_{k})n)^{\beta_{k}}e^{n \xi_+(t_{k})}. \end{align}

In (3.28), the branch of $\omega _{\alpha _{j}}^{\frac {1}{2}}(t_{k})$ is as in (3.24) and $\omega _{\beta _{j}}^{\frac {1}{2}}(t_{k})$ is defined by

\begin{align*} \omega_{\beta_{j}}^{\frac{1}{2}}(t_k) := e^{i\frac{\beta_j}{2}(\theta_{k} - \theta_{j})} \times \begin{cases} e^{\frac{i \pi}{2} \beta_{j}}, & \mbox{if } 0 \leq \theta_{k} < \theta_{j}, \\ e^{-\frac{i \pi}{2} \beta_{j}}, & \mbox{if } \theta_{j} \leq \theta_{k} < 2 \pi. \end{cases} \end{align*}

The expression for $\Lambda _k$ can be further simplified as follows. A simple computation shows that

\begin{align*} D_{\alpha_{j},+}(z) & = |z-t_{j}|^{\frac{\alpha_{j}}{2}}\exp \left( \frac{i \alpha_{j}}{2}\left[ \hat{\arg}_{j}(z-t_{j}) - \theta_{j} - \pi \right] \right) \\ & = |z-t_{j}|^{\frac{\alpha_{j}}{2}}\exp \left( \frac{i \alpha_{j}}{2}\left[ \frac{\ell_{j}(z)}{2}+ \frac{\arg_{0} z - \theta_{j}}{2} - \pi \right] \right), \\ D_{\beta_{j},+}(z) & = |z-t_{j}|^{\beta_{j}}\exp \left( i \beta_{j} \left[ \hat{\arg}_{j}(z-t_{j}) - \theta_{j} - \pi \right] \right) \\ & = |z-t_{j}|^{\beta_{j}}\exp \left( i \beta_{j}\left[ \frac{\ell_{j}(z)}{2}+ \frac{\arg_{0} z - \theta_{j}}{2} - \pi \right] \right) \end{align*}

for $z \in \mathbb {T}$. Therefore, the product in brackets in (3.28) can be rewritten as

\[ \prod_{j \neq k} |t_{k}-t_{j}|^{-\beta_{j}}\exp \left( - \frac{i\alpha_{j}}{2} \frac{\theta_{k} - \theta_{j}}{2} \right)\prod_{j=0}^{k-1} \exp \left( \frac{\pi i \alpha_{j}}{4} \right) \prod_{j=k+1}^{m} \exp \left( -\frac{\pi i \alpha_{j}}{4} \right), \]

and thus

\[ \Lambda_{k} = e^{\frac{W(t_{k})}{2}}D_{W,+}(t_{k})^{{-}1}e^{\frac{i \lambda_{k}}{2}}(2\pi \psi(t_{k})n)^{\beta_{k}}\prod_{j \neq k} |t_{k}-t_{j}|^{-\beta_{j}}, \]

where

(3.29)\begin{equation} \lambda_{k} = \sum_{j=0}^{k-1} \frac{\pi \alpha_{j}}{2} - \sum_{j=k+1}^{m} \frac{\pi \alpha_{j}}{2} - \sum_{j \neq k} \frac{\alpha_{j}(\theta_{k}-\theta_{j})}{2} + 2\pi n \int_{t_{k}}^{{-}1} \psi(s)\frac{{\rm d}s}{is}.\end{equation}

Using (3.25) and (B.2), we obtain

(3.30)\begin{align} & P^{(t_{k})}(z)P^{(\infty)}(z)^{-1}\nonumber\\ & \quad = I + \frac{\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4}}{n f_{t_{k}}(z)} E_{t_{k}}(z) \begin{pmatrix} -1 & \tau(\alpha_{k},\beta_{k}) \\ -\tau(\alpha_{k},-\beta_{k}) & 1 \end{pmatrix}E_{t_{k}}(z)^{-1}\nonumber\\ & \qquad + {\mathcal{O}} (n^{-2+2|{\rm Re\,} \beta_{k}|}), \end{align}

as $n \to \infty$ uniformly for $z \in \partial \mathcal {D}_{t_{k}}$, where $\tau (\alpha _{k},\beta _{k})$ is defined in (B.3).

3.5 Small norm RH problem

We consider the function $R$ defined by

(3.31)\begin{equation} R(z) = \begin{cases} S(z)P^{(\infty)}(z)^{{-}1}, & z \in {\mathbb{C}} \setminus (\cup_{k=0}^{m}\overline{\mathcal{D}_{t_k}} \cup \mathbb{T} \cup \gamma_+{\cup} \gamma_{-}), \\ S(z)P^{(t_{k})}(z)^{{-}1}, & z \in \mathcal{D}_{t_{k}}\setminus (\mathbb{T} \cup \gamma_+{\cup} \gamma_{-}), \; k=0,\ldots,m. \end{cases} \end{equation}

We have shown in the previous section that $P^{(t_{k})}$ and $S$ have the same jumps on $\mathbb {T} \cup \gamma _+ \cup \gamma _{-}$ and that $S(z)P^{(t_{k})}(z)^{-1}={\mathcal {O}}(1)$ as $z\to t_{k}$. Hence, $R$ is analytic in $\cup _{k=0}^m\mathcal {D}_{t_k}$. Using also the RH problems for $S$, $P^{(\infty )}$ and $P^{(t_{k})}$, we conclude that $R$ satisfies the following RH problem.

RH problem for $R$

(a) $R : {\mathbb {C}} \setminus \Gamma _{R} \to \mathbb {C}^{2 \times 2}$ is analytic, where $\Gamma _{R} = \cup _{k=0}^{m} \partial \mathcal {D}_{t_{k}} \cup ((\gamma _+ \cup \gamma _{-}) \setminus \cup _{k=0}^{m} \mathcal {D}_{t_{k}})$ and the circles $\partial \mathcal {D}_{t_{k}}$ are oriented in the clockwise direction.

(b) The jumps are given by

    \begin{align*} & R_+(z) = R_{-}(z) P^{(\infty)}(z)& \\ & \quad \times\!\begin{pmatrix} 1 & 0 \\ e^{{-}W(z)}\omega(z)^{{-}1}e^{{-}2n\xi(z)} & 1 \end{pmatrix} P^{(\infty)}(z)^{{-}1}, & \!\!\!z \in (\gamma_+{\cup} \gamma_{-}) {\setminus} \cup_{k=0}^{m} \overline{\mathcal{D}_{t_{k}}},\\ & R_+(z) = R_{-}(z) P^{(t_{k})}(z)P^{(\infty)}(z)^{{-}1}, & z \in \partial \mathcal{D}_{t_{k}}, \, k=0,\ldots,m. \end{align*}
(c) As $z \to \infty$, $R(z) = I + {\mathcal {O}}(z^{-1})$.

(d) As $z \to z^{*}\in \Gamma _{R}^{*}$, where $\Gamma _{R}^{*}$ is the set of self-intersecting points of $\Gamma _{R}$, we have $R(z) = {\mathcal {O}}(1)$.

Recall that ${\rm Re\,} \xi (z) \geq c > 0$ for $z \in (\gamma _+ \cup \gamma _{-}) \setminus \cup _{k=0}^{m} \mathcal {D}_{t_{k}}$. Moreover, we see from (3.17) that $P^{(\infty )}(z)$ is bounded for $z$ away from the points $t_{0},\ldots,t_{m}$. Using also (3.30), we conclude that as $n \to + \infty$

(3.32)\begin{align} & J_{R}(z) = I + {\mathcal{O}}(e^{{-}cn}), \quad\mbox{uniformly for } z \in (\gamma_+{\cup} \gamma_{-}) \setminus \cup_{k=0}^{m} \overline{\mathcal{D}_{t_{k}}}, \end{align}
(3.33)\begin{align} & J_{R}(z) = I + J_{R}^{(1)}(z)n^{{-}1} + {\mathcal{O}}(n^{{-}2+2\beta_{\max}}), \quad\mbox{uniformly for } z \in \cup_{k=0}^{m} \partial \mathcal{D}_{t_{k}}, \end{align}

where $J_{R}(z):=R_{-}^{-1}(z)R_+(z)$ and

\[ J_{R}^{(1)}(z) = \frac{\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4}}{f_{t_{k}}(z)} E_{t_{k}}(z) \begin{pmatrix} -1 & \tau(\alpha_{k},\beta_{k}) \\ -\tau(\alpha_{k},-\beta_{k}) & 1 \end{pmatrix}E_{t_{k}}(z)^{-1}, \quad z \in \partial\mathcal{D}_{t_{k}}. \]

Furthermore, it is easy to see that the ${\mathcal {O}}$-terms in (3.32)–(3.33) are uniform for $(\theta _{1},\ldots,\theta _{m})$ in any given compact subset $\Theta \subset (0,2\pi )_{\mathrm {ord}}^{m}$, for $\alpha _{0},\ldots,\alpha _{m}$ in any given compact subset $\mathfrak {A}\subset \{z \in \mathbb {C}: {\rm Re\,} z >-1\}$, and for $\beta _{0},\ldots,\beta _{m}$ in any given compact subset $\mathfrak {B}\subset \{z \in \mathbb {C}: {\rm Re\,} z \in (-\frac {1}{2},\frac {1}{2})\}$. Therefore, $R$ satisfies a small norm RH problem, and the existence of $R$ for all sufficiently large $n$ can be proved using standard theory [Reference Deift, Kriecherbauer, McLaughlin, Venakides and Zhou27, Reference Deift and Zhou28] as follows. Define the operator $\mathcal {C}:L^{2}(\Gamma _{R})\to L^{2}(\Gamma _{R})$ by $\mathcal {C}f(z) = \frac {1}{2\pi i}\int _{\Gamma _{R}}\frac {f(s)}{s-z}{\rm d}s$, and let $\mathcal {C}_+f$ and $\mathcal {C}_{-}f$ denote the left and right non-tangential limits of $\mathcal {C}f$. Since $\Gamma _{R}$ is a compact set, by (3.32)–(3.33) we have $J_{R}-I \in L^{2}(\Gamma _{R})\cap L^{\infty }(\Gamma _{R})$, and we can define

\[ \mathcal{C}_{J_{R}}: L^{2}(\Gamma_{R})+L^{\infty}(\Gamma_{R}) \to L^{2}(\Gamma_{R}), \qquad \mathcal{C}_{J_{R}}f=\mathcal{C}_{-}(f(J_{R}-I)), \qquad f \in L^{2}(\Gamma_{R})+L^{\infty}(\Gamma_{R}). \]

Using $\| \mathcal {C}_{J_{R}} \|_{L^{2}(\Gamma _{R}) \to L^{2}(\Gamma _{R})} \leq C \|J_{R}-I\|_{L^{\infty }(\Gamma _{R})}$ and (3.32)–(3.33), we infer that there exists $n_{0}=n_{0}(\Theta,\mathfrak {A},\mathfrak {B})$ such that $\| \mathcal {C}_{J_{R}} \|_{L^{2}(\Gamma _{R}) \to L^{2}(\Gamma _{R})} <1$ for all $n \geq n_{0}$, all $(\theta _{1},\ldots,\theta _{m})\in \Theta$, all $\alpha _{0},\ldots,\alpha _{m} \in \mathfrak {A}$ and all $\beta _{0},\ldots,\beta _{m}\in \mathfrak {B}$. Hence, for $n \geq n_{0}$, $I-\mathcal {C}_{J_{R}}:L^{2}(\Gamma _{R}) \to L^{2}(\Gamma _{R})$ can be inverted as a Neumann series and thus $R$ exists and is given by

(3.34)\begin{equation} R=I+\mathcal{C}(\mu_{R}(J_{R}-I)), \qquad \mbox{where } \quad \mu_{R}:= I + (I-\mathcal{C}_{J_{R}})^{{-}1}\mathcal{C}_{J_{R}}(I). \end{equation}

Using (3.34), (3.32) and (3.33), we obtain

(3.35)\begin{equation} R(z) = I+R^{(1)}(z)n^{{-}1} + {\mathcal{O}}(n^{{-}2+2\beta_{\max}}), \qquad \mbox{as } n \to + \infty, \end{equation}

uniformly for $(\theta _{1},\ldots,\theta _{m})\in \Theta$, $\alpha _{0},\ldots,\alpha _{m} \in \mathfrak {A}$ and $\beta _{0},\ldots,\beta _{m}\in \mathfrak {B}$, where $R^{(1)}$ is given by

\[ R^{(1)}(z) = \sum_{k=0}^{m}\frac{1}{2\pi i} \int_{\partial \mathcal{D}_{t_{k}}} \frac{J_{R}^{(1)}(s)}{s-z}{\rm d}s. \]

Since the jumps $J_{R}$ are analytic in a neighbourhood of $\Gamma _{R}$, expansion (3.35) holds uniformly for $z \in \mathbb {C}\setminus \Gamma _{R}$. It also follows from (3.34) that (3.35) can be differentiated with respect to $z$ without increasing the error term. For $z \in \mathbb {C}\setminus \cup _{k=0}^{m} \mathcal {D}_{t_{k}}$, a residue calculation using (3.22), (3.27) and (3.30) shows that (recall that $\partial \mathcal {D}_{t_{k}}$ is oriented in the clockwise direction)

(3.36)\begin{equation} R^{(1)}(z) = \sum_{k=0}^{m} \frac{1}{z-t_{k}} \frac{(\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4})t_{k}}{2\pi \psi(t_{k})} \begin{pmatrix} 1 & \Lambda_{k}^{-2} \tau(\alpha_{k},-\beta_{k}) \\ -\Lambda_{k}^{2} \tau(\alpha_{k},\beta_{k}) & -1 \end{pmatrix}. \end{equation}
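For the record, here is a sketch of the residue computation behind (3.36), assuming (as the matching condition (3.30) suggests) that $f_{t_{k}}$ vanishes to first order at $t_{k}$, so that $J_{R}^{(1)}$ has a simple pole there. For $z$ outside $\mathcal {D}_{t_{k}}$, the clockwise orientation of $\partial \mathcal {D}_{t_{k}}$ flips the usual sign in the residue theorem:

\[ \frac{1}{2\pi i}\int_{\partial \mathcal{D}_{t_{k}}} \frac{J_{R}^{(1)}(s)}{s-z}\,{\rm d}s ={-}\operatorname*{Res}_{s=t_{k}} \frac{J_{R}^{(1)}(s)}{s-z} = \frac{1}{z-t_{k}} \operatorname*{Res}_{s=t_{k}} J_{R}^{(1)}(s). \]

Evaluating the residue of $f_{t_{k}}(s)^{-1}$ via (3.22) and conjugating the constant matrix by $E_{t_{k}}(t_{k})$ from (3.27) then yields the summand in (3.36).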

Remark 3.2 Above, we have discussed the uniformity of (3.32)–(3.33) and (3.35) in the parameters $\theta _{k},\alpha _{k},\beta _{k}$. In § 4, we will also need the following fact, which can be proved via a direct analysis (we omit the details here, see e.g. [Reference Berestycki, Webb and Wong8, Lemma 4.35] for a similar situation): If $V$ is replaced by $sV$, then (3.32)–(3.33) and (3.35) also hold uniformly for $s \in [0,1]$.

Remark 3.3 If $k_{0},\ldots,k_{2m+1}\in \mathbb {N}$, $k_{0}+\ldots +k_{2m+1}\geq 1$ and $\partial ^{\vec {k}}:=\partial _{\alpha _{0}}^{k_{0}}\ldots \partial _{\alpha _{m}}^{k_{m}}\partial _{\beta _{0}}^{k_{m+1}}\ldots \partial _{\beta _{m}}^{k_{2m+1}}$, then by (3.17) we have

\[ \partial^{\vec{k}}J_{R}(z) = {\mathcal{O}}(e^{{-}cn}), \quad \mbox{uniformly for } z \in (\gamma_+{\cup} \gamma_{-}) \setminus \cup_{k=0}^{m} \overline{\mathcal{D}_{t_{k}}}, \]

and by the same type of arguments that led to (3.30) we have

\begin{align*} \partial^{\vec{k}}J_{R}(z) = \partial^{\vec{k}}(J_{R}^{(1)}(z))n^{{-}1} + {\mathcal{O}}\left(\frac{(\log n)^{k_{m+1}+\ldots+k_{2m+1}}}{n^{2-2\beta_{\max}}}\right),\\ \mbox{uniformly for } z \in \cup_{k=0}^{m} \partial \mathcal{D}_{t_{k}}. \end{align*}

It follows that

\[ \partial^{\vec{k}}R(z) = \partial^{\vec{k}}(R^{(1)}(z))n^{{-}1} + {\mathcal{O}}\left(\frac{(\log n)^{k_{m+1}+\ldots+k_{2m+1}}}{n^{2-2\beta_{\max}}}\right), \qquad \mbox{as } n \to + \infty. \]

If $W$ is replaced by $tW$, $t \in [0,1]$, then the asymptotics (3.32), (3.33) and (3.35) are uniform with respect to $t$ and can also be differentiated any number of times with respect to $t$ without worsening the error term.

4. Integration in $V$

Our strategy is inspired by [Reference Berestycki, Webb and Wong8] and considers a linear deformation of the potential (in [Reference Berestycki, Webb and Wong8] the authors study Hankel determinants related to point processes on the real line; see also [Reference Charlier14, Reference Charlier, Fahs, Webb and Wong18, Reference Charlier and Gharakhloo19] for subsequent works using similar deformation techniques). Consider the potential $\hat {V}_{s}:=sV$, where $s \in [0,1]$. It is straightforward to verify that

(4.1)\begin{equation} 2 \int_{0}^{2\pi} \log |z-e^{i\theta}| {\rm d}\mu_{\hat{V}_{0}}(e^{i\theta}) = \hat{V}_{0}(z) - \ell_{0}, \mbox{ for } z \in \mathbb{T}, \end{equation}

with ${\rm d}\mu _{\hat {V}_{0}}(e^{i\theta }):=\frac {1}{2\pi }{\rm d}\theta$ and $\ell _{0}=0$. Using a linear combination of (4.1) and (1.23) (writing $\hat {V}_{s}=(1-s)\hat {V}_{0}+sV$), we infer that

(4.2)\begin{equation} 2 \int_{0}^{2\pi} \log |z-e^{i\theta}| {\rm d}\mu_{\hat{V}_{s}}(e^{i\theta}) = \hat{V}_{s}(z) - \ell_{s}, \mbox{ for } z \in \mathbb{T},\end{equation}

holds for each $s \in [0,1]$ with $\ell _{s}:=s \ell$ and ${\rm d}\mu _{\hat {V}_{s}}(e^{i\theta })=\psi _{s}(e^{i\theta }){\rm d}\theta$, $\psi _{s}(e^{i\theta }):=\frac {1-s}{2\pi }+s\psi (e^{i\theta })$. In particular, this shows that $\psi _{s}(e^{i\theta })>0$ for all $s \in [0,1]$ and all $\theta \in [0,2\pi )$. Hence, we can (and will) use the analysis of § 3 with $V$ replaced by $\hat {V}_{s}$.
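In more detail, (4.2) follows by linearity, assuming (1.23) is the Euler–Lagrange equality $2 \int_{0}^{2\pi} \log |z-e^{i\theta}| {\rm d}\mu _{V}(e^{i\theta }) = V(z) - \ell$ for $z \in \mathbb {T}$: adding $1-s$ times (4.1) to $s$ times (1.23) gives

\[ 2 \int_{0}^{2\pi} \log |z-e^{i\theta}|\,\big[(1-s)\,{\rm d}\mu_{\hat{V}_{0}} + s\,{\rm d}\mu_{V}\big](e^{i\theta}) = (1-s)\hat{V}_{0}(z) + s\big(V(z)-\ell\big) = \hat{V}_{s}(z) - s\ell, \]

so that ${\rm d}\mu _{\hat {V}_{s}} = (1-s)\,{\rm d}\mu _{\hat {V}_{0}} + s\,{\rm d}\mu _{V}$ satisfies (4.2) with $\ell _{s}=s\ell$.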

We first recall the following result, which will be used for our proof.

Theorem 4.1 Taken from [Reference Deift, Its and Krasovsky24, Reference Ehrhardt29]

Let $m \in \mathbb {N}$, and let $t_{k}=e^{i\theta _{k}}$, $\alpha _{k}$ and $\beta _{k}$ be such that

\begin{align*} 0=\theta_{0} < \theta_{1} < \ldots < \theta_{m} < 2\pi, \quad \mbox{ and } \quad {\rm Re\,} \alpha_{k} >{-}1, \quad {\rm Re\,} \beta_{k} \in (-\tfrac{1}{2},\tfrac{1}{2}) \\ \mbox{ for } k=0,\ldots,m. \end{align*}

Let $W: \mathbb {T}\to \mathbb {R}$ be analytic, and define $W_+$ and $W_{-}$ as in (1.4). As $n \to +\infty$, we have

(4.3)\begin{equation} D_n(\vec\alpha,\vec\beta,0,W)= \exp \left( D_{2}n + D_{3} \log n + D_{4} + {\mathcal{O}}\left( \frac{1}{n^{1- 2\beta_{\max}}} \right)\right), \end{equation}

where

\begin{align*} D_{2} & = W_{0}, \\ D_{3} & =\sum_{k=0}^{m} \left(\frac{\alpha_{k}^{2}}{4}-\beta_{k}^{2}\right), \\ D_{4} & =\sum_{\ell = 1}^{+\infty} \ell W_{\ell}W_{-\ell} + \sum_{k=0}^{m} \left( \beta_{k}-\frac{\alpha_{k}}{2} \right) W_+(t_{k}) - \sum_{k=0}^{m} \left( \beta_{k}+\frac{\alpha_{k}}{2} \right) W_{-}(t_{k}) \\ & \quad+ \sum_{0 \leq j < k \leq m} \Bigg\{ \frac{\alpha_{j} i \beta_{k} - \alpha_{k} i \beta_{j}}{2}(\theta_{k}-\theta_{j }-\pi) + \left( 2\beta_{j}\beta_{k}-\frac{\alpha_{j}\alpha_{k}}{2} \right) \log |t_{j}-t_{k}| \Bigg\} \\ & \quad+ \sum_{k=0}^{m} \log \frac{G(1+\frac{\alpha_{k}}{2}+\beta_{k})G(1+\frac{\alpha_{k}}{2}-\beta_{k})}{G(1+\alpha_{k})}, \end{align*}

where $G$ is Barnes’ $G$-function. Furthermore, the above asymptotics are uniform for all $\alpha _{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Re\,} z >-1\}$, for all $\beta _{k}$ in compact subsets of $\{z \in \mathbb {C}: {\rm Re\,} z \in (-\frac {1}{2},\frac {1}{2})\}$ and for all $(\theta _{1},\ldots,\theta _{m})$ in compact subsets of $(0,2\pi )_{\mathrm {ord}}^{m}$.

Remark 4.2 The above theorem, but with the ${\mathcal {O}}$-term replaced by $o(1)$, was proved by Ehrhardt in [Reference Ehrhardt29]. The stronger estimate ${\mathcal {O}}( n^{-1+ 2\beta _{\max }} )$ was obtained in [Reference Deift, Its and Krasovsky26, Remark 1.4]. (In fact the results [Reference Deift, Its and Krasovsky26, Reference Ehrhardt29] are valid for more general values of the $\beta _{k}$'s, but this will not be needed for us.)

Lemma 4.3 For $z\in \mathbb {T}$, we have

(4.4)\begin{align} \frac{1}{i\pi}{\int\hskip -1,05em -\,}_{\mathbb{T}}\frac{V'(w)}{w-z}dw& =\frac{1}{z}\left(1-2\pi\psi(z)\right), \end{align}
(4.5)\begin{align} \frac{1}{i\pi}{\int\hskip -1,05em -\,}_{\mathbb{T}}\frac{V(w)}{w-z}dw& = V_{0} + V_+(z) - V_{-}(z) = V_{0} + 2i \, {\rm Im\,}(V_+(z)), \end{align}

where ${\int\hskip -1,05em -\,}$ stands for principal value integral.

Proof. The first identity (4.4) can be proved by a direct residue calculation using (1.3) and (1.6). We give here another proof, more in the spirit of [Reference Berestycki, Webb and Wong8, Lemma 5.8] and [Reference Charlier and Gharakhloo19, Lemma 8.1]. Let $H,\varphi :\mathbb {C}\setminus \mathbb {T}\to \mathbb {C}$ be functions given by

(4.6)\begin{equation} H(z) = \varphi(z)\left(g'(z)-\frac{1}{2z}\right)-\frac{1}{2z}+\frac{1}{2\pi i}\int_{\mathbb{T}}\frac{V'(w)}{w-z}dw, \quad \varphi(z)=\begin{cases} -1, & |z|<1, \\ 1, & |z|>1. \end{cases} \end{equation}

Clearly, $H(\infty )=0$, and for $z\in \mathbb {T}$ we have

\[ H_+(z)-H_-(z)={-}\left(g'_+(z)+g_-'(z)-\frac{1}{z}\right)+V'(z)=0, \]

where for the last equality we have used (3.6). So $H(z)\equiv 0$ by Liouville's theorem. Identity (4.4) now follows from relations (3.5) and

\[ 0=H_+(z)+H_-(z)={-}\left(g'_+(z)-g'_-(z)\right)-\frac{1}{z}+\frac{1}{i\pi}{\int\hskip -1,05em -\,}_{\mathbb{T}}\frac{V'(w)}{w-z}dw, \qquad z\in \mathbb{T}. \]

The second identity (4.5) follows from a direct residue computation, using (1.3).
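As a sanity check, consider a constant potential $V \equiv V_{0}$, for which $\psi \equiv \frac {1}{2\pi }$ (the uniform measure, as noted in the abstract) and $V_\pm \equiv 0$. Then (4.4) reads $0 = \frac {1}{z}(1 - 2\pi \cdot \frac {1}{2\pi }) = 0$, and (4.5) reduces to

\[ \frac{1}{i\pi}{\int\hskip -1,05em -\,}_{\mathbb{T}}\frac{V_{0}}{w-z}\,{\rm d}w = \frac{V_{0}}{i\pi}\cdot i\pi = V_{0}, \]

where we used that, for $z \in \mathbb {T}$, the principal value of $\int _{\mathbb {T}}\frac {{\rm d}w}{w-z}$ equals $i\pi$, the average of its interior value $2\pi i$ and its exterior value $0$.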

Proposition 4.4 As $n\to +\infty$,

(4.7)\begin{equation} \log\frac{D_{n}(\vec\alpha,\vec\beta,V,W)}{D_{n}(\vec\alpha,\vec\beta,0,W)}=c_1 n^2 + c_2 n + c_3 + {\mathcal{O}}(n^{{-}1+2\beta_{\max}}),\end{equation}

where

\begin{align*} c_1 & ={-}\frac{V_{0}}{2}-\frac{1}{2}\int_0^{2\pi}V(e^{i\theta}) {\rm d}\mu_{V}(e^{i\theta}), \\ c_2& = \sum_{k=0}^{m} \frac{\alpha_{k}}{2}(V(t_{k})-V_{0}) - \sum_{k=0}^{m} 2i\beta_{k} {\rm Im\,}(V_+(t_{k})) + \int_0^{2\pi}W(e^{i\theta}){\rm d}\mu_V(e^{i\theta}) -W_{0}, \\ c_3& =\sum_{k=0}^m\frac{\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4}}{\psi(t_k)}\left(\frac{1}{2\pi}-\psi(t_k)\right). \end{align*}
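As a consistency check on the constants (assuming, as in (1.4), that $W_{0}$ denotes the zeroth Fourier coefficient of $W$): if $V = 0$, then $V_{0}=0$, $\psi \equiv \frac {1}{2\pi }$ and ${\rm d}\mu _{V}(e^{i\theta }) = \frac {{\rm d}\theta }{2\pi }$, so

\[ c_{1} = 0, \qquad c_{2} = \frac{1}{2\pi}\int_{0}^{2\pi} W(e^{i\theta})\,{\rm d}\theta - W_{0} = 0, \qquad c_{3} = \sum_{k=0}^{m} 2\pi\Big(\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4}\Big)\Big(\frac{1}{2\pi}-\frac{1}{2\pi}\Big) = 0, \]

and (4.7) reduces to $\log 1 = {\mathcal {O}}(n^{-1+2\beta _{\max }})$, as it must.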

Proof. We will use (2.7) with $V=\hat {V}_{s}$ and $\gamma =s$, i.e.

(4.8)\begin{equation} \partial_{s} \log D_{n}(\vec{\alpha},\vec{\beta},\hat{V}_{s},W) = \frac{1}{2\pi}\int_{0}^{2\pi}[Y^{{-}1}(z)Y'(z)]_{21}z^{{-}n+1}\partial_{s}f(z){\rm d}\theta, \end{equation}

where $f(z)=e^{-n\hat {V}_{s}(z)}\omega (z)$ and $Y(\cdot ) = Y_{n}(\cdot ;\vec {\alpha },\vec {\beta },\hat {V}_{s},W)$. Recall from proposition 2.2 that (4.8) is valid only when $D_{k}^{(n)}(f) \neq 0$, $k=n-1,n,n+1$. However, it follows from the analysis of § 3.5 (see also remark 3.2) that the right-hand side of (4.8) exists for all $n$ sufficiently large, for all $(\theta _{1},\ldots,\theta _{m})\in \Theta$, all $\alpha _{0},\ldots,\alpha _{m}\in \mathfrak {A}$, all $\beta _{0},\ldots,\beta _{m}\in \mathfrak {B}$ and all $s \in [0,1]$. Hence, we can extend (4.8) by continuity (see also [Reference Charlier14, Reference Charlier and Gharakhloo19, Reference Deift, Its and Krasovsky26, Reference Its and Krasovsky37, Reference Krasovsky40] for similar situations treated in more detail). By (2.1), for $z\in \mathbb {T}\setminus \{t_{0},\ldots,t_{m}\}$ we have

(4.9)\begin{align} & [Y(z)^{{-}1}Y'(z)]_{21,+}=[Y(z)^{{-}1}Y'(z)]_{21,-}, \end{align}
(4.10)\begin{align} & [Y(z)^{{-}1}Y'(z)]_{21}={-}\frac{z^{n}}{f(z)}\left([Y(z)^{{-}1}Y'(z)]_{11,+}-[Y(z)^{{-}1}Y'(z)]_{11,-}\right), \end{align}

and thus, using that $\partial _s\log f(z) = -nV(z)$ is analytic in a neighbourhood of $\mathbb {T}$,

(4.11)\begin{equation} \partial_{s} \log D_{n}(\vec{\alpha},\vec{\beta},\hat{V}_{s},W)=\frac{-1}{2\pi i}\int_{\mathcal{C}_e\cup\mathcal{C}_i}\left[Y^{{-}1}(z)Y'(z)\right]_{11}\partial_s\log f(z)dz, \end{equation}

where $\mathcal {C}_i \subset \{z:|z|<1\}\cap U$ is a closed curve oriented counterclockwise and surrounding $0$, and $\mathcal {C}_e \subset \{z:|z|>1\} \cap U$ is a closed curve oriented clockwise and surrounding $0$. We choose $\mathcal {C}_i$ and $\mathcal {C}_e$ such that they do not intersect $\mathbb {T}\cup \gamma _+\cup \gamma _{-}\cup \mathcal {D}_{t_0}\cup \cdots \cup \mathcal {D}_{t_m}$.

Inverting the transformations $Y \mapsto T \mapsto S \mapsto R$ of § 3 using (3.13), (3.15) and (3.31), for $z \in \mathcal {C}_e\cup \mathcal {C}_i$ we find

\begin{align*} \left[Y^{{-}1}(z)Y'(z)\right]_{11}& =ng'(z)+\left[P^{(\infty)}(z)^{{-}1}P^{(\infty)\prime}(z)\right]_{11}\nonumber\\& \quad +\left[P^{(\infty)}(z)^{{-}1}R(z)^{{-}1}R'(z)P^{(\infty)}(z)\right]_{11}. \end{align*}

Substituting the above in (4.11), we find the following exact identity:

\[ \partial_{s} \log D_{n}(\vec{\alpha},\vec{\beta},\hat{V}_{s},W) = I_{1,s}+I_{2,s}+I_{3,s}, \]

where

(4.12)\begin{align} I_{1,s}& =\frac{-n}{2\pi i}\int_{\mathcal{C}_e\cup\mathcal{C}_i}g'(z)\partial_s\log f(z)dz, \end{align}
(4.13)\begin{align} I_{2,s}& =\frac{-1}{2\pi i}\int_{\mathcal{C}_e\cup\mathcal{C}_i}\left[P^{(\infty)}(z)^{{-}1}P^{(\infty)\prime}(z)\right]_{11}\partial_s \log f(z)dz, \end{align}
(4.14)\begin{align} I_{3,s}& =\frac{-1}{2\pi i}\int_{\mathcal{C}_e\cup\mathcal{C}_i}\left[P^{(\infty)}(z)^{{-}1}R(z)^{{-}1}R'(z)P^{(\infty)}(z)\right]_{11}\partial_s \log f(z)dz. \end{align}

Using $\partial _s\log f(z) = -nV(z)$ and (3.5) (with $\psi$ replaced by $\psi _{s}$), we find

\[ I_{1,s}=\frac{n^2}{2\pi i}\int_{\mathbb{T}}(g_+'(z)-g_-'(z))V(z)dz={-}n^2\int_{\mathbb{T}}V(z)\psi_{s}(z)\frac{dz}{iz}, \]

and since $\psi _{s}=\frac {1-s}{2\pi }+s\psi$,

(4.15)\begin{align} \int_0^1 I_{1,s}{\rm d}s& ={-}\frac{n^2}{2}\int_0^{2\pi}V(e^{i\theta})\left( \frac{1}{2\pi}+\psi(e^{i\theta}) \right){\rm d}\theta \nonumber\\ & ={-}\frac{n^2}{2}\left(V_{0}+\int_0^{2\pi}V(e^{i\theta}) {\rm d}\mu_{V}(e^{i\theta}) \right) = c_1 n^2.\end{align}

Now we turn to the analysis of $I_{2,s}$. Using (3.17), we obtain

(4.16)\begin{equation} \Big[P^{(\infty)}(z)^{{-}1}P^{(\infty)\prime}(z)\Big]_{11}=\varphi(z)\partial_z[\log D(z)] \end{equation}

where $\varphi$ is defined in (4.6). Also, by (3.18), (3.20) and (3.21), we have

(4.17)\begin{equation} \partial_z\log D(z)= \begin{cases} W_+'(z)+\sum_{k=0}^{m}\left(\beta_k+\frac{\alpha_k}{2}\right)\frac{1}{z-t_k}, & |z|<1,\\ - W_{-}'(z)+\sum_{k=0}^{m}\left(\beta_k-\frac{\alpha_k}{2}\right)\left(\frac{1}{z-t_k}-\frac{1}{z}\right), & |z|>1, \end{cases} \end{equation}

where $W_\pm$ are defined in (1.4), and by (1.6), we have

(4.18)\begin{equation} - \sum_{k={-}\infty}^{+\infty}|k|W_{k}V_{{-}k} = \int_0^{2\pi}W(e^{i\theta}){\rm d}\mu_V(e^{i\theta}) -W_{0}. \end{equation}
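Identity (4.18) can also be checked directly in Fourier variables. Assuming the equilibrium density has the expansion $\psi (e^{i\theta }) = \frac {1}{2\pi }\big (1 - \sum _{k \neq 0}|k| V_{k} e^{ik\theta }\big )$ (which follows from expanding the equilibrium condition (1.23) in Fourier series, with the normalizations of (1.3)–(1.6)) and that $W(e^{i\theta })=\sum _{k} W_{k}e^{ik\theta }$, we get

\[ \int_0^{2\pi}W(e^{i\theta}){\rm d}\mu_V(e^{i\theta}) = \sum_{k={-}\infty}^{+\infty} W_{k} \int_{0}^{2\pi} e^{ik\theta}\psi(e^{i\theta})\,{\rm d}\theta = W_{0} - \sum_{k \neq 0} |k| W_{k} V_{-k}, \]

which is (4.18), since the $k=0$ term of the sum on the left-hand side of (4.18) vanishes anyway.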

Substituting (4.16) and (4.17) in (4.13), and doing a residue computation, we obtain

\begin{align*} I_{2,s} & ={-} n \sum_{k={-}\infty}^{+\infty}|k|W_{k}V_{{-}k} + n\sum_{k=0}^{m} \frac{\alpha_{k}}{2}(V(t_{k})-V_{0})\\ & \quad - n \sum_{k=0}^{m} \beta_{k} \left( \frac{1}{\pi i}{\int\hskip -1,05em -\,}_{\mathbb{T}} \frac{V(z)}{z-t_{k}} dz - V_{0} \right) = c_{2}n, \end{align*}

where for the last equality we have used (4.5) and (4.18). Clearly, $I_{2,s}$ is independent of $s$, and therefore $\int _{0}^{1}I_{2,s}{\rm d}s = c_{2}n$. We now analyse $I_{3,s}$ as $n \to + \infty$. From (3.35), we have

\[ R^{{-}1}(z)R'(z)=n^{{-}1}R^{(1)\prime}(z) + {\mathcal{O}}(n^{{-}2 + 2 \beta_{\max}}), \]

and, using first (3.17) and then (3.36),

\begin{align*} & \left[P^{(\infty)}(z)^{{-}1}n^{{-}1}R^{(1)\prime}(z)P^{(\infty)}(z)\right]_{11}\\& \quad =\frac{1}{n} \times \begin{cases} \left[R^{(1)\prime}(z)\right]_{22}, & |z|<1 \\ \left[R^{(1)\prime}(z)\right]_{11}, & |z|>1 \end{cases} =\frac{-\varphi(z)}{2\pi n}\sum_{k=0}^{m}\frac{(\beta_k^2-\frac{\alpha_k^2}{4})t_k}{\psi(t_k)(z-t_k)^2}. \end{align*}

Therefore, as $n \to + \infty$

\[ I_{3,s}=\frac{1}{2\pi}\sum_{k=0}^m\frac{(\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4})t_k}{\psi(t_k)}\frac{1}{ 2\pi i}\left(\int_{\mathcal{C}_i}-\int_{\mathcal{C}_e}\right)\frac{V(z)}{(z-t_k)^2}dz + {\mathcal{O}}(n^{{-}1+2\beta_{\max}}). \]

Partial integration yields

\[ \frac{1}{2\pi i}\left(\int_{\mathcal{C}_i}-\int_{\mathcal{C}_e}\right)\frac{V(z)}{(z-t_k)^2}dz=\frac{1}{2\pi i}\left(\int_{\mathcal{C}_i}-\int_{\mathcal{C}_e}\right)\frac{V'(z)}{z-t_k}dz=\frac{1}{\pi i}{\int\hskip -1,05em -\,}_{\mathbb{T}}\frac{V'(z)}{z-t_k}dz, \]

and thus, by (4.4), we have

\[ I_{3,s}= \frac{1}{2\pi}\sum_{k=0}^{m}\frac{(\beta_{k}^{2}-\frac{\alpha_{k}^{2}}{4})t_k}{\psi(t_k)} \frac{1}{t_k}\left(1-2\pi\psi(t_k)\right) + {\mathcal{O}}(n^{{-}1+2\beta_{\max}}), \qquad \mbox{as } n \to + \infty. \]

Since the above asymptotics are uniform for $s \in [0,1]$ (see remark 3.2), the claim follows.

Theorem 1.1 now directly follows by combining proposition 4.4 with theorem 4.1. (The estimate (1.12) follows from remark 3.3.)

5. Proofs of corollaries 1.4, 1.5, 1.6, 1.8, 1.9

Let $e^{i\phi _{1}},\ldots,e^{i\phi _{n}}$ be distributed according to (1.17) with $\phi _{1},\ldots,\phi _{n}\in [0,2\pi )$. Recall that $N_{n}(\theta ) = \#\{\phi _{j}\in [0,\theta )\}$ and that the angles $\phi _{1},\ldots,\phi _{n}$ arranged in increasing order are denoted by $0 \leq \xi _{1} \leq \xi _{2} \leq \ldots \leq \xi _{n} < 2\pi$.

Proof of corollary 1.4.

The asymptotics for the cumulants $\{\kappa _{j}\}_{j=1}^{+\infty }$ follow directly from (1.30), theorem 1.3 (with $m=0$, $\alpha _{0}=0$ and with $W$ replaced by $tW$) and the fact that (1.24) can be differentiated any number of times with respect to $t$ without worsening the error term (see remark 3.3). Furthermore, if $W$ is non-constant, then $\sum _{k = 1}^{+\infty } kW_{k}W_{-k} = \sum _{k = 1}^{+\infty } k|W_{k}|^{2} > 0$ (because $W$ is assumed to be real-valued) and from theorem 1.3 (with $m=0$, $\alpha _{0}=0$ and with $W$ replaced by $\frac {tW}{(2\sum _{k = 1}^{+\infty } kW_{k}W_{-k})^{1/2}}$, $t \in \mathbb {R}$) we also have

\[ \mathbb{E}\Bigg[ \exp \left( t\frac{\sum_{j=1}^{n}W(e^{i\phi_{j}})-n\int_0^{2\pi} W(e^{i\phi}){\rm d}\mu_V(e^{i\phi})}{(2\sum_{k = 1}^{+\infty} kW_{k}W_{{-}k})^{1/2}} \right) \Bigg] = e^{\frac{t^{2}}{2}+ {\mathcal{O}}(n^{{-}1})}, \]

as $n \to + \infty$ with $t\in \mathbb {R}$ arbitrary but fixed. The convergence in distribution stated in corollary 1.4 now follows from standard theorems (see e.g. [Reference Billingsley9, top of page 415]).

Proof of corollary 1.5.

The proof is similar to the proof of corollary 1.4. The main difference is that (i) for the asymptotics of the cumulants, one needs to use theorem 1.3 with $W=0$, $m=0$ if $t = 1$, and with $W=0$, $m=1$, $\alpha _0 = 0$, $u_{1}=0$ if $t \in \mathbb {T} \setminus \{1\}$, and (ii) for the convergence in distribution, one needs to use theorem 1.3 with $W=0$, $m=0$ and $\alpha _{0}$ replaced by $\alpha \sqrt {2}/\sqrt {\log n}$, $\alpha \in \mathbb {R}$ fixed, if $t = 1$, and with $W=0$, $m=1$, $\alpha _0 = 0$, $u_{1}=0$ and $\alpha _1$ replaced by $\alpha \sqrt {2}/\sqrt {\log n}$, $\alpha \in \mathbb {R}$ fixed, if $t \in \mathbb {T}\setminus \{1\}$.

Proof of corollary 1.6.

This proof is also similar to the proof of corollary 1.4. For the asymptotics of the cumulants, one needs to use theorem 1.3 with $W=0$, $m=1$, $\alpha _{0}=\alpha _{1}=0$ and for the convergence in distribution, one needs to use theorem 1.3 with $W=0$, $m=1$, $\alpha _{0}=\alpha _{1}=0$, and with $u_{1}$ replaced by $\pi u/\sqrt {\log n}$, $u\in \mathbb {R}$ fixed.

Proof of corollary 1.8.

The proof is inspired by Gustavsson [Reference Gustavsson36, Theorem 1.2]. Let $\theta \in (0,2\pi )$ and $k_{\theta }=[n \int _{0}^{\theta }{\rm d}\mu _{V}(e^{i \phi })]$, where $[x]:= \lfloor x + \frac {1}{2}\rfloor$, and consider the random variable

(5.1)\begin{equation} Y_{n} := \frac{n\int_{0}^{\xi_{k_{\theta}}}{\rm d}\mu_{V}(e^{i \phi}) - k_{\theta}}{\sqrt{\log n}/\pi} = \frac{\mu_{n}(\xi_{k_{\theta}})-k_{\theta}}{\sigma_{n}}, \end{equation}

where $\mu _{n}(\xi ) := n\int _{0}^{\xi }{\rm d}\mu _{V}(e^{i \phi })$ and $\sigma _{n} := \frac {1}{\pi }\sqrt {\log n}$. For $y \in \mathbb {R}$, we have

(5.2)\begin{equation} \mathbb{P}\big[ Y_{n} \leq y \big] = \mathbb{P}\Big[\xi_{k_{\theta}} \leq \mu_{n}^{{-}1}\left(k_{\theta} + y \sigma_{n}\right) \Big] = \mathbb{P}\Big[N_{n}\left(\mu_{n}^{{-}1}\left(k_{\theta} + y \sigma_{n} \right)\right) \geq k_{\theta} \Big]. \end{equation}

Letting $\tilde {\theta } := \mu _{n}^{-1}(k_{\theta } + y \sigma _{n} )$, we can rewrite (5.2) as

(5.3)\begin{equation} \mathbb{P}\big[ Y_{n} \leq y \big] = \mathbb{P}\Bigg[ \frac{N_{n}(\tilde{\theta})-\mu_{n}(\tilde{\theta})}{\sqrt{\sigma_{n}^{2}}} \geq \frac{k_{\theta}-\mu_{n}(\tilde{\theta})}{\sigma_{n}} \Bigg] = \mathbb{P}\Bigg[ \frac{\mu_{n}(\tilde{\theta})-N_{n}(\tilde{\theta})}{\sigma_{n}} \leq y \Bigg]. \end{equation}

As $n \to +\infty$, we have

(5.4)\begin{equation} k_{\theta} = [\mu_{n}(\theta)] = {\mathcal{O}}(n), \qquad \tilde{\theta} = \theta \left(1+{\mathcal{O}}\left(\tfrac{\sqrt{\log n}}{n}\right)\right). \end{equation}
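The second estimate in (5.4) can be obtained from the mean value theorem: since $\mu _{n}'(\phi ) = n\psi (e^{i\phi })$ is bounded below by a positive multiple of $n$, and $|k_{\theta }-\mu _{n}(\theta )| \leq \frac {1}{2}$, we have

\[ |\tilde{\theta} - \theta| = \big|\mu_{n}^{-1}(k_{\theta} + y \sigma_{n}) - \mu_{n}^{-1}(\mu_{n}(\theta))\big| = \frac{|k_{\theta} - \mu_{n}(\theta) + y \sigma_{n}|}{\mu_{n}'(\theta^{*})} = {\mathcal{O}}\Big(\frac{\sqrt{\log n}}{n}\Big) \]

for some $\theta ^{*}$ between $\theta$ and $\tilde {\theta }$.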

Since theorem 1.3 also holds in the case where $\theta$ depends on $n$ but remains bounded away from $0$, the same is true for the convergence in distribution in corollary 1.6. By (5.4), $\tilde {\theta }$ remains bounded away from $0$, and therefore corollary 1.6 together with (5.3) implies that $Y_{n}$ converges in distribution to a standard normal random variable. Since

\begin{align*} & \mathbb{P}\Bigg[ \frac{n\psi(e^{i\eta_{k_{\theta}}})}{\sqrt{\log n}/\pi}(\xi_{k_{\theta}}-\eta_{k_{\theta}}) \leq y \Bigg]\\ & \quad= \mathbb{P}\Bigg[ Y_{n} \leq \frac{\mu_{n}(\eta_{k_{\theta}} + y \frac{\sigma_{n}}{n\psi(e^{i\eta_{k_{\theta}}})})-\mu_{n}(\eta_{k_{\theta}})}{\sigma_{n}} \Bigg] \\ & \quad = \mathbb{P}\Bigg[ Y_{n} \leq \int_{\eta_{k_\theta}}^{\eta_{k_\theta} + \frac{y\sigma_n}{n \psi(e^{i \eta_{k_\theta}})}} \frac{n\psi(e^{i\phi})}{\sigma_n} {\rm d}\phi \Bigg] = \mathbb{P}\big[ Y_{n} \leq y +o(1) \big] \end{align*}

as $n\to + \infty$, this implies the convergence in distribution in the statement of corollary 1.8.

Proof of corollary 1.9.

Let $\mu _{n}(\xi ) := n\int _{0}^{\xi }{\rm d}\mu _{V}(e^{i \phi })$, $\sigma _{n} := \frac {1}{\pi }\sqrt {\log n}$, and for $\theta \in [0,2\pi )$, let $\overline {N}_{n}(\theta ) := N_{n}(\theta )-\mu _{n}(\theta )$. Using theorem 1.3 with $W=0$, $m \in {\mathbb {N}}_{>0}$, $\alpha _{0}=\ldots =\alpha _{m}=0$ and $u_{1},\ldots,u_{m} \in \mathbb {R}$, we infer that for any $\delta \in (0,\pi )$ and $M>0$, there exists $n_{0}'=n_{0}'(\delta,M)\in \mathbb {N}$ and $\mathrm {C}=\mathrm {C}(\delta,M)>0$ such that

(5.5)\begin{equation} \mathbb{E} \left( e^{\sum_{k=1}^{m}u_{k}\overline{N}_{n}(\theta_{k})} \right) \leq \mathrm{C} \exp \left( \frac{\sum_{k=0}^{m} u_{k}^{2}}{2}\frac{\sigma_{n}^{2}}{2} \right), \end{equation}

for all $n\geq n_{0}'$, $(\theta _{1},\ldots, \theta _{m})$ in compact subsets of $(0,2\pi )_{\mathrm {ord}}^{m}\cap (\delta,2\pi -\delta )^{m}$ and $u_{1},\ldots,u_{m} \in [-M,M]$, and where $u_{0}=-u_{1}-\ldots -u_{m}$.

Lemma 5.1 For any $\delta \in (0,\pi )$, there exists $c>0$ such that for all large enough $n$ and small enough $\epsilon >0$,

(5.6)\begin{equation} \mathbb{P}\left(\sup_{\delta \leq \theta \leq 2\pi-\delta}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi (1+\epsilon) \right) \geq 1-\frac{c}{\log n}. \end{equation}

Proof. A naive adaptation of [Reference Charlier15, Lemma 8.1] (an important difference between [Reference Charlier15] and our situation is that $\sigma _{n} = \frac {\sqrt {\log n}}{\sqrt {2}\pi }$ in [Reference Charlier15] while here we have $\smash {\sigma _{n} = \frac {\sqrt {\log n}}{\pi }}$) yields

\[ \mathbb{P}\left(\sup_{\delta \leq \theta \leq 2\pi-\delta}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \sqrt{2}\pi (1+\epsilon) \right) \geq 1-o(1). \]

Inequality (5.5) can in fact be used to obtain the stronger statement (5.6). Recall that $\eta _{k}= \mu _{n}^{-1}(k)$ is the classical location of the $k$-th smallest point $\xi _{k}$ and is defined in (1.34). Since $\mu _{n}$ and $N_{n}$ are increasing functions, for $x\in [\eta _{k-1},\eta _k]$ with $k \in \{1,\ldots,n\}$, we have

(5.7)\begin{equation} N_{n}(x)-\mu_{n}(x)\leq N_{n}(\eta_k)-\mu_{n}(\eta_{k-1}) =N_{n}(\eta_k)-\mu_{n}(\eta_{k})+1, \end{equation}

which implies

\[ \sup_{\delta \leq x \leq 2\pi-\delta}\frac{N_{n}(x)-\mu_{n}(x)}{\sigma_{n}^2} \leq \sup_{k\in \mathcal{K}_{n}} \frac{N_{n}(\eta_k)-\mu_{n}(\eta_{k})+1}{\sigma_{n}^2}, \]

where $\mathcal {K}_{n} = \{k: \eta _{k}>\delta \mbox { and } \eta _{k-1}<2\pi -\delta \}$. Hence, for any $v>0$,

\[ \mathbb{P}\left(\sup_{\delta \leq x \leq 2\pi-\delta}\frac{N_{n}(x)-\mu_{n}(x)}{\sigma^2_{n}}>v\right)\leq \mathbb{P}\left( \sup_{k\in \mathcal{K}_{n}} \frac{N_{n}(\eta_k)-\mu_{n}(\eta_{k})}{\sigma_{n}^2}>v-\frac{1}{\sigma_{n}^{2}}\right). \]

Let $\epsilon _{0}>0$ be small and fixed, and let $\mathcal {I}$ be an arbitrary but fixed subset of $(0,\epsilon _{0}]$. Claim (5.6) will follow if we can prove for any $\epsilon \in \mathcal {I}$ that

(5.8)\begin{equation} \mathbb{P}\left( \sup_{k\in \mathcal{K}_{n}} \frac{N_{n}(\eta_{k})-\mu_{n}(\eta_{k})}{\sigma_{n}^2}>\pi(1+\epsilon)\right) \leq \frac{c_{1}}{\log n}, \end{equation}

for some $c_{1}=c_{1}(\mathcal {I})>0$. Let $m \in {\mathbb {N}}$ be fixed, and let $S_{m}$ and $S_{m}'$ be the following two collections of $m$ points:

\begin{align*} & S_{m} = \Bigg\{ \delta + (2\pi-2\delta) \frac{4j+1}{4m}: \qquad j=0,\ldots,m-1 \Bigg\}, \\ & S_{m}' = \Bigg\{ \delta + (2\pi-2\delta) \frac{4j+2}{4m}: \qquad j=0,\ldots,m-1 \Bigg\}. \end{align*}

Let $X_{n}(\theta ):= (N_{n}(\theta )-\mu _{n}(\theta ))/\sigma _{n}$. For any $\theta \in [\delta,2\pi -\delta ]$, we have by corollary 1.6 that $\mathbb {E}[X_{n}(\theta )] = {\mathcal {O}}(\frac {\sqrt {\log n}}{n})$ and $\mbox {Var}[X_{n}(\theta )] \leq 2$ for all large enough $n$. Hence, by Chebyshev's inequality, for any fixed $\ell > 0$, $\mathbb {P}(\frac {|X_{n}(\theta )|}{\sigma _{n}} \geq \ell ) \leq \frac {3}{\ell ^{2}\sigma _{n}^{2}}$ uniformly for $\theta \in [\delta,2\pi -\delta ]$ and all large enough $n$. Using this inequality with $\ell =\frac {\pi \epsilon }{2}$ together with a union bound, we get

\begin{align*} \mathbb{P}\left(\sup_{\hat{\theta} \in S_{m}\cup S_{m}'}\Bigg|\frac{N_{n}(\hat{\theta})-\mu_{n}(\hat{\theta})}{\sigma^2_{n}}\Bigg| > \frac{\pi\epsilon}{2} \right) & = \mathbb{P}\left(\sup_{\hat{\theta} \in S_{m}\cup S_{m}'}\Bigg|\frac{X_{n}(\hat{\theta})}{\sigma_{n}}\Bigg| > \frac{\pi\epsilon}{2} \right) \\& \leq \frac{3 \times 2m}{(\frac{\pi\epsilon}{2})^{2}\frac{\log n}{\pi^{2}}} = \frac{24m}{\epsilon^{2}\log n}, \end{align*}

and then

(5.9)\begin{align} & {\mathbb{P}} \left(\sup_{k \in \mathcal{K}_{n}}\Bigg|\frac{N_{n}(\eta_{k})-\mu_{n}(\eta_{k})}{\sigma^2_{n}}\Bigg| > \pi (1+\epsilon) \right) \leq \frac{24 m}{\epsilon^{2}\log n} \nonumber\\ & \quad+ \sum_{k \in \mathcal{K}_{n}} {\mathbb{P}} \left( \Bigg|\frac{N_{n}(\eta_{k})-\mu_{n}(\eta_{k})}{\sigma^2_{n}}\Bigg| > \pi (1+\epsilon) \quad \mbox{ and } \quad \sup_{\hat{\theta} \in S_{m}\cup S_{m}'}\Bigg|\frac{X_{n}(\hat{\theta})}{\sigma_{n}}\Bigg| \leq \frac{\pi \epsilon}{2} \right). \end{align}
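For clarity, the Chebyshev step invoked above can be spelled out (this merely restates the estimate already used, for all large enough $n$):

\[ \mathbb{P}\left(\frac{|X_{n}(\theta)|}{\sigma_{n}} \geq \ell\right) \leq \frac{\mathbb{E}[X_{n}(\theta)^{2}]}{\ell^{2}\sigma_{n}^{2}} = \frac{\operatorname{Var}[X_{n}(\theta)] + \mathbb{E}[X_{n}(\theta)]^{2}}{\ell^{2}\sigma_{n}^{2}} \leq \frac{2 + {\mathcal{O}}\big(\frac{\log n}{n^{2}}\big)}{\ell^{2}\sigma_{n}^{2}} \leq \frac{3}{\ell^{2}\sigma_{n}^{2}}. \]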

The reason for introducing the two sets $S_{m},S_{m}'$ is the following: for any $k \in \mathcal {K}_{n}$, the point $\eta _{k}$ remains bounded away from at least one of $S_{m}$ and $S_{m}'$ (so that (5.5) can be applied). Indeed, suppose for example that $\eta _{k}$ is bounded away from $S_{m}$; then by (5.5) (with $m$ replaced by $m+1$ and with $u_{1}=u$ and $u_{2}=\ldots =u_{m+1}=-\frac {u}{m}$) we have

\[ {\mathbb{E}} \Bigg[ \exp \left( u \overline{N}_{n}(\eta_{k})-\frac{u}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta}) \right) \Bigg] \leq \mathrm{C} \exp \Bigg\{ \frac{u^{2}\sigma_{n}^{2}}{4}\left( 1+\frac{1}{m} \right) \Bigg\} \]

and similarly,

\[ {\mathbb{E}} \Bigg[ \exp \left( \frac{u}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta}) - u \overline{N}_{n}(\eta_{k}) \right) \Bigg] \leq \mathrm{C} \exp \Bigg\{ \frac{u^{2}\sigma_{n}^{2}}{4}\left( 1+\frac{1}{m} \right) \Bigg\}. \]

Hence, if $\eta _{k}$ remains bounded away from $S_{m}$, we have (with $\gamma :=\pi (1+\epsilon /2)$ and $\alpha := \frac {1}{2}(1+\frac {1}{m})$)

(5.10)\begin{align} & {\mathbb{P}} \left( \Bigg|\frac{\overline{N}_{n}(\eta_{k})}{\sigma^2_{n}}\Bigg| > \pi (1+\epsilon) \quad \mbox{ and } \quad \sup_{\hat{\theta} \in S_{m}\cup S_{m}'}\Bigg| \frac{\overline{N}_{n}(\hat{\theta})}{\sigma_{n}^{2}}\Bigg| \leq \frac{\pi \epsilon}{2} \right) \end{align}
(5.11)\begin{align} & \quad\leq {\mathbb{P}} \left( \frac{\overline{N}_{n}(\eta_{k})-\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta})}{\sigma_{n}^{2}} > \gamma \right) + {\mathbb{P}} \left( \frac{\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta})-\overline{N}_{n}(\eta_{k})}{\sigma_{n}^{2}} > \gamma \right) \nonumber\\ & \quad = {\mathbb{P}} \left( e^{\frac{\gamma}{\alpha}(\overline{N}_{n}(\eta_{k})-\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta}))} > e^{\frac{\gamma^{2}}{\alpha}\sigma_{n}^{2}} \right) \nonumber\\ & \qquad + {\mathbb{P}} \left( e^{\frac{\gamma}{\alpha}(\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta})-\overline{N}_{n}(\eta_{k}))} > e^{\frac{\gamma^{2}}{\alpha}\sigma_{n}^{2}} \right) \nonumber\\ & \quad\leq {\mathbb{E}} \left( e^{\frac{\gamma}{\alpha}(\overline{N}_{n}(\eta_{k})-\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta}))} \right)e^{-\frac{\gamma^{2}}{\alpha} \sigma_{n}^{2}}\nonumber\\ & \qquad + {\mathbb{E}} \left( e^{\frac{\gamma}{\alpha}(\frac{1}{m}\sum_{\hat{\theta}\in S_{m}}\overline{N}_{n}(\hat{\theta})-\overline{N}_{n}(\eta_{k}))} \right)e^{-\frac{\gamma^{2}}{\alpha}\sigma_{n}^{2}} \nonumber\\ & \quad\leq 2 \mathrm{C} \exp \left( \frac{\gamma^{2}\sigma_{n}^{2}}{4\alpha^{2}}\left( 1+\frac{1}{m} \right)-\frac{\gamma^{2}}{\alpha}\sigma_{n}^{2} \right) = 2 \mathrm{C} \exp \left( - \frac{\pi^{2}(1+\frac{\epsilon}{2})^{2}\sigma_{n}^{2}}{1+\frac{1}{m}} \right)\nonumber\\ & \quad = 2 \mathrm{C} n^{- \frac{(1+\frac{\epsilon}{2})^{2}}{1+\frac{1}{m}}}. \end{align}

We obtain the same bound (5.11) if $\eta _{k}$ in (5.10) is instead bounded away from $S_{m}'$. The above exponent is less than $-1$ provided that $(1+\frac {\epsilon }{2})^{2} > 1+\frac {1}{m}$, i.e. provided that $m > (\epsilon +\frac {\epsilon ^{2}}{4})^{-1}$. Since the number of points in $\mathcal {K}_{n}$ is proportional to $n$, the right-hand side of (5.9) is then ${\mathcal {O}}(\frac {m}{\epsilon ^{2}\log n})$, and claim (5.6) directly follows from (5.9) (recall also (5.8)).

Lemma 5.2 Let $\delta \in (0,\frac {\pi }{2})$ and $\epsilon > 0$. For all sufficiently large $n$, if the event

(5.12)\begin{equation} \sup_{\delta \leq \theta \leq 2\pi-\delta}\left|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\right| \leq \pi(1+\epsilon) \end{equation}

holds true, then we have

(5.13)\begin{equation} \sup_{k \in (\mu_{n}(2\delta),\mu_{n}(2\pi-2\delta))} \Bigg|\frac{\mu_{n}(\xi_k) - k}{\sigma^2_{n}}\Bigg| \leq \pi (1+\epsilon) + \frac{1}{\sigma_{n}^{2}}. \end{equation}

Proof. The proof is almost identical to the proof of [Reference Charlier15, Lemma 8.2] so we omit it.

By combining lemmas 5.1 and 5.2, we arrive at the following result (the proof is very similar to [Reference Charlier15, Proof of (1.38)], so we omit it).

Lemma 5.3 For any $\delta \in (0,\pi )$, there exists $c>0$ such that for all large enough $n$ and small enough $\epsilon >0$,

(5.14)\begin{equation} \mathbb{P}\left( \max_{\delta n \leq k \leq (1-\delta)n} \psi(e^{i\eta_{k}})|\xi_{k}-\eta_{k}| \leq \frac{1+\epsilon}{\pi} \frac{\log n}{n} \right) \geq 1-\frac{c}{\log n}. \end{equation}
Extending lemmas 5.1 and 5.3 to $\delta =0$.

In this paper, the support of $\mu _{V}$ is $\mathbb {T}$. Therefore, the point $1 \in \mathbb {T}$ should play no special role in the study of the global rigidity of the points, which suggests that (5.6) and (5.14) should still hold with $\delta =0$. The next lemma shows that this is indeed the case.

Lemma 5.4 (proof of (1.35))

For each small enough $\epsilon >0$, there exists $c>0$ such that

\[ \mathbb{P}\left(\sup_{0 \leq \theta < 2\pi}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi(1+\epsilon) \right) \geq 1-\frac{c}{\log n} \]

for all large enough $n$.

Proof. For $-\pi \leq \theta < 0$, let $\tilde {N}_{n}(\theta ):=\#\{\phi _{j}-2\pi \in (-\pi,\theta ]\}$, and for $0 \leq \theta < \pi$, let $\tilde {N}_{n}(\theta ):=\#(\{\phi _{j}-2\pi \in (-\pi,0]\}\cup \{\phi _{j} \in [0,\theta ]\})$. For $-\pi \leq \theta < \pi$, define also $\tilde {\mu }_{n}(\theta ):=n\int _{-\pi }^{\theta }{\rm d}\mu _{V}(e^{i\phi })$. In the same way as for lemma 5.1, the following holds: for any $\delta \in (0,\pi )$, there exists $c_{1}>0$ such that for all large enough $n$ and small enough $\epsilon >0$,

\[ \mathbb{P}\left(\sup_{-\pi+\delta \leq \theta \leq \pi-\delta}\Bigg|\frac{\tilde{N}_{n}(\theta)-\tilde{\mu}_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi(1+\epsilon) \right) \geq 1-\frac{c_{1}}{\log n}. \]

Clearly,

\begin{align*} & \tilde{N}_{n}(\theta) = \begin{cases} N_{n}(\theta+2\pi)-N_{n}(\pi), & \mbox{if } \theta \in (-\pi,0), \\ N_{n}(\theta)+n-N_{n}(\pi), & \mbox{if } \theta \in (0,\pi), \end{cases}\\ & \tilde{\mu}_{n}(\theta) = \begin{cases} \mu_{n}(\theta+2\pi)-\mu_{n}(\pi), & \mbox{if } \theta \in (-\pi,0), \\ \mu_{n}(\theta)+n-\mu_{n}(\pi), & \mbox{if } \theta \in (0,\pi), \end{cases} \end{align*}

and therefore

\[ \frac{\tilde{N}_{n}(\theta)-\tilde{\mu}_{n}(\theta)}{\sigma^2_{n}} ={-} \frac{N_{n}(\pi)-\mu_{n}(\pi)}{\sigma_{n}^{2}} + \begin{cases} \frac{N_{n}(\theta+2\pi)-\mu_{n}(\theta+2\pi)}{\sigma^2_{n}}, & \mbox{if } \theta \in (-\pi,0), \\[0.1cm] \frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}, & \mbox{if } \theta \in (0,\pi). \end{cases} \]

Thus, for all large enough $n$,

\[ \mathbb{P}\left(\sup_{\theta \in [0,2\pi)\setminus (\pi-\delta,\pi+\delta)}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi(1+\epsilon) + \Bigg|\frac{N_{n}(\pi)-\mu_{n}(\pi)}{\sigma_{n}^{2}} \Bigg| \right) \geq 1-\frac{c_{1}}{\log n}. \]

Combining the above with (5.6) (with $c$ replaced by $c_{2}$), we obtain

(5.15)\begin{align} \mathbb{P}\left(\sup_{\theta \in [0,2\pi)}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi(1+\epsilon) + \Bigg|\frac{N_{n}(\pi)-\mu_{n}(\pi)}{\sigma_{n}^{2}} \Bigg| \right) \geq 1- \frac{c_{1}+c_{2}}{\log n}. \end{align}

Let $X_{n}:= (N_{n}(\pi )-\mu _{n}(\pi ))/\sigma _{n}$. By corollary 1.6, $\mathbb {E}[X_{n}] = {\mathcal {O}}(\frac {\sqrt {\log n}}{n})$ and $\mbox {Var}[X_{n}] \leq 2$ for all large enough $n$. Hence, by Chebyshev's inequality, for any fixed $\ell > 0$, $\mathbb {P}(\frac {|X_{n}|}{\sigma _{n}} \geq \ell ) \leq \frac {3}{\ell ^{2}\sigma _{n}^{2}}$ for all large enough $n$. Applying this inequality with $\ell = \pi (1+(1+\epsilon )\epsilon ) - \pi (1+\epsilon ) = \pi \epsilon ^{2}$ we see that if $\mathbb {P}(A)$ denotes the left-hand side of (5.15), then

\[ \mathbb{P}(A) \leq \mathbb{P}\left(A \cap \Bigg\{\tfrac{|X_{n}|}{\sigma_{n}} < \ell\Bigg\}\right) + \mathbb{P}\left(\tfrac{|X_{n}|}{\sigma_{n}} \geq \ell\right) \leq \mathbb{P}\left(A \cap \Bigg\{\tfrac{|X_{n}|}{\sigma_{n}} < \ell\Bigg\}\right) + \frac{3}{\ell^{2}\sigma_{n}^{2}}. \]

Together with (5.15), this gives

\begin{align*} & \mathbb{P}\left(\sup_{\theta \in [0,2\pi)}\Bigg|\frac{N_{n}(\theta)-\mu_{n}(\theta)}{\sigma^2_{n}}\Bigg|\leq \pi\left(1+(1+\epsilon)\epsilon\right) \right)\\ & \quad \geq \mathbb{P}\left(A \cap \Bigg\{\tfrac{|X_{n}|}{\sigma_{n}} < \ell\Bigg\}\right) \geq \mathbb{P}(A) - \frac{3}{\ell^{2}\sigma_{n}^{2}} \\ & \quad\geq 1- \frac{c_{1}+c_{2}}{\log n} - \frac{3}{\ell^{2}\sigma_{n}^{2}} \geq 1-\frac{c_{3}}{\log n}, \end{align*}

for some $c_{3}=c_{3}(\epsilon )>0$, which proves the claim.

The upper bound (1.36) can be proved using the same idea as in the proof of lemma 5.4.

Acknowledgements

The work of all three authors was supported by the European Research Council, Grant Agreement No. 682537. C. C. also acknowledges support from the Swedish Research Council, Grant No. 2021-04626. J. L. also acknowledges support from the Swedish Research Council, Grant No. 2021-03877, and the Ruth and Nils-Erik Stenbäck Foundation. We are very grateful to the referees for valuable suggestions, and in particular for providing us with a proof of (5.6).

Appendix A. Equilibrium measure

Assume that $\mu _{V}$ is supported on $\mathbb {T}$. We make the ansatz that $\mu _{V}$ is of the form (1.7) for some $\psi$. Let $g$ be as in (3.1). Substituting (3.2) in (1.23) and differentiating, we obtain

\[ g_+'(z)+g_{-}'(z) = V'(z) + \frac{1}{z}, \qquad z \in \mathbb{T}. \]

Since $g'(z) = \frac {1}{z}+{\mathcal {O}}(z^{-2})$ as $z \to \infty$, we deduce that

(A.1)\begin{equation} g'(z) ={-}\frac{\varphi(z)}{2\pi i}\int_{\mathbb{T}}\frac{\frac{1}{s}+V'(s)}{s-z}{\rm d}s, \qquad z \in \mathbb{C}\setminus \mathbb{T}, \end{equation}

where $\varphi (z) := +1$ if $|z|>1$ and $\varphi (z) := -1$ if $|z|<1$. Using (A.1) in (3.5), it follows that

(A.2)\begin{equation} -\frac{2\pi}{z}\psi(z) = \frac{1}{\pi i}{\int\hskip -1.05em -\,}_{\mathbb{T}} \frac{\frac{1}{s}+V'(s)}{s-z}{\rm d}s, \qquad z \in \mathbb{T}, \end{equation}
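As an aside, the principal value in (A.2) can be evaluated termwise using the following elementary identities, obtained from the Sokhotski–Plemelj formula applied to $s \mapsto s^{k-1}$ and $s \mapsto s^{-k}$: for integers $k \geq 1$ and $z \in \mathbb{T}$,

\[ {\int\hskip -1.05em -\,}_{\mathbb{T}} \frac{s^{k-1}}{s-z}{\rm d}s = \pi i\, z^{k-1}, \qquad {\int\hskip -1.05em -\,}_{\mathbb{T}} \frac{s^{-k}}{s-z}{\rm d}s ={-} \pi i\, z^{-k}. \]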

Recall from (1.3) that $V$ is analytic in the open annulus $U$ and real-valued on $\mathbb {T}$, and therefore

\[ V(z) = V_{0} + \sum_{k \geq 1} (V_{k}z^{k}+\overline{V_{k}}z^{{-}k}), \qquad V'(z) = \sum_{k \geq 1} (k V_{k}z^{k-1}- k\overline{V_{k}}z^{{-}k-1}), \quad z \in U. \]

(It is straightforward to check that the series $\sum _{k \geq 1} k V_{k}z^{k-1}$ and $\sum _{k \geq 1} k\overline {V_{k}}z^{-k-1}$ are convergent in $U$.) Direct computation gives

\[ {\int\hskip -1.05em -\,}_{\mathbb{T}} \frac{\frac{1}{s}+V'(s)}{s-z}{\rm d}s ={-} \frac{\pi i}{z} + \pi i \sum_{k \geq 1} (k V_{k}z^{k-1}+ k\overline{V_{k}}z^{{-}k-1}), \qquad z \in \mathbb{T}, \]

which, by (A.2), proves that $\psi$ is given by (1.6). Since the right-hand side of (1.6) is positive on $\mathbb {T}$ (by our assumption that $V$ is regular), we conclude that $\psi (e^{i\theta }){\rm d}\theta$ is a probability measure satisfying the Euler–Lagrange condition (1.23). Therefore, $\psi (e^{i\theta }){\rm d}\theta$ minimizes (1.5), i.e. $\psi (e^{i\theta }){\rm d}\theta$ is the equilibrium measure associated to $V$. Since the equilibrium measure is unique [Reference Saff and Totik42], this proves (1.7).
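As a quick numerical aside (not part of the proof), one can check for a concrete potential that the density determined by (A.2), which in Fourier form reads $\psi(e^{i\theta}) = \frac{1}{2\pi}\big(1 - 2\,\mathrm{Re}\sum_{k\geq 1}kV_{k}e^{ik\theta}\big)$, has total mass $1$ and is positive on $\mathbb{T}$. The potential $V(z)=\gamma(z+z^{-1})$ below is an illustrative assumption, not one used in the paper:

```python
import numpy as np

# Illustrative (assumed) potential: V(z) = gamma*(z + 1/z), so V_1 = gamma, V_k = 0 for k >= 2.
# From (A.2), psi(e^{i theta}) = (1/(2 pi)) * (1 - 2 Re sum_{k>=1} k V_k e^{i k theta}),
# which here reduces to (1 - 2*gamma*cos(theta)) / (2*pi).
gamma = 0.3  # |gamma| < 1/2 keeps psi > 0 on the circle, i.e. V is one-cut regular here

n_grid = 200000
theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
psi = (1.0 - 2.0 * gamma * np.cos(theta)) / (2.0 * np.pi)

# Riemann sum over a full period (spectrally accurate for smooth periodic integrands)
total_mass = psi.sum() * (2.0 * np.pi / n_grid)
print(total_mass, psi.min() > 0.0)  # total mass should be (numerically) 1
```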

Appendix B. Confluent hypergeometric model RH problem

  (a) $\Phi _{\mathrm {HG}} : \mathbb {C} \setminus \Sigma _{\mathrm {HG}} \rightarrow \mathbb {C}^{2 \times 2}$ is analytic, where $\Sigma _{\mathrm {HG}}$ is shown in figure 3.

  (b) For $z \in \Gamma _{k}$ (see figure 3), $k = 1,\ldots,8$, $\Phi _{\mathrm {HG}}$ obeys the jump relations

    (B.1)\begin{equation} \Phi_{\mathrm{HG},+}(z) = \Phi_{\mathrm{HG},-}(z)J_{k}, \end{equation}
    where
    \begin{align*} & J_{1} = \begin{pmatrix} 0 & e^{{-}i\pi \beta} \\ -e^{i\pi\beta} & 0 \end{pmatrix}, \quad J_{5} = \begin{pmatrix} 0 & e^{i\pi\beta} \\ -e^{{-}i\pi\beta} & 0 \end{pmatrix},\\ & J_{3} = J_{7} = \begin{pmatrix} e^{\frac{i\pi\alpha}{2}} & 0 \\ 0 & e^{-\frac{i\pi\alpha}{2}} \end{pmatrix}, \\ & J_{2} = \begin{pmatrix} 1 & 0 \\ e^{{-}i\pi\alpha}e^{i\pi\beta} & 1 \end{pmatrix}, \quad J_{4} = \begin{pmatrix} 1 & 0 \\ e^{i\pi\alpha}e^{{-}i\pi\beta} & 1 \end{pmatrix}, \quad J_{6} = \begin{pmatrix} 1 & 0 \\ e^{{-}i\pi\alpha}e^{{-}i\pi\beta} & 1 \end{pmatrix}, \\ & J_{8} = \begin{pmatrix} 1 & 0 \\ e^{i\pi\alpha}e^{i\pi\beta} & 1 \end{pmatrix}. \end{align*}
  (c) As $z \to \infty$, $z \notin \Sigma _{\mathrm {HG}}$, we have

    (B.2)\begin{equation} \Phi_{\mathrm{HG}}(z) = \left( I + \sum_{k=1}^{\infty} \frac{\Phi_{\mathrm{HG},k}}{z^{k}} \right) z^{-\beta\sigma_{3}}e^{-\frac{z}{2}\sigma_{3}}M^{{-}1}(z), \end{equation}
    where
    (B.3)\begin{align} \Phi_{\mathrm{HG},1} & = \left(\beta^{2}-\frac{\alpha^{2}}{4}\right) \begin{pmatrix} -1 & \tau(\alpha,\beta) \\ -\tau(\alpha,-\beta) & 1 \end{pmatrix}, \nonumber\\ \tau(\alpha,\beta) & ={-}\frac{\Gamma\left( \frac{\alpha}{2}-\beta \right)}{\Gamma\left( \frac{\alpha}{2}+\beta + 1 \right)}, \end{align}
    and
    (B.4)\begin{equation} M(z) = \left\{ \begin{array}{l l} e^{\frac{i\pi\alpha}{4} \sigma_{3}}e^{{-}i\pi\beta \sigma_{3}}, & \dfrac{\pi}{2} < \arg z < \pi, \\ e^{-\frac{i\pi\alpha}{4} \sigma_{3}}e^{{-}i\pi\beta \sigma_{3}}, & \pi < \arg z < \dfrac{3\pi}{2}, \\ e^{\frac{i\pi\alpha}{4}\sigma_{3}} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, & -\dfrac{\pi}{2} < \arg z < 0, \\ e^{-\frac{i\pi\alpha}{4}\sigma_{3}} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, & 0 < \arg z < \dfrac{\pi}{2}. \end{array} \right. \end{equation}
    In (B.2), $z^{-\beta }$ has a cut along $i\mathbb {R}^{-}$ so that $z^{-\beta } = |z|^{-\beta }e^{-\beta i \arg (z)}$ with $-\frac {\pi }{2} < \arg z < \frac {3\pi }{2}$. As $z \to 0$, we have
    (B.5)\begin{align} \Phi_{\mathrm{HG}}(z) & = \begin{cases} \begin{pmatrix} {\mathcal{O}}(1) & {\mathcal{O}}(\log z) \\ {\mathcal{O}}(1) & {\mathcal{O}}(\log z) \end{pmatrix}, & \mbox{if } z \in II \cup III \cup VI \cup VII, \\ \begin{pmatrix} {\mathcal{O}}(\log z) & {\mathcal{O}}(\log z) \\ {\mathcal{O}}(\log z) & {\mathcal{O}}(\log z) \end{pmatrix}, & \mbox{if } z \in I\cup IV \cup V \cup VIII, \end{cases} \quad \mbox{if } {\rm Re\,} \alpha = 0, \nonumber\\ \Phi_{\mathrm{HG}}(z) & = \begin{cases} \begin{pmatrix} {\mathcal{O}}(z^{\frac{\alpha}{2}}) & {\mathcal{O}}(z^{-\frac{\alpha}{2}}) \\ {\mathcal{O}}(z^{\frac{\alpha}{2}}) & {\mathcal{O}}(z^{-\frac{\alpha}{2}}) \end{pmatrix}, & \mbox{if } z \in II \cup III \cup VI \cup VII, \\ \begin{pmatrix} {\mathcal{O}}(z^{-\frac{\alpha}{2}}) & {\mathcal{O}}(z^{-\frac{\alpha}{2}}) \\ {\mathcal{O}}(z^{-\frac{\alpha}{2}}) & {\mathcal{O}}(z^{-\frac{\alpha}{2}}) \end{pmatrix}, & \mbox{if } z \in I\cup IV \cup V \cup VIII, \end{cases} \quad \mbox{if } {\rm Re\,} \alpha > 0, \nonumber\\ \Phi_{\mathrm{HG}}(z) & = \begin{pmatrix} {\mathcal{O}}(z^{\frac{\alpha}{2}}) & {\mathcal{O}}(z^{\frac{\alpha}{2}}) \\ {\mathcal{O}}(z^{\frac{\alpha}{2}}) & {\mathcal{O}}(z^{\frac{\alpha}{2}}) \end{pmatrix}, \quad \mbox{if } {\rm Re\,} \alpha < 0. \end{align}
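As a small numerical aside, the jump matrices in (B.1) can be sanity-checked: each $J_{k}$ has unit determinant, consistent with $\det \Phi_{\mathrm{HG}} \equiv 1$ for model RH problems of this type. The sketch below assumes the standard $2\times 2$ off-diagonal form of $J_{1}$ and $J_{5}$ and uses illustrative values of $\alpha,\beta$:

```python
import numpy as np

# Illustrative (assumed) parameter values of alpha, beta
alpha, beta = 0.4, 0.3
ea, eb = np.exp(1j * np.pi * alpha), np.exp(1j * np.pi * beta)
s = np.exp(1j * np.pi * alpha / 2)  # e^{i pi alpha / 2}

J = {
    1: np.array([[0, 1 / eb], [-eb, 0]]),       # J_1
    2: np.array([[1, 0], [eb / ea, 1]]),        # J_2
    3: np.array([[s, 0], [0, 1 / s]]),          # J_3 = J_7
    4: np.array([[1, 0], [ea / eb, 1]]),        # J_4
    5: np.array([[0, eb], [-1 / eb, 0]]),       # J_5
    6: np.array([[1, 0], [1 / (ea * eb), 1]]),  # J_6
    8: np.array([[1, 0], [ea * eb, 1]]),        # J_8
}

dets = {k: np.linalg.det(Jk) for k, Jk in J.items()}
print(all(abs(d - 1.0) < 1e-12 for d in dets.values()))  # all jump matrices are unimodular
```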

Figure 3. The jump contour $\Sigma _{\mathrm {HG}}$ for $\Phi _{\mathrm {HG}}(z)$. The ray $\Gamma _{k}$ is oriented from $0$ to $\infty$, and forms an angle with $\mathbb {R}^+$ which is a multiple of $\frac {\pi }{4}$.

This model RH problem was first introduced and solved explicitly in [Reference Its and Krasovsky37] for the case $\alpha = 0$, and then in [Reference Deift, Its and Krasovsky24, Reference Foulquié Moreno, Martinez-Finkelshtein and Sousa35] for the general case. The constant matrices $\Phi _{\mathrm {HG},k}$ depend analytically on $\alpha$ and $\beta$ (they can be found explicitly, see e.g. [Reference Foulquié Moreno, Martinez-Finkelshtein and Sousa35, eq. (56)]). Consider the matrix

(B.6)\begin{equation} \widehat{\Phi}_{\mathrm{HG}}(z) = \begin{pmatrix} \tfrac{\Gamma(1 + \frac{\alpha}{2}-\beta)}{\Gamma(1+\alpha)}G(\frac{\alpha}{2}+\beta, \alpha; z)e^{-\frac{i\pi\alpha}{2}} & -\tfrac{\Gamma(1 + \frac{\alpha}{2}-\beta)}{\Gamma(\frac{\alpha}{2}+\beta)}H(1+\frac{\alpha}{2}-\beta,\alpha;ze^{{-}i\pi }) \\ \tfrac{\Gamma(1 + \frac{\alpha}{2}+\beta)}{\Gamma(1+\alpha)}G(1+\frac{\alpha}{2}+\beta,\alpha;z)e^{-\frac{i\pi\alpha}{2}} & H(\tfrac{\alpha}{2}-\beta,\alpha;ze^{{-}i\pi }) \end{pmatrix} e^{-\frac{i\pi\alpha}{4}\sigma_{3}}, \end{equation}

where $G$ and $H$ are related to the Whittaker functions:

(B.7)\begin{equation} G(a,\alpha;z) = \frac{M_{\kappa,\mu}(z)}{\sqrt{z}}, \quad H(a,\alpha;z) = \frac{W_{\kappa,\mu}(z)}{\sqrt{z}}, \quad \mu = \frac{\alpha}{2}, \quad \kappa = \frac{1}{2}+\frac{\alpha}{2}-a. \end{equation}
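The parameter correspondence in (B.7) can be sanity-checked numerically. The sketch below (an aside, using illustrative parameter values) computes $M_{\kappa,\mu}$ from its Kummer-series representation $M_{\kappa,\mu}(z) = e^{-z/2} z^{\mu+\frac{1}{2}}\, {}_1F_1(\mu-\kappa+\tfrac{1}{2}; 1+2\mu; z)$ (a standard fact assumed here, DLMF 13.14.2; with (B.7) the ${}_1F_1$ parameters become $a$ and $1+\alpha$) and verifies that it solves Whittaker's differential equation:

```python
import math

def hyp1f1(a, b, z, terms=80):
    # Truncated Kummer series 1F1(a; b; z) = sum_k (a)_k / ((b)_k k!) z^k
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) / (b + k) * z / (k + 1)
        s += t
    return s

# Illustrative (assumed) parameters; from (B.7): mu = alpha/2, kappa = 1/2 + alpha/2 - a
a, alpha = 0.7, 0.4
mu, kappa = alpha / 2, 0.5 + alpha / 2 - a

def M_whittaker(z):
    # M_{kappa,mu}(z) = e^{-z/2} z^{mu+1/2} 1F1(mu - kappa + 1/2; 1 + 2 mu; z);
    # note mu - kappa + 1/2 = a, matching the G-functions in (B.6)-(B.7)
    return math.exp(-z / 2) * z ** (mu + 0.5) * hyp1f1(mu - kappa + 0.5, 1 + 2 * mu, z)

# Whittaker's equation: W'' + (-1/4 + kappa/z + (1/4 - mu^2)/z^2) W = 0
z0, h = 1.3, 1e-4
second_deriv = (M_whittaker(z0 + h) - 2 * M_whittaker(z0) + M_whittaker(z0 - h)) / h**2
residual = second_deriv + (-0.25 + kappa / z0 + (0.25 - mu**2) / z0**2) * M_whittaker(z0)
print(abs(residual) < 1e-5)  # small residual: the series solves Whittaker's equation
```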

The solution $\Phi _{\mathrm {HG}}$ is given by

(B.8)\begin{equation} \Phi_{\mathrm{HG}}(z) = \left\{ \begin{array}{l l} \widehat{\Phi}_{\mathrm{HG}}(z)J_{2}^{{-}1}, & \mbox{ for } z \in I, \\ \widehat{\Phi}_{\mathrm{HG}}(z), & \mbox{ for } z \in II, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{3}, & \mbox{ for } z \in III, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{3}J_{4}^{{-}1}, & \mbox{ for } z \in IV, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{2}^{{-}1}J_{1}^{{-}1}J_{8}^{{-}1}J_{7}^{{-}1}J_{6}, & \mbox{ for } z \in V, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{2}^{{-}1}J_{1}^{{-}1}J_{8}^{{-}1}J_{7}^{{-}1}, & \mbox{ for } z \in VI, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{2}^{{-}1}J_{1}^{{-}1}J_{8}^{{-}1}, & \mbox{ for } z \in VII, \\ \widehat{\Phi}_{\mathrm{HG}}(z)J_{2}^{{-}1}J_{1}^{{-}1}, & \mbox{ for } z \in VIII. \\ \end{array} \right. \end{equation}

Footnotes

1 We are very grateful to a referee for pointing this out.

References

Arguin, L-P., Belius, D. and Bourgade, P. Maximum of the characteristic polynomial of random unitary matrices. Comm. Math. Phys. 349 (2017), 703–751.
Baik, J. Circular unitary ensemble with highly oscillatory potential, e-print arXiv:1306.0216.
Baik, J., Deift, P. and Johansson, K. On the distribution of the length of the longest increasing subsequence of random permutations. J. Amer. Math. Soc. 12 (1999), 1119–1178.
Basor, E. Asymptotic formulas for Toeplitz determinants. Trans. Amer. Math. Soc. 239 (1978), 33–65.
Basor, E. and Morrison, K. E. The Fisher–Hartwig conjecture and Toeplitz eigenvalues. Linear Algebra Appl. 202 (1994), 129–142.
Basor, E. L. and Tracy, C. A. The Fisher–Hartwig conjecture and generalizations, current problems in statistical mechanics. Phys. A 177 (1991), 167–173.
Basor, E. A brief history of the strong Szegő limit theorem, Oper. Theory Adv. Appl. Vol. 222 (Birkhäuser/Springer, Basel, 2012).
Berestycki, N., Webb, C. and Wong, M. D. Random Hermitian matrices and Gaussian multiplicative chaos. Probab. Theory Related Fields 172 (2018), 103–189.
Billingsley, P. Probability and measure. Anniversary edition. Wiley Series in Probability and Statistics (John Wiley and Sons, Inc., Hoboken, NJ, 2012).
Böttcher, A. The Onsager formula, the Fisher–Hartwig conjecture, and their influence on research into Toeplitz operators. J. Stat. Phys. 78 (1995), 575–584.
Böttcher, A. and Silbermann, B. Toeplitz operators and determinants generated by symbols with one Fisher–Hartwig singularity. Math. Nachr. 127 (1986), 95–123.
Bourgade, P. and Falconet, H. Liouville quantum gravity from random matrix dynamics, e-print arXiv:2206.03029.
Byun, S.-S. and Seo, S.-M. Random normal matrices in the almost-circular regime, e-print arXiv:2112.11353, to appear in Bernoulli.
Charlier, C. Asymptotics of Hankel determinants with a one-cut regular potential and Fisher–Hartwig singularities. Int. Math. Res. Not. 2018 (2018), 62.
Charlier, C. Asymptotics of Muttalib–Borodin determinants with Fisher–Hartwig singularities. Selecta Math. 28 (2022), 50.
Charlier, C. Asymptotics of determinants with a rotation-invariant weight and discontinuities along circles. Adv. Math. 408 (2022), 108600.
Charlier, C. and Claeys, T. Thinning and conditioning of the Circular Unitary Ensemble. Random Matrices Theory Appl. 6 (2017), 51.
Charlier, C., Fahs, B., Webb, C. and Wong, M. D. Asymptotics of Hankel determinants with a multi-cut regular potential and Fisher–Hartwig singularities, e-print arXiv:2111.08395.
Charlier, C. and Gharakhloo, R. Asymptotics of Hankel determinants with a Laguerre-type or Jacobi-type potential and Fisher–Hartwig singularities. Adv. Math. 383 (2021), 107672.
Claeys, T., Fahs, B., Lambert, G. and Webb, C. How much can the eigenvalues of a random Hermitian matrix fluctuate? Duke Math. J. 170 (2021), 2085–2235.
Claeys, T. and Krasovsky, I. Toeplitz determinants with merging singularities. Duke Math. J. 164 (2015), 2897–2987.
Costin, A. and Lebowitz, J. L. Gaussian fluctuations in random matrices. Phys. Rev. Lett. 75 (1995), 69–72.
Dai, D., Xu, S.-X. and Zhang, L. On the deformed Pearcey determinant. Adv. Math. 400 (2022), 108291.
Deift, P., Its, A. and Krasovsky, I. Asymptotics of Toeplitz, Hankel, and Toeplitz+Hankel determinants with Fisher–Hartwig singularities. Ann. Math. 174 (2011), 1243–1299.
Deift, P., Its, A. and Krasovsky, I. Toeplitz matrices and Toeplitz determinants under the impetus of the Ising model: some history and some recent results. Comm. Pure Appl. Math. 66 (2013), 1360–1438.
Deift, P., Its, A. and Krasovsky, I. On the asymptotics of a Toeplitz determinant with singularities, MSRI Publications, Vol. 65 (Cambridge University Press, 2014).
Deift, P., Kriecherbauer, T., McLaughlin, K. T-R., Venakides, S. and Zhou, X. Strong asymptotics of orthogonal polynomials with respect to exponential weights. Comm. Pure Appl. Math. 52 (1999), 1491–1552.
Deift, P. and Zhou, X. A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation. Ann. Math. 137 (1993), 295–368.
Ehrhardt, T. A status report on the asymptotic behavior of Toeplitz determinants with Fisher–Hartwig singularities. Oper. Theory Adv. Appl. 124 (2001), 217–241.
Erdős, L., Yau, H.-T. and Yin, J. Rigidity of eigenvalues of generalized Wigner matrices. Adv. Math. 229 (2012), 1435–1515.
Fahs, B. Uniform asymptotics of Toeplitz determinants with Fisher–Hartwig singularities. Comm. Math. Phys. 383 (2021), 685–730.
Fisher, M. E. and Hartwig, R. E. Toeplitz determinants: some applications, theorems, and conjectures. Advan. Chem. Phys. 15 (1968), 333–353.
Fokas, A. S., Its, A. R. and Kitaev, A. V. The isomonodromy approach to matrix models in 2D quantum gravity. Comm. Math. Phys. 147 (1992), 395–430.
Forrester, P. J. Charged rods in a periodic background: a solvable model. J. Statist. Phys. 42 (1986), 871–894.
Foulquié Moreno, A., Martinez-Finkelshtein, A. and Sousa, V. L. On a conjecture of A. Magnus concerning the asymptotic behavior of the recurrence coefficients of the generalized Jacobi polynomials. J. Approx. Theory 162 (2010), 807–831.
Gustavsson, J. Gaussian fluctuations of eigenvalues in the GUE. Ann. Inst. H. Poincaré Probab. Statist. 41 (2005), 151–178.
Its, A. and Krasovsky, I. Hankel determinant and orthogonal polynomials for the Gaussian weight with a jump. Contemporary Mathematics 458 (2008), 215–248.
Johansson, K. On Szegő's asymptotic formula for Toeplitz determinants and generalizations. Bull. Sci. Math. 112 (1988), 257–304.
Kaufman, B. and Onsager, L. Crystal statistics, III. Short-range order in a binary Ising lattice. Phys. Rev. 76 (1949), 1244–1252.
Krasovsky, I. Correlations of the characteristic polynomials in the Gaussian unitary ensemble or a singular Hankel determinant. Duke Math. J. 139 (2007), 581–619.
Lenard, A. Momentum distribution in the ground state of the one-dimensional system of impenetrable Bosons. J. Math. Phys. 5 (1964), 930–943.
Saff, E. B. and Totik, V. Logarithmic Potentials with External Fields (Berlin: Springer-Verlag, 1997).
Smith, N. R., Le Doussal, P., Majumdar, S. N. and Schehr, G. Counting statistics for non-interacting fermions in a $d$-dimensional potential. Phys. Rev. E 103 (2021), L030105.
Soshnikov, A. Gaussian fluctuation for the number of particles in Airy, Bessel, sine, and other determinantal random point fields. J. Statist. Phys. 100 (2000), 491–522.
Szegő, G. Ein Grenzwertsatz über die Toeplitzschen Determinanten einer reellen positiven Funktion. Math. Ann. 76 (1915), 490–503.
Szegő, G. On certain Hermitian forms associated with the Fourier series of a positive function. Comm. Sém. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] 1952 (1952), 228–238.
Widom, H. Toeplitz determinants with singular generating functions. Amer. J. Math. 95 (1973), 333–383.
Figure 1. The jump contour for $S$ with $m=2$.

Figure 2. The four quadrants $Q_{\pm,k}^{R}$, $Q_{\pm,k}^{L}$ near $t_k$ and their images under the map $f_{t_k}$.