1 Introduction and statement of results
Let
(1) $$ \begin{align} \psi_2(n) := \sum_{m+m'=n}\Lambda(m)\Lambda(m'), \end{align} $$
where $\Lambda $ is the von Mangoldt function, defined by $\Lambda (n)=\log p$ if $n=p^m$ , p a prime and $m\ge 1$ , and $\Lambda (n)= 0$ otherwise. Thus, $\psi _2(n)$ counts Goldbach representations of n as sums of two primes or prime powers, weighted with logarithms so that the primes have average density 1 on the integers. In 1991, Fujii [F1]–[F3] proved the following theorem concerning the average number of Goldbach representations.
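As a concrete illustration of this weighted count, the convolution $\sum_{m+m'=n}\Lambda(m)\Lambda(m')$ can be evaluated directly for small $n$; for instance, $n=5$ has exactly the two ordered representations $2+3$ and $3+2$. The helper names below are ours, not the paper's:

```python
from math import log

def mangoldt(n):
    # von Mangoldt function: log p if n = p^m (p prime, m >= 1), else 0
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:          # p is the smallest prime factor of n
            m = n
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
    return 0.0

def psi2(n):
    # weighted count of ordered Goldbach representations n = m + m'
    return sum(mangoldt(m) * mangoldt(n - m) for m in range(1, n))

print(psi2(5))  # representations 2+3 and 3+2 give 2*log(2)*log(3)
```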
Theorem (Fujii)
Assuming the Riemann Hypothesis, we have
(2) $$ \begin{align} \sum_{n\le N}\psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho}\frac{N^{\rho+1}}{\rho(\rho+1)} + O\big((N\log N)^{4/3}\big), \end{align} $$
where the sum is over the complex zeros $\rho =\beta +i\gamma $ of the Riemann zeta function $\zeta (s)$ , and the Riemann Hypothesis is the statement that $\beta = 1/2$ for every zero.
Thus, the average number of Goldbach representations is connected to the zeros of the Riemann zeta function, and as we will see later, is also closely connected to the error in the prime number theorem. The sum over zeros appears in the asymptotic formula even without the assumption of the Riemann Hypothesis, but the Riemann Hypothesis is needed to estimate the error term. With regard to the sum over zeros, it is useful to keep in mind the Riemann–von Mangoldt formula
(3) $$ \begin{align} N(T) := \#\{\rho=\beta+i\gamma : \zeta(\rho)=0,\ 0<\gamma\le T\} = \frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T) \end{align} $$
(see [I, Th. 25] or [T, Th. 9.4]). Thus, $N(T) \sim \frac {T}{2\pi }\log T$ , and we also obtain
(4) $$ \begin{align} N(T+1) - N(T) \ll \log(T+2). \end{align} $$
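The main term of the Riemann–von Mangoldt formula already predicts the zero counts very accurately; in the sharper form of the formula the constant term is $7/8$. As a quick numerical sanity check (a sketch in our own notation), there are exactly 29 zeros with $0<\gamma\le 100$ and 649 with $0<\gamma\le 1000$:

```python
from math import log, pi

def N_main(T):
    # main terms of the Riemann-von Mangoldt formula:
    # N(T) = (T/2pi) log(T/2pi) - T/2pi + 7/8 + O(log T)
    return T / (2 * pi) * log(T / (2 * pi)) - T / (2 * pi) + 7 / 8

print(round(N_main(100)))   # 29, the number of zeros up to height 100
print(round(N_main(1000)))  # 649, the number of zeros up to height 1000
```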
This estimate is very useful, and it shows that the sum over zeros in (2) is absolutely convergent. Hence, the Riemann Hypothesis implies that
$$\sum_{n\le N}\psi_2(n) = \frac{N^2}{2} + O(N^{3/2}).$$
This was shown by Fujii in [F1]. Unconditionally, Bhowmik and Ruzsa [BR] showed that the estimate
$$\sum_{n\le N}\psi_2(n) = \frac{N^2}{2} + O\big(N^{2-\delta}\big)$$
implies that for all complex zeros $\rho =\beta +i\gamma $ of the Riemann zeta function, we have $\beta <1-\delta /6$ . We are interested, however, in investigating the error in (2), that is, the error estimate not including the sum over zeros.
It was conjectured by Egami and Matsumoto [EM] in 2007 that the error term in (2) can be improved to $O(N^{1+\varepsilon })$ for any $\varepsilon>0$ . That error bound was finally achieved, assuming the Riemann Hypothesis, by Bhowmik and Schlage-Puchta [BS], who obtained $O(N\log ^5\!N)$ , and this was refined by Languasco and Zaccagnini [LZ1], who obtained the following result.
Theorem (Languasco–Zaccagnini)
Assuming the Riemann Hypothesis, we have
(5) $$ \begin{align} \sum_{n\le N}\psi_2(n) = \frac{N^2}{2} - 2\sum_{\rho}\frac{N^{\rho+1}}{\rho(\rho+1)} + O(N\log^3\!N). \end{align} $$
A different proof of this theorem was given in [GY] along the same lines as [BS]. It was proved in [BS] that unconditionally,
(6) $$ \begin{align} \sum_{n\le N}\psi_2(n) - \frac{N^2}{2} + 2\sum_{\rho}\frac{N^{\rho+1}}{\rho(\rho+1)} = \Omega(N\log\log N), \end{align} $$
and therefore the error term in (5) is close to best possible.
In this paper, we combine and, we hope, simplify the methods of [BS] and [LZ1]. Our method is based on an exact form of Fujii’s formula (2) in which the error term is given explicitly. We state this as Theorem 1. We will relate this error term to the distribution of primes, where it can be estimated using the variance of primes in short intervals.
We follow the notation and methods of Montgomery and Vaughan [MV1] fairly closely in what follows. Sums of the form $\sum _{\rho }$ or $\sum _{\gamma }$ run over nontrivial zeros $\rho =\beta +i\gamma $ of the Riemann zeta function, and all other sums run over the positive integers unless specified otherwise, so that $\sum _n = \sum _{n\ge 1}$ and $\sum _{n\le N} = \sum _{1\le n\le N}$ . We use the power series generating function
$$\Psi(z) := \sum_{n}\Lambda(n)z^n,$$
which converges for $|z|<1$ , and obtain the generating function for $\psi _2(n)$ in (1) directly since
$$\Psi(z)^2 = \sum_{n}\Big(\sum_{m+m'=n}\Lambda(m)\Lambda(m')\Big)z^n = \sum_{n}\psi_2(n)z^n.$$
Our goal is to use properties of $\Psi (z)$ to study averages of the coefficients $\psi _2(n)$ of $\Psi (z)^2$ .
Our version of Fujii’s theorem is as follows. We take
where $0\le r < 1$ , and define
We also, accordingly, write $\Psi (z) = \sum _{n} \Lambda (n) r^ne(\alpha n)$ as $\Psi (r,\alpha )$ .
Theorem 1. For $N\ge 2$ , we have
where, for $0<r<1$ ,
In particular, we have
The prime number theorem is equivalent to
(14) $$ \begin{align} \psi(x) := \sum_{n\le x}\Lambda(n) \sim x. \end{align} $$
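In concrete terms, with $\psi(x)=\sum_{n\le x}\Lambda(n)$, the average value $1$ of $\Lambda$ is already clearly visible at modest heights. A quick numerical check (the sieve helper is ours):

```python
from math import log

def psi(x):
    # Chebyshev psi(x) = sum of Lambda(n), n <= x, via a smallest-prime-factor sieve
    spf = list(range(x + 1))
    for p in range(2, int(x ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, x + 1, p):
                if spf[q] == q:
                    spf[q] = p
    total = 0.0
    for n in range(2, x + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:          # n is a prime power p^k, so Lambda(n) = log p
            total += log(p)
    return total

x = 10 ** 5
print(psi(x) / x)  # close to 1
```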
It was shown in [BS] that the error term $E(N)$ in Theorem 1 can be estimated using the functions
(15) $$ \begin{align} H(x) := \int_0^x \big(\psi(t)-t\big)^2\,dt \qquad \text{and} \qquad J(x,h) := \int_0^x \big(\psi(t+h)-\psi(t)-h\big)^2\,dt. \end{align} $$
The integral $H(x)$ was studied by Cramér [C] in 1921; he proved, assuming the Riemann Hypothesis, that
(16) $$ \begin{align} H(x) \ll x^2. \end{align} $$
Selberg [S] was the first to study variances like $J(x,h)$ and obtain results on primes from this type of variance, both unconditionally and on the Riemann Hypothesis. The estimate we need is a refinement of Selberg’s result, which was first obtained by Saffari and Vaughan [SV]; they proved, assuming the Riemann Hypothesis, that for $1\le h \le x$ ,
(17) $$ \begin{align} J(x,h) \ll h x \log^2\frac{2x}{h}. \end{align} $$
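Since $\psi$ is a step function with jumps only at integers, the variance $J(x,h)=\int_0^x\big(\psi(t+h)-\psi(t)-h\big)^2\,dt$ can be computed exactly for integer $x$ and $h$: the integrand is constant on each interval $(k,k+1)$. A sketch with our own helper names (the generous constant in the final comparison is only an empirical sanity check, not the theorem):

```python
from math import log

def lambda_table(n):
    # Lambda(k) for k = 0..n via a smallest-prime-factor sieve
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, n + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (n + 1)
    for k in range(2, n + 1):
        p, m = spf[k], k
        while m % p == 0:
            m //= p
        if m == 1:
            lam[k] = log(p)
    return lam

def J(x, h):
    # exact value of the integral: for t in (k, k+1),
    # psi(t+h) - psi(t) = Lambda(k+1) + ... + Lambda(k+h)
    lam = lambda_table(x + h)
    window = sum(lam[1:h + 1])
    total = 0.0
    for k in range(x):
        total += (window - h) ** 2
        window += lam[k + h + 1] - lam[k + 1]
    return total

x, h = 5000, 50
print(J(x, h), 10 * h * x * log(x) ** 2)  # the variance is well below h*x*log^2(x)
```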
By a standard argument using Gallagher’s lemma [M1, Lem. 1.9], we obtain the following unconditional bound for $E(N)$ .
Theorem 2. Let $\log _2\!N$ denote the logarithm base 2 of N. Then, for $N\ge 2$ , we have
where, for $1\le h \le N$ ,
Not only can Theorem 2 be applied to the error term in Fujii’s theorem, but it can also be used in bounding the error in the prime number theorem.
Theorem 3. For $N\ge 2$ and $0<r<1$ , we have
and
The formula (20) has already been used for a related problem concerning averages of Goldbach representations [BR, (7)], and similar formulas are well known [Kou, (5.20)].
For our first application, we assume the Riemann Hypothesis and use (16) and (17) to recover (5), and also obtain the classical Riemann Hypothesis error term in the prime number theorem due to von Koch in 1901 [Koc].
Theorem 4. Assuming the Riemann Hypothesis, then $\mathcal {E}(N) \ll N\log ^3\!N$ and $\psi (N) = N +O(N^{1/2}\log ^2\!N)$ .
For our second application, we will strengthen the results in Theorem 4 by assuming conjectures on bounds for $J(x,h)$ , which we will prove to be consequences of conjectured bounds related to Montgomery’s pair correlation conjecture. Montgomery introduced the function, for $x>0$ and $T\ge 3$ ,
(22) $$ \begin{align} F(x,T) := \sum_{0<\gamma,\gamma'\le T} x^{i(\gamma-\gamma')}w(\gamma-\gamma'), \qquad w(u) = \frac{4}{4+u^2}, \end{align} $$
where the sum is over the imaginary parts $\gamma $ and $\gamma '$ of zeta-function zeros. By [GM, Lem. 8], $F(x,T)$ is real, $F(x,T)\ge 0$ , $F(x,T)=F(1/x,T)$ , and assuming the Riemann Hypothesis,
uniformly for $1\le x\le T$ . For larger x, Montgomery [M2] conjectured that, for every fixed $A>1$ ,
(24) $$ \begin{align} F(x,T) \sim \frac{T}{2\pi}\log T \end{align} $$
holds uniformly for $T\le x \le T^A$ . This conjecture implies the pair correlation conjecture for zeros of the zeta function [Go]. For all $x>0$ , we have the trivial (and unconditional) estimate
(25) $$ \begin{align} F(x,T) \ll T\log^2 T, \end{align} $$
which follows from (4).
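For a hands-on feel of the double sum $\sum_{0<\gamma,\gamma'\le T}x^{i(\gamma-\gamma')}w(\gamma-\gamma')$ with $w(u)=4/(4+u^2)$, one can verify the three properties just quoted numerically on a small truncation. The sketch below uses only the first five zero ordinates (standard tabulated values); since $w$ is even and positive-definite, the sum is real, nonnegative, and invariant under $x\mapsto 1/x$:

```python
# first five ordinates gamma of nontrivial zeta zeros (standard tabulated values)
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def w(u):
    return 4.0 / (4.0 + u * u)

def F(x, T):
    # truncated double sum over pairs of ordinates up to height T
    g = [y for y in ZEROS if y <= T]
    s = 0j
    for a in g:
        for b in g:
            s += complex(x) ** complex(0.0, a - b) * w(a - b)
    return s

val = F(2.0, 35.0)
print(val.real, abs(val.imag))  # real and nonnegative; imaginary part ~ 0
```

The identity $F(x,T)=F(1/x,T)$ holds exactly here because the terms for the pairs $(\gamma,\gamma')$ and $(\gamma',\gamma)$ exchange under $x\mapsto 1/x$.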
The main result in [GM] is the following theorem connecting $F(x,T)$ and $J(x,h)$ .
Theorem (Goldston–Montgomery)
Assume the Riemann Hypothesis. Then the $F(x,T)$ conjecture (24) is equivalent to the conjecture that for every fixed $\epsilon>0$ ,
(26) $$ \begin{align} J(x,h) \sim h x \log\frac{x}{h} \end{align} $$
holds uniformly for $1\le h\le x^{1-\epsilon } $ .
Adapting the proof of this last theorem, we obtain the following results.
Theorem 5. Assume the Riemann Hypothesis.
(A) If for any $A>1$ and $T\ge 2$ we have
(27) $$ \begin{align} F(x,T) = o( T\log^2 T) \qquad \text{uniformly for} \quad T\le x \le T^{A}, \end{align} $$
then this implies, for $x\ge 2$ ,
(28) $$ \begin{align} J(x,h) = o(h x \log^2\!x),\qquad \text{for} \quad 1\le h\le x, \end{align} $$
and this bound implies $\mathcal {E}(N) = o(N\log ^3\!N)$ and $\psi (N) = N +o(N^{1/2}\log ^2\!N)$ .

(B) If for $T\ge 2$
(29) $$ \begin{align} F(x,T) \ll T\log x \qquad \text{holds uniformly for} \quad T\le x \le T^{\log T}, \end{align} $$
then we have, for $x\ge 2$ ,
(30) $$ \begin{align} J(x,h) \ll h x \log x,\qquad \text{for} \quad 1\le h\le x, \end{align} $$
and this bound implies $\mathcal {E}(N) \ll N\log ^2\!N$ and $\psi (N) = N +O(N^{1/2}(\log N)^{3/2})$ .
Montgomery’s $F(x,T)$ Conjecture (24) immediately implies (27), so we are using a weaker conjecture on $F(x,T)$ in Theorem 5(A). For Theorem 5(B), Montgomery’s $F(x,T)$ Conjecture (24) only implies (29) for $T\le x \le T^A$ , and this is a new conjecture for the range $T^A\le x \le T^{\log T}$ . Theorems 2 and 5 show that with either assumption, we obtain an improvement on the bound $E(N)\ll N\log ^3\!N$ in (5), which we state as the following corollary.
Corollary 1. Assume the Riemann Hypothesis. If either (27) or (28) is true, then we have
$$E(N) = o(N\log^3\!N),$$
whereas if either (29) or (30) is true, we have
$$E(N) \ll N\log^2\!N.$$
We will prove a more general form of the implication that a bound on $F(x,T)$ implies a corresponding bound on $J(x,h)$ , but Theorem 5 contains the most interesting special cases. We make crucial use of the Riemann Hypothesis bound (17) for h very close to x. The conjectures (29) and (30) are weaker than what the Goldston–Montgomery theorem suggests is true, but they suffice for Theorem 5, and the use of stronger bounds does not improve the results on $\mathcal {E}(N)$ . The result on the prime number theorem in Theorem 5(A) is due to Heath-Brown [H, Th. 1]. The bound in (29) is trivially true by (25) if $x\ge T^{\log T}$ .
For our third application, we consider the situation where there can be zeros of the Riemann zeta function off the half-line, but where these zeros satisfy the bound $1/2<\Theta < 1$ , where
$$\Theta := \sup_{\zeta(\beta+i\gamma)=0}\beta.$$
The following is a special case of a more general result recently obtained in [BHM+].
Theorem 6 (Bhowmik, Halupczok, Matsumoto, and Suzuki)
Assume $1/2<\Theta < 1$ . For $N\ge 2$ and $1\le h\le N$ , we have
Weaker results of this type were first obtained by Granville [Gra1], [Gra2]. To prove this theorem, we only need to adjust a few details of the proof in [BHM+] to match our earlier theorems. With more work, the power of $\log N$ can be improved, but that makes no difference in applications. We will not deal with the situation when $\Theta =1$ , which depends on the width of the zero-free region to the left of the line $\sigma =1$ and the unconditional error in the prime number theorem. The converse question of using a bound for $E(N)$ to obtain a zero-free region has been solved in an interesting recent paper of Bhowmik and Ruzsa [BR]. As a corollary, we see that the terms in Fujii’s formula down to size $\beta \ge 1/2$ are main terms assuming the conjecture $\Theta < 3/4$ instead of the Riemann Hypothesis.
Corollary 2. Suppose $1/2 \le \Theta < 3/4$ . Then
$$\sum_{n\le N}\psi_2(n) = \frac{N^2}{2} - 2\sum_{\substack{\rho\\ \beta\ge 1/2}}\frac{N^{\rho+1}}{\rho(\rho+1)} + o(N^{3/2}).$$
The term $E(N)$ in Fujii’s formula will give main terms $\gg N^{3/2}$ from zeros with $\beta \ge 3/4$ . For weighted versions of Fujii’s theorem, there are formulas where the error term corresponding to $E(N)$ is explicitly given in terms of sums over zeros (see [BKP], [LZ2]). In principle, one could use an explicit formula for $\Psi (z)$ in Theorem 1 to obtain a complicated formula for $E(N)$ in terms of zeros, but we have not pursued this.
We conclude with a slight variation on Fujii’s formula. We have been counting Goldbach representations using the von Mangoldt function $\Lambda (n)$ , so that we include prime powers and also representations of odd integers. Doing this leads to clean formulas such as Fujii’s formula, both because of the weighting by $\log p$ and because the complicated lower-order terms coming from prime powers are all absorbed into the sum over $\rho $ in Fujii’s formula. The reason for this can be seen in Landau’s formula [L], which states that for fixed x and $T\to \infty $ ,
$$\sum_{0<\gamma\le T} x^{\rho} = -\frac{T}{2\pi}\Lambda(x) + O(\log T),$$
where we define $\Lambda (x)$ to be zero for real noninteger x. In the following easily proven theorem, we remove the Goldbach representations counted by the von Mangoldt function for odd numbers.
Theorem 7. We have
and therefore, by (13),
The interesting aspect of (37) is that a new main term $-2N\log N$ has been introduced into Fujii’s formula; this term comes from disallowing representations where the von Mangoldt function is evaluated at the prime 2 and its powers. If we denote the error term in (37) by $E_{\mathrm {even}}(N) := - 2N\log N+E(N)$ , then we see that at least one of $E(N)$ and $E_{\mathrm {even}}(N)$ is $\Omega (N\log N)$ . The simplest resolution would be that the error term in Fujii’s formula is smaller than any term generated by altering the support of the von Mangoldt function, in which case $E_{\mathrm {even}}(N) = - 2N\log N +o(N\log N)$ and $E(N) = o(N\log N)$ . Whether this is true or not seems to be a very difficult question.
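The size of the removed terms can be checked empirically: summing $\Lambda(m)\Lambda(m')$ over pairs with $m+m'\le N$ in which at least one coordinate is a power of $2$ does produce a quantity of order $N\log N$. A rough brute-force check (helper names and the normalization print are ours):

```python
from math import log

def lambda_table(n):
    # Lambda(k) for k = 0..n via a smallest-prime-factor sieve
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, n + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (n + 1)
    for k in range(2, n + 1):
        p, m = spf[k], k
        while m % p == 0:
            m //= p
        if m == 1:
            lam[k] = log(p)
    return lam

def removed(N):
    # sum over n <= N of psi_2(n) minus its restriction to odd prime powers:
    # exactly the representations that use a power of 2 (the only even prime powers)
    lam = lambda_table(N)
    s = 0.0
    for m in range(2, N):
        for mp in range(2, N - m + 1):
            if m % 2 == 0 or mp % 2 == 0:
                s += lam[m] * lam[mp]
    return s

N = 400
print(removed(N) / (2 * N * log(N)))  # of order 1
```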
2 Proofs of Theorems 1–4
Proof of Theorem 1
We first obtain a weighted version of Fujii’s theorem. By (14), we see that $\Lambda (n)$ is on average 1 over long intervals, and therefore, as a first-order average approximation to $\Psi (z)$ , we use
$$I(z) := \sum_{n} z^n = \frac{z}{1-z}$$
for $|z|<1$ . Observe that, on letting $n=m+m'$ in the calculations below,
and therefore
and
Thus,
and we conclude that
which is a weighted version of Theorem 1.
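For the reader's convenience, the elementary computation behind the main term (our reconstruction of the standard calculation, with $I(z)=\sum_{n\ge 1}z^n$) is:

```latex
I(z)^2 \;=\; \sum_{m\ge 1}\sum_{m'\ge 1} z^{m+m'}
       \;=\; \sum_{n\ge 2}\Big(\sum_{\substack{m+m'=n\\ m,\,m'\ge 1}} 1\Big) z^n
       \;=\; \sum_{n\ge 1}(n-1)\,z^n ,
```

since for each $n\ge 2$ there are exactly $n-1$ ordered pairs $(m,n-m)$; after summation over $n\le N$, the coefficients $n-1$ produce the main term of size $N^2/2$.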
To remove the weighting, we take $ z=re(\alpha )$ with $0 < r < 1$ , and recalling (10), we have
Thus, using (39) with $z=re(\alpha )$ and $0 < r < 1$ in (40), we obtain
Utilizing [I, Chap. 2, (13)], $\psi _1(x) := \int _0^x \psi (t)\, dt$ , and $\psi _1(N) = \sum _{n\le N}\psi (n-1)$ , we have
To complete the proof, we use the explicit formula, for $x\ge 1$ ,
(see [I, Th. 28] or [MV2, §12.1.1, Exer. 6]). Substituting (42) into (41), we obtain Theorem 1.
Proof of Theorem 2
Letting
(43) $$ \begin{align} \Lambda_0(n) := \Lambda(n) - 1, \end{align} $$
we have
We will now choose r in terms of N by setting
(44) $$ \begin{align} r = e^{-1/N}, \end{align} $$
and with this choice, it is easy to see that for $|\alpha | \le 1/2$ , we have
Therefore, we obtain
Letting
then $ \frac 12 h_N(\alpha ) \le \min (N, \frac 1{\alpha }) \le h_N(\alpha )$ in the range $0\le \alpha \le 1/2$ . Now, if we put
then $h_N(\alpha )\le H_N(\alpha ) \le 2 h_N(\alpha )$ and therefore $ \frac 14 H_N(\alpha ) \le \min (N, \frac 1{\alpha }) \le H_N(\alpha ) $ in the range $0\le \alpha \le 1/2$ . We conclude
Thus,
Gallagher’s lemma gives the bound (see [M1, Lem. 1.9] or [GY, §4])
provided $\sum _n|a_n| <\infty $ , and therefore we have
We conclude that
where
To bound $I_1(N,h)$ and $I_2(N,h)$ , we will use partial summation on the integrands with the counting function
Recalling $H(x)$ and $J(x,h)$ from (15), and making use here and later of the inequality $(a+b)^2\le 2(a^2+b^2)$ , we have, for $x\ge 1$ ,
and
Next, by partial summation,
and therefore, for $1\le h \le N$ , we have, using Cauchy–Schwarz and (51),
For $I_2(N,h)$ , we replace x by $x+h$ in (53) and take their difference to obtain
Thus, we have
Applying the Cauchy–Schwarz inequality to the double integral above and changing the order of integration, we bound this term by
and therefore, for $1\le h \le N$ ,
Now, for any integrable function $f(x)\ge 0$ , we have
and therefore
Thus, by (48), (54), and (56), we conclude that, for $1\le h\le N$ ,
Proof of Theorem 3
From (43), we have $ \Psi (z) - I(z) = \sum _n \Lambda _0(n)z^n$ , and (20) follows from (40). To obtain (21), applying the Cauchy–Schwarz inequality to (20) and using (45), we have
Proof of Theorem 4
Assuming the Riemann Hypothesis, from (16) and (17) we have $H(x)\ll x^2$ and $J(x,h) \ll hx\log ^2\!x$ for $1\le h \le x$ . Therefore, by (19), we have $\mathcal {W}(N,h)\ll \frac {N}{h}\log ^2\!N $ for $1\le h\le N$ , and hence $\mathcal {E}(N) \ll N\log ^3\!N$ , which is the first bound in Theorem 4; Theorem 3 then gives the second bound.
3 Theorem 8 and the proof of Theorem 5
We will prove the following more general form of Theorem 5. In addition to $J(x,h)$ from (15), we also make use of the related variance
(58) $$ \begin{align} \mathcal{J}(x,\delta) := \int_0^x \big(\psi(t+\delta t)-\psi(t)-\delta t\big)^2\,dt. \end{align} $$
The variables h in $J(x,h)$ and $\delta $ in $\mathcal {J}(x,\delta )$ are roughly related by $h\asymp \delta x$ . Saffari and Vaughan [SV] proved, in addition to (17), that, assuming the Riemann Hypothesis, we have, for $0< \delta \le 1$ ,
(59) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2 \log^2(2/\delta). \end{align} $$
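The relation $h\asymp\delta x$ can be seen numerically: approximating $\mathcal{J}(x,\delta)$ on a fine grid and computing $J(x,h)$ exactly, the two variances come out of comparable size when $h=\delta x$. A rough empirical sketch (helper names, grid resolution, and the tolerance are all ours):

```python
from math import log

def lambda_table(n):
    spf = list(range(n + 1))
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:
            for q in range(p * p, n + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (n + 1)
    for k in range(2, n + 1):
        p, m = spf[k], k
        while m % p == 0:
            m //= p
        if m == 1:
            lam[k] = log(p)
    return lam

def make_psi(n):
    # psi as a step function via prefix sums of Lambda
    lam = lambda_table(n)
    pre = [0.0] * (n + 1)
    for k in range(1, n + 1):
        pre[k] = pre[k - 1] + lam[k]
    return lambda t: pre[int(t)] if t >= 0 else 0.0

def J(x, h, psi):
    # exact for integer h: the integrand is constant on each (k, k+1)
    return sum((psi(k + h) - psi(k) - h) ** 2 for k in range(x))

def Jcal(x, delta, psi, steps_per_unit=20):
    # grid approximation of the dilation variance
    dt = 1.0 / steps_per_unit
    total = 0.0
    for i in range(x * steps_per_unit):
        t = i * dt
        total += (psi(t * (1 + delta)) - psi(t) - delta * t) ** 2 * dt
    return total

x, delta = 2000, 0.01
psi = make_psi(int(x * (1 + delta)) + 50)
print(Jcal(x, delta, psi), J(x, int(delta * x), psi))  # comparable sizes
```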
Theorem 8. Assume the Riemann Hypothesis. Let $x\ge 2$ , and let $\mathcal {L}(x)$ be a continuous increasing function satisfying
(60) $$ \begin{align} \log x \ll \mathcal{L}(x) \le \log^2\!x. \end{align} $$
Then the assertion
(61) $$ \begin{align} F(x,T) \ll T\,\mathcal{L}(x) \qquad \text{uniformly for} \quad e^{\sqrt{\mathcal{L}(x)}}\le T\le x \end{align} $$
implies the assertion
(62) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2\,\mathcal{L}(x) \qquad \text{for} \quad \frac{1}{x}\le \delta \le 2e^{-\sqrt{\mathcal{L}(x)}}, \end{align} $$
which implies the assertion
(63) $$ \begin{align} J(x,h) \ll h x\,\mathcal{L}(x) \qquad \text{for} \quad 1\le h\le 2xe^{-\sqrt{\mathcal{L}(x)}}, \end{align} $$
which implies $\mathcal {E}(N)\ll N\mathcal {L}(N) \log N$ and $\psi (N) = N +O(N^{1/2}\sqrt {\mathcal {L}(N)}\log N)$ .
Montgomery’s function $F(x,T)$ is used for applications to both zeros and primes. For applications to zeros, it is natural to first fix a large T, consider the zeros up to height T, and then pick x to be a function of T that varies in some range depending on T. This is how Montgomery’s conjecture is stated in (24), and also how the conjectures (27) and (29) in Theorem 5 are stated. However, in applications to primes, following Heath-Brown [H], it is more convenient to fix a large x, consider the primes up to x, and then pick T as a function of x that varies in some range depending on x. This is what we have done in Theorem 8.
The ranges we have used in the conjectures on $F(x,T)$ , $\mathcal {J}(x,\delta )$ , and $J(x,h)$ in Theorem 8 are where these conjectures are needed. In proving Theorem 8, however, it is convenient to extend these ranges to include where the bounds are known to be true on the Riemann Hypothesis.
Lemma 1. Assume the Riemann Hypothesis. Let $x\ge 2$ . Then the assertion (61) implies, for any bounded $A\ge 1$ ,
(64) $$ \begin{align} F(x,T) \ll T\,\mathcal{L}(x) \qquad \text{for} \quad 2\le T\le x^A, \end{align} $$
the assertion (62) implies
(65) $$ \begin{align} \mathcal{J}(x,\delta) \ll \delta x^2\,\mathcal{L}(x) \qquad \text{for} \quad 0\le \delta \le 1, \end{align} $$
and the assertion (63) implies
(66) $$ \begin{align} J(x,h) \ll h x\,\mathcal{L}(x) \qquad \text{for} \quad 1\le h\le x. \end{align} $$
Proof of Lemma 1
We first prove (64). By the trivial bound (25), we have unconditionally in the range $2\le T\le e^{\sqrt {\mathcal {L}(x)}}$ that
$$F(x,T) \ll T\log^2 T \le T\,\mathcal{L}(x),$$
which with (61) proves (64) for the range $2\le T\le x$ . For the range $x\le T \le x^A$ , we have $T^{1/A}\le x \le T$ , and therefore on the Riemann Hypothesis (23) implies $F(x,T) \sim T\log x \ll T\mathcal {L}(x) $ by (60).
Next, for (65) in the range $ 2e^{-\sqrt {\mathcal {L}(x)}}\le \delta \le 1$ , we have $\log ^2(2/\delta ) \le \mathcal {L}(x) $ , and therefore on the Riemann Hypothesis by (59), we have $\mathcal {J}(x,\delta ) \ll \delta x^2 \log ^2(2/\delta )\ll \delta x^2 \mathcal {L}(x) $ in this range. For the range $0\le \delta < 1/x$ , we have $0\le \delta x < 1$ , and therefore there is at most one integer in the interval $(t, t+\delta t]$ for $0\le t \le x$ . Hence,
$$\mathcal{J}(x,\delta) \ll \delta\sum_{n\le 2x}\Lambda(n)^2\,n + \delta^2 x^3 \ll \delta x^2\log x \ll \delta x^2\,\mathcal{L}(x).$$
By this and (62), we obtain (65). A nearly identical proof for $J(x,h)$ shows that (63) implies (66).
Proof of Theorem 5 from Theorem 8
To prove (A), choose $\mathcal {L}(x) = \epsilon \log ^2\!x$ . The assumption (61) becomes $F(x,T) \ll \epsilon T\log ^2\!x$ for $x^{\sqrt {\epsilon }} \le T \le x$ , or equivalently $T \le x \le T^{1/\sqrt {\epsilon }}$ . Letting $A=1/\sqrt {\epsilon }$ , we have $F(x,T) \ll A^{-2} T\log ^2\!x$ for $T \le x \le T^A$ . The assertion (27) implies that this bound for $F(x,T)$ holds since
if $A\to \infty $ sufficiently slowly. Thus, (63) holds, which by Lemma 1 implies $J(x,h) \ll \epsilon h x \log ^2\!x$ for $1\le h\le x$ . Moreover, by Theorem 8, $\mathcal {E}(N)\ll \epsilon N \log ^3\!N$ and $\psi (N) = N +O(\sqrt {\epsilon }N^{1/2}\log ^2\!N)$ , where $\epsilon $ can be taken as small as we wish.
To prove Theorem 5(B), choose $\mathcal {L}(x) = \log x$ . The assumption (61) becomes $F(x,T) \ll T\log x$ for $e^{\sqrt {\log x}} \le T \le x$ , or equivalently $T \le x \le T^{\log T}$ , and this is satisfied when (29) holds. Thus, by (63) and Lemma 1, $J(x,h) \ll h x \log x$ for $1\le h\le x$ , and by Theorem 8, $\mathcal {E}(N)\ll N \log ^2\!N$ and $\psi (N) = N +O(N^{1/2}\log ^{3/2} N)$ .
Proof of Theorem 8, (61) implies (62)
We start from the easily verified identity
which appears implicitly in [M2] (see also [GM, (26)] and [Go, §4]). Next, Montgomery showed, using (4),
The main tool in proving this part of Theorem 8 is the following result.
Lemma 2. For $0<\delta \le 1$ and $e^{2\kappa } = 1+\delta $ , let
Assuming the Riemann Hypothesis, we have, for $1/x \le \delta \le 1$ ,
We will prove Lemma 2 after completing the proof of Theorem 8.
Since $\kappa = \frac 12\log (1+\delta ) \asymp \delta $ for $0<\delta \le 1$ , we have
and hence
For $U\ge 1/\delta $ , we have, since by (4) $\sum _{\gamma } 1/(1+(t-\gamma )^2)\ll \log (|t|+2)$ ,
On taking $U=2\log ^2(2/\delta )/\delta $ , we have by (68)
We now assume that (61) holds, and therefore by Lemma 1, (64) also holds. Taking $ 1/x \le \delta \le 1$ , we see that for $1\le k \ll \log \log (4/\delta )$ we have $2\le 2^k/\delta \ll x\log ^Cx$ for a constant C. Thus, $F(x,2^k/\delta )$ is in the range where (64) applies, and therefore
which by (70) proves (62) over a wider range of $\delta $ than required.
Proof of Theorem 8, (62) implies (63), and the remaining results
To complete the proof of Theorem 8, we need the following lemma of Saffari–Vaughan [SV, (6.21)] (see also [GV, pp. 126–127]), which we will prove later.
Lemma 3 (Saffari–Vaughan)
For any $1\le h \le x/4$ , and any integrable function $f(x)$ , we have
Taking $f(t) = \psi (t) - t$ in Lemma 3, and assuming $1\le h \le x/8$ so that we may apply (65), we have
Replacing x by $x/2, x/4, \ldots , x/2^{k-1}$ and adding, we obtain
where we used $\mathcal {L}(x/2^{j-1})\le \mathcal {L}(x)$ , and note that here we need $h\le \frac {x}{2^{k+2}}$ . Taking k so that $\log ^2\!x \le 2^{k} \le 2\log ^2\!x$ , then by (17), we have
Thus,
for $1\le h\le \frac {x}{8\log ^2\!x}\le \frac {x}{2^{k+2}}$ . This proves (63) over a larger range of h than required.
Now, applying to (19) the estimates (16) and (66) (which is implied by (63)), we obtain
which by (18) gives the bound $\mathcal {E}(N)\ll N\mathcal {L}(N) \log N$ and this with (21) gives
Proof of Lemma 2
We set $e^{2\kappa } = 1+\delta $ , $0<\delta \le 1$ , and define
Thus,
and
on noting that the integrand is an even function, since for every $\gamma $ in the sum there is a $-\gamma $ . The next step is to bring $|a(it)|^2$ into the sum over zeros using [GM, Lem. 10], from which we immediately obtain, for $Z\ge 1/\delta $ ,
We comment that the proof of Lemma 10 in [GM] is an elementary argument making use of (4). We have not made use of the Riemann Hypothesis yet, but henceforth we will assume it and use $\rho = 1/2+i\gamma $ and $a(\rho ) = a(1/2+i\gamma )$ . In order to keep our error terms small, we now choose
Thus,
Define the Fourier transform by
Plancherel’s theorem says that if $f(t)$ is in $L^1 \cap L^2$ , then $\widehat {f}(u)$ is in $L^2$ , and we have
$$\int_{-\infty}^{\infty} |f(t)|^2\,dt = \int_{-\infty}^{\infty} \big|\widehat{f}(u)\big|^2\,du.$$
An easy calculation gives the Fourier transform pair
and therefore by Plancherel’s theorem we have, with $y= 2\pi u$ in the third line below,
On letting $t=xe^{-y}$ in the last integral, we obtain
and we conclude that
We now complete the proof of Lemma 2 by proving that we have
By the standard truncated explicit formula [MV2, Chap. 12, Th. 12.5] (see also [GM, (34)]), we have, for $2\le t\le x$ and $Z\ge x$ ,
where
and $\lVert u\rVert $ is the distance from u to the nearest integer. Using the trivial estimate
we have
There is a small complication at this point, since we want to integrate both sides of this equation over $0\le t\le x$ , but (78) requires $2\le t\le x$ . Therefore, integrating over $2\le t \le x$ , we obtain
where
We note first that, by (67),
Next, for $|\text {Re}(s)|\ll 1$ , we have $|a(s)|\ll \min (1,1/|s|)$ , and by (4), we obtain
Thus,
Both of these errors are negligible compared to the error term $\delta x^2 \gg x$ . It remains to estimate $E^*$ . First, for $j=1$ or $2$ ,
The same estimate holds for the term in (79) with $\lVert (1+\delta )t\rVert $ by a linear change of variable in the integral. Thus,
and
We conclude from (81) that
since, by (73), $x \ll \delta x^2$ . Substituting these estimates into (80) proves (77).
Proof of Lemma 3
This proof comes from [GV, pp. 126–127]. We take $1\le h \le x/4$ . On letting $t=y+u$ with $0\le u\le h$ , we have
Since the integration range of the inner integral always lies in the interval $[x/4,x]$ , we have
where $\mathcal {R}$ is the region defined by $x/4\le y \le x$ and $0\le u\le 2h$ . Making the change of variable $u=\delta y$ , then $\mathcal {R}$ is defined by $x/4\le y \le x$ and $0\le \delta \le 2h/y$ , and changing the order of integration gives
Inverting the order of integration again, we conclude that
4 Proofs of Theorem 6, Corollary 2, and Theorem 7
Proof of Theorem 6
From [I, Th. 30], it is well known that
(82) $$ \begin{align} \psi(x) - x \ll x^{\Theta}\log^2\!x, \end{align} $$
but it seems less well known that, in 1965, Grosswald [Gro] refined this result by proving that, for $1/2<\Theta <1$ ,
(83) $$ \begin{align} \psi(x) - x \ll x^{\Theta}, \end{align} $$
from which we immediately obtain
$$H(x) \ll x^{2\Theta+1}.$$
From [BHM+, Lem. 8], in the case $q=1$ , we have, for $x\ge 2$ and $1\le h\le x$ ,
We first need to prove that the same bound holds for $J(x,h)$ . We have
For $J_1(h)$ , we use (83) to see that, for $1\le h\le x$ ,
For $J_2(x,h)$ , we apply (84) and find, for any interval $(x/2^{k+1}, x/2^{k}]$ contained in $[h/2,x]$ ,
and summing over $k\ge 0$ to cover the interval $[h,x]$ , we obtain $ J_2(x,h) \ll h x^{2\Theta }\log ^4\!x.$ Combining these estimates, we conclude, as desired,
(85) $$ \begin{align} J(x,h) \ll h x^{2\Theta}\log^4\!x, \end{align} $$
and using (83) and (85) in Theorem 2 gives $\mathcal {E}(N) \ll N^{2\Theta }\log ^5\!N$ .
Proof of Corollary 2
Using (13) of Theorem 1, we see that, since the sum over zeros is absolutely convergent, the contribution from the zeros with $\beta <1/2$ is $o(N^{3/2})$ , while for $\Theta < 3/4$ the error term is $O(N^{2\Theta}\log^5\!N) = o(N^{3/2})$ by Theorem 6.
Proof of Theorem 7
We have, if n is odd, that
and therefore
We may drop the condition that n is odd in the last sum with an error of $O(\log ^2\!N)$ since the only nonzero terms in the sum when n is even are when n is a power of 2. Hence,
Applying the prime number theorem with a modest error term, we have
Added in proof
Languasco, Perelli, and Zaccagnini [LPZ1], [LPZ2], [LPZ3] have obtained many results connecting conjectures related to the pair correlation of zeros of the Riemann zeta function to conjectures on primes. It has been brought to our attention that the main result in our follow-up paper [GS] to this paper, on the error in the prime number theorem, has already been obtained in [LPZ2]. Our method is based on a generalization $F_{\beta }(x,T)$ of $F(x,T)$ from (22), where $w(u)$ is replaced with $w_{\beta }(u) = \frac {4\beta ^2}{4\beta ^2+u^2}$ . In [LPZ2], they used $F_{\beta }(x,T)$ with a change of variable $\beta = 1/\tau $ . The results we obtained are analogous to some of their results. In the paper [LPZ3], it is shown how this method can be applied to generalizations of $\mathcal {J}(x,\delta )$ in (58). We direct interested readers to these papers of Languasco, Perelli, and Zaccagnini.