
Fence of saddle solutions of the Allen–Cahn equation in the plane

Published online by Cambridge University Press:  22 November 2024

Yong Liu
Affiliation:
Department of Mathematics, University of Science and Technology of China, 96 Jinzhai Road, Baohe District, Hefei, Anhui Province 230026, China ([email protected]) (corresponding author)
Yitian Zhang
Affiliation:
Department of Mathematics, University of Science and Technology of China, 96 Jinzhai Road, Baohe District, Hefei, Anhui Province 230026, China ([email protected])

Abstract

In the two-dimensional plane, entire solutions of the Allen–Cahn type equation with finite Morse index necessarily have finitely many ends. In the case that the nonlinearity is a sine function, all the finite-end solutions have been classified. However, for the classical Allen–Cahn nonlinearity, the structure of the moduli space of these solutions remains unknown. In this paper, we construct new finite-end solutions to the Allen–Cahn equation, which will be called fence of saddle solutions, by gluing saddle solutions together. Our construction can be generalized to gluing multiple four-end solutions with some of their ends almost parallel.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction and statement of main results

The Allen–Cahn equation

(1.1)\begin{equation} -\Delta u=u-u^{3},\ \ \text{in }\ \mathbb{R}^{n}, \end{equation}

is a semilinear elliptic equation arising from phase transition phenomena. Although it takes a quite simple form, many questions about this equation remain to be answered. The famous De Giorgi conjecture is concerned with the classification of bounded, monotone solutions of the Allen–Cahn equation. Much important work has been done, and considerable progress has been achieved towards solving this conjecture. It is now known that the conjecture is true in dimension two, and in dimensions three to eight under an additional assumption on the limits of the solution in the monotone direction. It is also known that there are nontrivial counterexamples in dimension n > 8. We refer to [Reference Alberti, Ambrosio and Cabre1, Reference Ambrosio and Cabre2–Reference Cabre and Terra5, Reference del Pino, Kowalczyk and Wei11, Reference del Pino, Kowalczyk and Wei12, Reference Ghoussoub and Gui15, Reference Jerison and Monneau20, Reference Liu, Wang and Wei25, Reference Pacard and Wei29, Reference Savin30] and the references cited therein for an incomplete list of results on this subject.

Monotone bounded solutions are automatically stable in the sense that the second variation of their energy functional is always non-negative. It is therefore natural to classify the stable or finite Morse index solutions of the Allen–Cahn equation. This problem turns out to be quite complicated. Even in dimension two, the whole picture is still not completely understood and only partial results are available. We now know that in $\mathbb{R}^{2}$, finite Morse index solutions must have finitely many ends, and the converse is also true ([Reference Wang and Wei31]). By definition, a solution is called finite-end (or multiple-end) if, outside a large ball, its zero set is asymptotic to finitely many, say 2k, half straight lines at infinity. Such a solution is also called a 2k-end solution.

The Allen–Cahn nonlinearity on the right-hand side of (1.1) is a special case of the derivative of a double well potential. For these more general nonlinearities, we will call the equation Allen–Cahn type. In [Reference Liu and Wei26], for the elliptic sine-Gordon equation, which is also an Allen–Cahn type equation with $\sin u$ nonlinearity, all the finite-end solutions have been classified, and all these solutions have explicit expressions. Recently, this classification result was used in [Reference Chodosh and Mantoulidis7] to compute the p-width of the round sphere. We also refer to [Reference Chodosh and Mantoulidis6, Reference Dey13, Reference Gaspar and Guaraco14, Reference Guaraco16] and the references therein for applications of the Allen–Cahn equation to the study of minimal surfaces. We emphasize that the method used in [Reference Liu and Wei26] is based on integrable system theory and is not applicable to the Allen–Cahn nonlinearity $u-u^{3}$. Hence, it is desirable to find purely partial differential equation methods to construct or classify finite-end solutions of Allen–Cahn type equations.

Recall that for any unit vector $\mathbf{a}\in\mathbb{R}^{2}$ and $b\in\mathbb{R},$ the function $H\left( \mathbf{a}\cdot\mathbf{x}+b\right) $ is a monotone solution of (1.1), where H is the heteroclinic solution:

\begin{equation*} -H^{\prime\prime}=H-H^{3},\ \ H\left( 0\right) =0,H\left( t\right) \rightarrow\pm1,\,\ \text{as }t\rightarrow\pm\infty. \end{equation*}

This is a family of examples of 2-end solutions. Indeed, they are the only monotone bounded solutions in dimension two, and all the 2-end solutions belong to this family. As for four-end solutions, a classical example is the so-called saddle solution. Its nodal set consists of two orthogonal straight lines, and the solution can be obtained by the variational method, see [Reference Dang, Fife and Peletier8]. The variational construction can be generalized to obtain solutions with dihedral symmetry, whose zero sets consist of finitely many straight lines intersecting at the origin and making equal angles between consecutive lines.
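Incidentally, for the cubic nonlinearity in (1.1) the heteroclinic profile is explicit, $H\left( t\right) =\tanh\left( t/\sqrt{2}\right)$. The following short symbolic check (our illustration, not part of the original text) verifies this formula against the ordinary differential equation above.

```python
import sympy as sp

t = sp.symbols('t', real=True)
H = sp.tanh(t / sp.sqrt(2))                 # candidate heteroclinic profile

residual = -sp.diff(H, t, 2) - (H - H**3)   # -H'' - (H - H^3)
print(sp.simplify(residual))                # prints 0, so -H'' = H - H^3
print(sp.limit(H, t, sp.oo), sp.limit(H, t, -sp.oo))   # limits are +1 and -1
```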

There are other multiple-end solutions. It was proven in [Reference del Pino, Kowalczyk, Pacard and Wei10] that there exists an abundance of solutions whose nodal curves are governed by the solutions of the Toda system. Their ends are almost parallel, and their Morse indices coincide with those of the Toda system. In particular, there is a family of four-end solutions whose nodal set consists of two curves, which are far away from each other. Moreover, the solutions behave like the heteroclinic solution in the direction orthogonal to the nodal curves. In [Reference Kowalczyk, Liu and Pacard22, Reference Kowalczyk, Liu and Pacard23], we have shown that the saddle solution can indeed be deformed, through a family of four-end solutions, to the ones with almost parallel ends. Later, it was proved in [Reference Gui, Liu and Wei19] that these four-end solutions can also be obtained by the mountain pass theorem.

The proof in [Reference Kowalczyk, Liu and Pacard22, Reference Kowalczyk, Liu and Pacard23] used in an essential way the moduli space theory of finite-end solutions developed in [Reference del Pino, Kowalczyk and Pacard9]. This theory tells us that for each fixed even integer $2k \gt 0$, around a nondegenerate 2k-end solution, the set of 2k-end solutions is locally a 2k-dimensional manifold. While a full classification of this moduli space seems to be difficult, we know that there is a balancing formula (see the Appendix of [Reference del Pino, Kowalczyk and Pacard9]) for the directions of the ends, which imposes some restrictions on the ends of the solutions. This balancing formula generalizes the Hamiltonian identity of Gui [Reference Gui17].

It is a general philosophy that gluing constructions provide solutions near the ‘boundary’ of the moduli spaces of solutions. In this paper, we would like to construct new finite-end solutions, which will be called fence of saddle solutions. The name is borrowed from a family of minimal surfaces called fences of saddle towers, first constructed by Karcher ([Reference Karcher21], section 4.2). The idea of our construction is to juxtapose two saddle solutions along the x axis, at a sufficiently large distance from each other, and glue them together.

Our main result can be stated as follows.

Theorem 1.1 For each ɛ > 0 small, there exists a six-end solution $u_{\varepsilon},$ which is odd with respect to the x axis and even with respect to the y axis. Its nodal set consists of three curves: the x axis and the graphs of the functions $x=\pm f_{\varepsilon}\left( y\right) ,$ where $\left\Vert f_{\varepsilon}^{\prime}\right\Vert _{L^{\infty}\left( \mathbb{R}\right) }\rightarrow0$ and $\min_{y\in\mathbb{R}}\left\vert f_{\varepsilon}\left( y\right) \right\vert \rightarrow+\infty,$ as $\varepsilon\rightarrow0.$

We point out that an end-to-end construction of multiple-end solutions has been carried out in [Reference Kowalczyk, Liu, Pacard and Wei24]. That construction enables us to glue $k(k+1)/2$ four-end solutions together. The nodal set of the resulting solutions can be regarded as a desingularization of a configuration of k lines in generic position, where no two lines are parallel. The resulting solutions have 2k ends. In contrast, theorem 1.1 can be regarded as a desingularization of a configuration of three lines, two of which are parallel and the third orthogonal to them, which is therefore not in generic position.

The construction in this paper is not a straightforward generalization of the results in [Reference Kowalczyk, Liu, Pacard and Wei24]. As a matter of fact, to deal with this configuration of parallel lines, we will use the Cauchy data matching method. More precisely, due to the strong interaction between the two parallel lines, we need to deal with the gluing problem in an inner region and in an outer region. In these two different regions, we use different methods to construct the local solutions and finally match their boundary data together. The Cauchy data matching argument we use here is similar in spirit to the one used in [Reference Mazzeo and Pacard27, Reference Mazzeo, Pacard and Pollack28], where constant mean curvature surfaces are constructed.

Our method can be generalized to configurations with more lines, some of which are parallel. In this general case, one needs to use the linear theory of general four-end solutions. Given a four-end solution $u,$ the functions $u_{x},u_{y},u_{\theta}$ are in the kernel of the linearized operator of the Allen–Cahn equation around $u.$ Here, $u_{x},u_{y},u_{\theta}$ denote the derivatives of u with respect to the x and y directions and the θ variable, which is the angle with respect to the origin. Note that $u_{x},u_{y}$ are bounded, but $u_{\theta}$ is not bounded and grows linearly along each end. It turns out that the fourth element in the kernel plays an important role in the construction. It has been proven in [Reference Gui18] that any four-end solution is even symmetric with respect to two orthogonal lines. Therefore, we can work in the space of functions with even symmetry. By the results in [Reference Kowalczyk, Liu and Pacard22, Reference Kowalczyk, Liu and Pacard23] mentioned above, we know that modulo rigid motions the moduli space of four-end solutions constitutes a one-parameter family which is diffeomorphic to $\mathbb{R}.$ Let us denote these solutions by $u\left( \cdot,s\right) ,s\in\mathbb{R}.$ Then, $u_{s}$ is also in the kernel of the linearized Allen–Cahn operator. As the parameter s varies, the angle of the solution (the angle between the end in the first quadrant and the x axis) goes from 0 to $\frac{\pi}{2},$ although not necessarily monotonically. This implies that, although a priori $u_{s}$ may be bounded for some values of s, there actually exists an abundance of s for which $u_{s}$ is unbounded and has linear growth at infinity. Indeed, it is natural to conjecture that for any $s,$ $u_{s}\left( \cdot,s\right) $ is unbounded and has linear growth along each end. At present, since the angle map is analytic with respect to the parameter $s,$ we at least know that there exists a set consisting of finitely many angles $\theta_{1},...,\theta_{n},$ such that $u_{s}$ is unbounded if the angle of u is not equal to any of these $\theta_{i}.$ We can then carry out our construction for any configuration of finitely many lines whose angles avoid the above-mentioned set, where some of the lines may be parallel to each other.

The paper is organized as follows. In § 2, we recall the linear theory for the linearized Allen–Cahn operator around a four-end solution. In § 3, we construct solutions in the inner region. Then in § 4, we treat the outer region. Finally, in § 5, we match the Cauchy data on the common boundary of the inner and outer regions to obtain an entire solution.

2. The linearized operator around a four-end solution

In this section, we recall some results about four-end solutions and the moduli space theory developed in [Reference del Pino, Kowalczyk and Pacard9] for the Allen–Cahn equation. Although our main theorem mentioned in the first section only deals with the gluing construction of the saddle solution, which is a special four-end solution, our method can be used to construct similar solutions from more general four-end solutions.

An oriented affine line $\lambda\subset\mathbb{R}^{2}$ can be uniquely written as

\begin{equation*} \lambda:=r\,\mathtt{e}^{\perp}+\mathbb{R}\,\mathtt{e}, \end{equation*}

for some $r\in\mathbb{R}$ and some unit vector $\mathtt{e}\in S^{1}$, which defines the orientation of λ. Here, $\perp$ denotes the clockwise rotation by $\pi/2$ in $\mathbb{R}^{2}$. Hence, the set of oriented affine lines can be parametrized by r and $\mathtt{e}.$ Writing $\mathtt{e}=(\cos\theta,\sin \theta)$, we get the usual coordinates $(r,\theta)$, which allow us to identify the set of oriented affine lines with $\mathbb{R}\times S^{1}$.

Assume that we are given four oriented affine lines $\lambda_{1},\ldots ,\lambda_{4}\subset\mathbb{R}^{2}$, which are defined by

\begin{equation*} \lambda_{j}:=r_{j}\,\mathtt{e}_{j}^{\perp}+\mathbb{R}\,\mathtt{e}_{j} \end{equation*}

and assume that these oriented affine lines have corresponding angles $\theta_{1},\ldots,\theta_{4}$ satisfying

\begin{equation*} \theta_{1} \lt \theta_{2} \lt \theta_{3} \lt \theta_{4} \lt 2\pi+\theta_{1}. \end{equation*}

In this case, we will say that the four oriented affine lines are ordered and we will denote by Λ4 the set of four oriented affine lines. It is easy to check that for all R > 0 large enough and for all $j=1,\ldots,4$, there exists $s_{j}\in\mathbb{R}$ such that

  (i) The point $r_{j} \, \mathtt{e}^{\perp}_{j} + s_{j} \, \mathtt{e}_{j}$ belongs to the circle $\partial B_{R}$.

  (ii) The half affine lines

    (2.1)\begin{equation} \lambda_{j}^{+}:=r_{j}\,\mathtt{e}_{j}^{\perp}+s_{j}\,\mathtt{e} _{j}+\mathbb{R}^{+}\,\mathtt{e}_{j} \end{equation}

    are disjoint and included in $\mathbb{R}^{2}-B_{R}$.

  (iii) The minimum of the distance between two distinct half affine lines $\lambda^{+}_{i}$ and $\lambda_{j}^{+}$ is larger than 4.

The set of half affine lines $\lambda_{1}^{+},\ldots,\lambda_{4}^{+}$ together with the circle $\partial B_{R}$ induces a decomposition of $\mathbb{R}^{2}$ into five slightly overlapping connected components

\begin{equation*} \mathbb{R}^{2}=\Omega_{0}\cup\Omega_{1}\cup\ldots\cup\Omega_{4}, \end{equation*}

where $\Omega_{0}:=B_{R+1}$ and

\begin{equation*} \Omega_{j}:=\left( \mathbb{R}^{2}-B_{R-1}\right) \cap\left\{ \mathtt{x} \in\mathbb{R}^{2}\,:\,\mathrm{dist}(\mathtt{x},\lambda_{j}^{+}) \lt \mathrm{dist} (\mathtt{x},\lambda_{i}^{+})+2,\,\forall i\neq j\,\right\} , \end{equation*}

for $j=1,\ldots,4$. Here, $\mathrm{dist}(\cdot,\lambda_{j}^{+})$ denotes the distance to $\lambda_{j}^{+}$. Observe that, for all $j=1,\ldots,4$, the set $\Omega_{j}$ contains the half affine line $\lambda_{j}^{+}$.

Let ${\mathbb{I}}_{0},{\mathbb{I}}_{1},\ldots,{\mathbb{I}}_{4}$ be a smooth partition of unity of $\mathbb{R}^{2}$, which is subordinate to the above decomposition. Hence,

\begin{equation*} \sum_{j=0}^{4}\mathbb{I}_{j}\equiv1, \end{equation*}

and the support of $\mathbb{I}_{j}$ is included in $\Omega_{j}$. Without loss of generality, we can assume that ${\mathbb{I}}_{0}\equiv1$ in

\begin{equation*} \Omega_{0}^{\prime}:=B_{R-1} \end{equation*}

and ${\mathbb{I}}_{j}\equiv1$ in

\begin{equation*} \Omega_{j}^{\prime}:=\left( \mathbb{R}^{2}-B_{R-1}\right) \cap\left\{ \mathtt{x}\in\mathbb{R}^{2}\,:\,\mathrm{dist}(\mathtt{x},\lambda_{j} ^{+}) \lt \mathrm{dist}(\mathtt{x},\lambda_{i}^{+})-2,\,\forall i\neq j\,\right\} , \end{equation*}

for $j=1,\ldots,4$. Finally, without loss of generality, we can assume that

\begin{equation*} \Vert{\mathbb{I}}_{j}\Vert_{C^{2}(\mathbb{R}^{2})}\leq C. \end{equation*}

With these notations at hand, one can define

\begin{equation*} u_{\lambda}:=\sum_{j=1}^{4}(-1)^{j}\,{\mathbb{I}}_{j}\,H(\mathrm{dist} ^{s}(\,\cdot\,,\lambda_{j})), \end{equation*}

where $\lambda:=(\lambda_{1},\ldots,\lambda_{4})$ and

\begin{equation*} \mathrm{dist}^{s}(\mathtt{x},\lambda_{j}):=\mathtt{x}\cdot\mathtt{e} _{j}^{\perp}-r_{j} \end{equation*}

denotes the signed distance from a point $\mathtt{x}\in\mathbb{R}^{2}$ to λj.

Observe that, by construction, the function uλ is, away from a compact set and up to a sign, asymptotic to copies of the heteroclinic solution with ends $\lambda_{1}^+, \ldots, \lambda_{4}^+$.
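As an informal numerical illustration of this construction (ours, not part of the original text), the sketch below assembles an approximate four-end profile from four oriented lines given in the coordinates $(r_{j},\theta_{j})$. The explicit formula $H(t)=\tanh\left( t/\sqrt{2}\right)$ is used for the heteroclinic, and the exponential weights are only a crude stand-in for the partition of unity ${\mathbb{I}}_{j}$ of the text.

```python
import numpy as np

def heteroclinic(t):
    """Explicit heteroclinic profile H(t) = tanh(t / sqrt(2))."""
    return np.tanh(t / np.sqrt(2))

def approximate_four_end(x, y, lines):
    """Crude stand-in for u_lambda at the point (x, y).

    lines: four pairs (r_j, theta_j) describing oriented affine lines
    lambda_j = r_j e_j^perp + R e_j.  The exponential weights below only
    mimic the partition of unity I_j of the text; they localize around the
    half line emanating from r_j e_j^perp in the direction e_j.
    """
    pt = np.array([x, y])
    values, dists = [], []
    for j, (r, theta) in enumerate(lines, start=1):
        e = np.array([np.cos(theta), np.sin(theta)])
        e_perp = np.array([e[1], -e[0]])        # clockwise rotation by pi/2
        signed = float(pt @ e_perp) - r         # signed distance dist^s(x, lambda_j)
        base = r * e_perp                       # a point on lambda_j
        s = max(float((pt - base) @ e), 0.0)    # projection onto the half line
        dists.append(np.linalg.norm(pt - base - s * e))
        values.append((-1) ** j * heteroclinic(signed))
    w = np.exp(-np.array(dists))
    w /= w.sum()
    return float(np.dot(w, values))

# Example: four half lines through the origin in the directions 0, pi/2, pi, 3*pi/2
lines = [(0.0, 0.0), (0.0, np.pi / 2), (0.0, np.pi), (0.0, 3 * np.pi / 2)]
print(approximate_four_end(3.0, 1.0, lines))   # positive, consistent with a saddle-type profile where xy > 0
```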

Let $\mathcal{S}_{4}$ denote the set of functions u, which are defined in $\mathbb{R}^{2}$ and satisfy

\begin{equation*} u-u_{\lambda}\in W^{2,2}\,(\mathbb{R}^{2}), \end{equation*}

for some ordered set of oriented affine lines $\lambda_{1},\ldots,\lambda _{4}\subset\mathbb{R}^{2}$.

Definition 2.1. The set $\mathcal{M}_{4}$ is defined to be the set of solutions u of (1.1), which belong to $\mathcal{S}_{4}$. A function in $\mathcal{M}_{4}$ will be called a four-end solution.

Letting

\begin{equation*} \lambda=(\lambda_{1},\ldots,\lambda_{4})\in\Lambda_{4}, \end{equation*}

we write $\lambda_{j}^{+}=\mathtt{x}_{j}+\mathbb{R}^{+}\,\mathtt{e}_{j}$ as in (2.1). Given $\gamma,\delta\in\mathbb{R}$, we define a positive weight function $\Gamma_{\gamma,\delta}$ by

(2.2)\begin{equation} \Gamma_{\gamma,\delta}(\mathtt{x}):={\mathbb{I}}_{0}(\mathtt{x})+\sum _{j=1}^{4}\,{\mathbb{I}}_{j}(\mathtt{x})\,e^{{\gamma}\,(\mathtt{x} -\mathtt{x}_{j})\cdot\mathtt{e}_{j}}\,\left( \cosh((\mathtt{x}-\mathtt{x} _{j})\cdot\mathtt{e}_{j}^{\perp})\right) ^{{\delta}}, \end{equation}

so that, by construction, γ is the rate of decay or blow up along the half lines $\lambda_{j}^{+}$ and δ is the rate of decay or blow up in the direction orthogonal to $\lambda_{j}^{+}$.

With this definition in mind, we define the weighted Lebesgue space

\begin{equation*} L_{\gamma,\delta}^{2}(\mathbb{R}^{2}):=\left\{\Gamma_{\gamma,\delta}\cdot u\mid u\in L^{2} (\mathbb{R}^{2})\right\} \end{equation*}

and the weighted Sobolev space

\begin{equation*} W_{\gamma,\delta}^{2,2}(\mathbb{R}^{2}):=\left\{\Gamma_{\gamma,\delta}\cdot u\mid u\in W^{2,2} (\mathbb{R}^{2})\right\}. \end{equation*}

If $\phi\in L_{\gamma,\delta}^{2}(\mathbb{R}^{2}),$ then its norm is naturally defined to be

\begin{equation*} \left\Vert \phi\right\Vert _{L_{\gamma,\delta}^{2}(\mathbb{R}^{2} )}:=\left\Vert \frac{\phi}{\Gamma_{\gamma,\delta}}\right\Vert _{L^{2} (\mathbb{R}^{2})}. \end{equation*}

Observe that the partition of unity, the weight function, and the induced weighted spaces all depend on the choice of $\lambda\in\Lambda_{4}$. Therefore, they depend on the four-end solution under consideration.

One important result is that if u is a solution of (1.1), which is close to uλ (in $W^{2,2}$ topology), then $u-u_{\lambda}$ tends to 0 exponentially fast at infinity.

Proposition 2.2 (Refined Asymptotics, [Reference del Pino, Kowalczyk and Pacard9], theorem 2.1)

Assume that $u\in\mathcal{S}_{4}$ is a solution of (1.1) and $\lambda\in\Lambda_{4}$ is such that

\begin{equation*} u-u_{\lambda}\in W^{2,2}(\mathbb{R}^{2}). \end{equation*}

Then, there exist $\delta\in(-\sqrt{2},0)$ and γ < 0 such that

\begin{equation*} u-u_{\lambda}\in W_{\gamma,\delta}^{2,2}(\mathbb{R}^{2}). \end{equation*}

More precisely, δ < 0 and γ < 0 can be chosen such that

\begin{equation*} \gamma\in(-\mu_{1},0),\qquad\gamma^{2}+\delta^{2} \lt 2,\qquad -\sqrt {2} \lt \delta+\gamma\,\cot\theta_{\lambda}, \end{equation*}

where $\theta_{\lambda}$ is equal to half of the minimum of the angles between two consecutive oriented affine lines $\lambda_{1},\ldots,\lambda_{4}$, and $\mu_{1}=\sqrt{\frac{3}{2}}$ is the square root of the second eigenvalue of the linearized operator around the one-dimensional heteroclinic solution. Note that in the case of a more general double well potential, the second eigenvalue does not have an explicit form (except in the elliptic sine-Gordon case).
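For the cubic nonlinearity, the value $\mu_{1}=\sqrt{\frac{3}{2}}$ can be checked numerically: a finite-difference discretization of the one-dimensional operator $-\partial_{t}^{2}+\left( 3H^{2}-1\right)$ on a large interval has lowest eigenvalues close to 0 and $\frac{3}{2}$. The sketch below is only such an illustrative check; the interval length, grid size, and Dirichlet boundary conditions are our choices, not taken from the text.

```python
import numpy as np

# Discretize -phi'' + (3 H^2 - 1) phi on [-L, L] with Dirichlet conditions,
# where H(t) = tanh(t / sqrt(2)) is the heteroclinic solution.
L, N = 30.0, 2001
t = np.linspace(-L, L, N)
h = t[1] - t[0]
H = np.tanh(t / np.sqrt(2))
V = 3 * H**2 - 1

A = np.diag(2.0 / h**2 + V) \
    + np.diag(-np.ones(N - 1) / h**2, 1) \
    + np.diag(-np.ones(N - 1) / h**2, -1)

eigs = np.sort(np.linalg.eigvalsh(A))
print(eigs[:2])   # approximately [0.0, 1.5]; the essential spectrum starts at 2
```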

We shall define the linearized Allen–Cahn operator

\begin{equation*} L_{u}\phi:=-\Delta\phi+\left( 3u^{2}-1\right) \phi. \end{equation*}

Introduce the notation

\begin{equation*} s_{i}:=(\mathtt{x}-\mathtt{x}_{i})\cdot\mathtt{e}_{i},\ \ t_{i}:=(\mathtt{x}-\mathtt{x}_{i})\cdot\mathtt{e}_{i}^{\perp}. \end{equation*}

This means that si is the coordinate along the i-th end and ti is the coordinate in the orthogonal direction of the i-th end. Let D be the deficiency space, which is an eight-dimensional space defined by

\begin{equation*} D:=\operatorname{span}\left\{ du\left( X_{i}\right) ,du\left( Y_{i}\right) ,i=1,...,4\right\} , \end{equation*}

where the vector fields $X_{i},Y_{i}$ are given by

(2.3)\begin{equation} X_{i}=\mathbb{I}_{i}(\mathtt{x})\mathtt{e}_{i}^{\bot},Y_{i}=\mathbb{I} _{i}(\mathtt{x})\left( s_{i}\mathtt{e}_{i}^{\perp }-t_{i}\mathtt{e}_{i}\right) . \end{equation}

Roughly speaking, at the main order, functions in D have the form $c_{1}H^{\prime}\left( t_{i}\right) +c_{2}s_{i}H^{\prime}\left( t_{i}\right) $ for some constants $c_{1}$ and $c_{2}$ around each end. The linear decomposition lemma (lemma 6.2 of [Reference del Pino, Kowalczyk and Pacard9]) implies that $L_{u}$ is a surjective operator when it is regarded as an operator from $W_{\gamma ,\delta}^{2, 2}\left( \mathbb{R}^{2}\right) \oplus D$ to $L_{\gamma,\delta} ^{2}\left( \mathbb{R}^{2}\right) $ for $\gamma\in\left( \gamma _{0},0\right) ,\delta\in\left( \delta_{0},0\right) ,$ if $\left\vert \gamma_{0}\right\vert $ and $\left\vert \delta_{0}\right\vert $ are small. Moreover, it is a Fredholm operator. Duality arguments show that its index equals $4,$ which is equal to half of the dimension of the deficiency space $D.$

There is a special four-end solution, called the saddle solution, which will be denoted by $U.$ It is the unique bounded solution on the plane which has the same sign as the function $xy.$ The existence of the saddle solution is established in [Reference Dang, Fife and Peletier8]. Applying proposition 2.2, we obtain

(2.4)\begin{equation} \left\vert U\left( x,y\right) -H\left( y\right) \right\vert \leq C_{\gamma_{0},\delta_{0}}e^{\gamma_{0}\left\vert x\right\vert +\delta _{0}\left\vert y\right\vert },\ \text{for }x \gt \left\vert y\right\vert , \end{equation}

where $\gamma_{0},\delta_{0}$ are negative constants satisfying the following conditions:

\begin{equation*} \gamma_{0} \gt -\sqrt{\frac{3}{2}},\quad \gamma_{0}+\delta_{0} \gt -\sqrt{2}. \end{equation*}

Observe that for the saddle solution, the angle between two consecutive ends is equal to $\pi/2$. Hence, the above conditions are actually equivalent to the condition on $\gamma,\delta$ that appeared in Proposition 2.2.

To make things more explicit, we assume that for $U,$ the direction of its first end is $\mathtt{e}_{1}=\left( 1,0\right) .$ We can also assume that the functions $\mathbb{I}_{i}$ appearing in (2.3) satisfy

\begin{equation*} \mathbb{I}_{1}\left( x,y\right) =\mathbb{I}_{1}\left( x,-y\right) ,\text{ }\mathbb{I}_{3}\left( x,y\right) =\mathbb{I}_{3}\left( x,-y\right) ,\text{ }\mathbb{I}_{2}\left( x,y\right) =\mathbb{I}_{4}\left( x,-y\right) . \end{equation*}

With these properties, the function $\Gamma_{\gamma,\delta}$ defined by $\left( 2.2\right) $ will be even in the y variable.

Since we would like to construct solutions which are odd in the y variable, for $j=0,1,2,$ and $\mu\in\left( 0,1\right) ,$ we introduce the weighted Hölder space

\begin{equation*} C_{\gamma,\delta}^{j,\mu}\left( \mathbb{R}_{od}^{2}\right) :=\left\{ \Gamma_{\gamma,\delta}\cdot\phi:\phi\in C^{j,\mu}\left( \mathbb{R}^{2}\right) , \phi\left( x,y\right) =-\phi\left( x,-y\right) \right\} .\ \ \end{equation*}

That is, the subscript od means odd in the y variable. With this notation, when $\gamma,\delta$ are negative, functions in this space decay at an exponential rate. For a function $\eta\in C_{\gamma,\delta}^{j,\mu}\left( \mathbb{R}_{od}^{2}\right) ,$ its norm is defined to be

\begin{equation*} \left\Vert \eta\right\Vert _{C_{\gamma,\delta}^{j,\mu}\left( \mathbb{R} _{od}^{2}\right) }:=\left\Vert \frac{\eta}{\Gamma_{\gamma,\delta}}\right\Vert _{C^{j,\mu}\left( \mathbb{R}^{2}\right) }. \end{equation*}

In the rest of the paper, we will at various places consider weighted spaces with suitable decay rates along the ends and in the orthogonal directions. It is helpful to keep in mind that we can in fact fix γ and δ to be negative and very close to zero. However, it is also worth pointing out that the requirements imposed on γ or δ to ensure the validity of the corresponding estimates and mapping properties of the operators may differ from place to place, and in some of these places we provide a more precise range for the constants $\gamma,\delta$ (for instance, in equation (2.5) of the next lemma).

Now, we also define the one-dimensional deficiency space

\begin{equation*} \mathcal{D}:=\operatorname{span}\left\{ dU\left( Y_{2}-Y_{4}\right) \right\} . \end{equation*}

Note that the function $dU\left( Y_{2}-Y_{4}\right) $ is odd in y and behaves like $cyH^{\prime}\left( x\right) $ along the y direction, for some constant c ≠ 0.

Lemma 2.3. Suppose $\gamma,\delta \lt 0,$ and

(2.5)\begin{equation} \left\vert \gamma\right\vert \in\left( 0,\sqrt{\frac{3}{2}}\right) ,\text{ }\gamma^{2}+\delta^{2} \lt 2. \end{equation}

Then, the map

\begin{equation*} L_{U}:C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) \oplus\mathcal{D}\rightarrow C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R} _{od}^{2}\right) \end{equation*}

is an isomorphism.

Before proceeding to the proof, let us explain why we need this lemma. As we can see in proposition 6.1 of [Reference del Pino, Kowalczyk and Pacard9], the linearized operator has already been proved to be surjective in suitable weighted Sobolev spaces. However, to handle the nonlinear terms appearing in the nonlinear problems of the later sections, we need to use Hölder spaces to close the fixed point argument, instead of the global $L^{2}$-based spaces, which have weaker regularity properties.

Proof. Let $\gamma,\delta$ be the constants satisfying (2.5). It follows from the linear decomposition lemma (proposition 6.1 of [Reference del Pino, Kowalczyk and Pacard9]) that the Fredholm operator

(2.6)\begin{equation} L_{U}:W_{\gamma,\delta}^{2,2}(\mathbb{R}_{od}^{2})\oplus\mathcal{D} \oplus\operatorname{span}\left\{ dU\left( X_{2}-X_{4}\right) \right\} \rightarrow L_{\gamma,\delta}^{2}(\mathbb{R}_{od}^{2}) \end{equation}

is surjective. Indeed, we can consider those functions with odd symmetry in the y variable. The operator is injective in exponentially decaying spaces. Hence, a duality argument tells us that it is surjective in exponentially growing (with small rate) weighted spaces. The linear decomposition lemma then tells us that $L_{U}$ with the domain given in (2.6) is surjective. We also know that the index of $L_{U}$ is equal to 1.

Note that $\partial_{x}U$ is in the kernel of $L_{U}.$ Moreover, there exists a constant c such that

\begin{equation*} dU\left( X_{2}-X_{4}\right) -c\partial_{x}U\in W_{\gamma,\delta} ^{1,2}(\mathbb{R}_{od}^{2}). \end{equation*}

Therefore,

(2.7)\begin{equation} L_{U}:W_{\gamma,\delta}^{2,2}(\mathbb{R}_{od}^{2})\oplus\mathcal{D}\rightarrow L_{\gamma,\delta}^{2}(\mathbb{R}_{od}^{2}) \end{equation}

is an isomorphism.

For each $m \gt 0,$ let χm be a cutoff function such that

\begin{equation*} \chi_{m}\left( x,y\right) =\left\{ \begin{array} [c]{l} 0,\ \text{for }x^{2}+y^{2} \gt \left( m+1\right) ^{2},\\ 1,\ \text{for }x^{2}+y^{2} \lt m^{2}. \end{array} \right. \end{equation*}
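For concreteness, one standard way (not specified in the text) to realize such a smooth cutoff is via the bump $t\mapsto e^{-1/t}$; the following sketch is only one possible choice.

```python
import numpy as np

def smooth_step(t):
    """Smooth function equal to 0 for t <= 0 and 1 for t >= 1."""
    def g(s):
        s = np.asarray(s, dtype=float)
        return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)
    return g(t) / (g(t) + g(1.0 - t))

def chi_m(x, y, m):
    """A smooth cutoff: 1 for x^2 + y^2 < m^2, 0 for x^2 + y^2 > (m + 1)^2."""
    r = np.hypot(x, y)
    return smooth_step(m + 1.0 - r)

print(chi_m(0.0, 0.0, 5), chi_m(10.0, 0.0, 5))   # 1.0 and 0.0
```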

Let $\varphi\in C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{od}^{2}\right) ,$ normalized such that

\begin{equation*}\left\Vert \varphi \right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{od}^{2}\right) }=1.\end{equation*}

We introduce the compactly supported function $\varphi_m:=\varphi\chi_{m}$. Then, $\varphi_m\in L_{\gamma,\delta}^{2}(\mathbb{R}_{od}^{2}).$ According to (2.7), there exists $g_{m}\in W_{\gamma,\delta}^{2,2}(\mathbb{R}_{od}^{2} )\oplus\mathcal{D}$ such that

\begin{equation*} L_{U}g_{m}=\varphi_{m}. \end{equation*}

Elliptic regularity implies that

(2.8)\begin{equation} g_{m}\in C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) \oplus\mathcal{D}.\end{equation}

We would like to show that

(2.9)\begin{equation} \left\Vert g_{m}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R} _{od}^{2}\right) \oplus\mathcal{D}}\leq C\left\Vert \varphi\chi _{m}\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{od}^{2}\right) }, \end{equation}

where C is independent of m. Once this is proved, then as m tends to $+\infty,$ gm will converge in $C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) \oplus\mathcal{D}$ to a solution g of the equation $L_{U}g=\varphi$ and then the map

\begin{equation*} L_{U}:C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) \oplus\mathcal{D}\rightarrow C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R} _{od}^{2}\right) \end{equation*}

will be an isomorphism.

Suppose, to the contrary, that (2.9) were not true. Then there would be a subsequence, still denoted by $g_{m},$ such that

\begin{equation*} A_{m}:=\left\Vert g_{m}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) \oplus\mathcal{D}}\rightarrow+\infty. \end{equation*}

Let us define the normalized functions

\begin{equation*} \tilde{g}_{m}:=\frac{g_{m}}{A_{m}},\ \ \tilde{\varphi}_{m} :=\frac{\varphi\chi_{m}}{A_{m}}. \end{equation*}

In view of (2.8), the normalized function $\tilde{g}_{m}$ can be decomposed into the following form:

\begin{equation*} \tilde{g}_{m}=f_{m}+\alpha_{m}dU\left( Y_{2}-Y_{4}\right) , \end{equation*}

where $f_{m}\in C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{od}^{2}\right) $ and $\alpha_{m}\in\mathbb{R}.$

Since the norm of $\tilde{g}_{m}$ is equal to 1, $\alpha_{m}$ and the norm of $f_{m}$ are uniformly bounded from above. We now claim that $\alpha_{m}$ converges to zero as m tends to infinity. Indeed, if this were not true, then using the equation

\begin{equation*} L_{U}\tilde{g}_{m}=\tilde{\varphi}_{m}, \end{equation*}

we find that $\tilde{g}_{m}$ will converge to a solution $\tilde{g}$ of the equation $L_{U}\tilde{g}=0.$ Moreover, since $\alpha_{m}$ is assumed to be bounded away from zero, $\tilde{g}$ will grow linearly in the y direction. Note that $\tilde{g}$ is also odd in the y variable. But linearly growing kernels of the linearized operator around the saddle solution are well understood: if such a kernel is odd in y, then it has to be 0 (this follows from the analysis in [Reference Kowalczyk, Liu and Pacard22, Reference Kowalczyk, Liu and Pacard23]). This is a contradiction, and the claim is thus proved.

Now, we have

\begin{equation*} L_{U}f_{m}=\tilde{\varphi}_{m}-\alpha_{m}L_{U}\left[ dU\left( Y_{2} -Y_{4}\right) \right] . \end{equation*}

Let $\left( x_{m},y_{m}\right) $ be a point where

\begin{equation*} \frac{f_{m}\left( x_{m},y_{m}\right) }{\Gamma_{\gamma,\delta}\left( x_{m},y_{m}\right) }\geq\frac{1}{2}\left\Vert \frac{f_{m}}{\Gamma _{\gamma,\delta}}\right\Vert _{L^{\infty}\left( \mathbb{R}^{2}\right) }. \end{equation*}

Note that the right-hand side is bounded away from zero uniformly in $m.$ We define functions

\begin{equation*} W_{m}\left( x,y\right) :=\frac{f_{m}\left( x_{m}+x,y_{m}+y\right) } {\Gamma_{\gamma,\delta}\left( x_{m},y_{m}\right) }. \end{equation*}

There are several possibilities.

Case 1. As $m\rightarrow+\infty,$ there hold $x_{m}\rightarrow+\infty$, $y_{m}\rightarrow+\infty,$ and $x_{m}\leq y_{m}.$

In this case, the point $(x_m,y_m)$ will be far away from the ends, and hence, $\left\vert U\left( x_{m},y_{m}\right) \right\vert \rightarrow1$. Note that Wm satisfies the equation

\begin{equation*} -\Delta W_{m}+\left( 3U^{2}\left( x+x_{m},y+y_{m}\right) -1\right) W_{m}=\frac{\tilde{\varphi}_{m}-\alpha_{m}L_{U}\left[ dU\left( Y_{2} -Y_{4}\right) \right] }{\Gamma_{\gamma,\delta}\left( x_{m},y_{m}\right) }, \end{equation*}

where the function on the right-hand side is evaluated at $(x+x_m,y+y_m)$. Now using the fact that the weighted norm of $\tilde{\varphi}_m$ and the constant $\alpha_{m}$ both tend to zero, we deduce that $W_{m}$ will converge to a solution $W_{\infty}$ satisfying

(2.10)\begin{equation} -\Delta W_{\infty}+2W_{\infty}=0. \end{equation}

Moreover, by the definition of $W_{m}$, we have $\left\Vert W_{\infty}\right\Vert _{L^{\infty}(\mathbb{R}^2)} \gt 0$. On the other hand, since the weighted norm of $f_{m}$ is uniformly bounded, using the asymptotic behaviour of the weight function $\Gamma_{\gamma,\delta}$, there holds

(2.11)\begin{equation} \left\vert W_{\infty}\left( x,y\right) \right\vert \leq Ce^{\delta x+\gamma y}. \end{equation}

But under assumption (2.5), any solution of (2.10) satisfying (2.11) has to be identically zero (this can be proved by applying the Fourier transform to the equation satisfied by the function $ e^{-\gamma x-\delta y}W_{\infty}\left( x,y\right) $). This is a contradiction.

Case 2. As $m\rightarrow+\infty,$ $\left\vert x_{m}\right\vert $ is uniformly bounded, and $y_{m}\rightarrow+\infty.$

In this case, up to a subsequence, for some constant $c_{0}$, the translated function $U(x+x_m,y+y_m)$ will converge to the heteroclinic solution $H(x+c_0)$. Therefore, $W_{m}$ will converge to a solution $\bar{W}_{\infty}$ satisfying

\begin{equation*} -\Delta\bar{W}_{\infty}+\left( 3H\left( x+c_{0}\right) ^{2}-1\right) \bar{W}_{\infty}=0. \end{equation*}

Similarly, as before, $\left\Vert \bar{W}_{\infty}\right\Vert _{L^{\infty}(\mathbb{R}^2)} \gt 0$ and

\begin{equation*} \left\vert \bar{W}_{\infty}\left( x,y\right) \right\vert \leq Ce^{\gamma y}. \end{equation*}

Using the assumption that $\left\vert \gamma\right\vert \in\left( 0,\sqrt{\frac{3}{2}}\right) ,$ we can deduce (still using the Fourier transform) that $\bar{W}_{\infty}=0$. This is a contradiction.

The other cases can be similarly treated.

3. Family of solutions in the inner region

Let ɛ > 0 be a small parameter. As we have already mentioned in the first section, our aim in this paper is to construct a six-end solution uɛ, which will be called a fence of saddle solution, by gluing two saddle solutions together.

The solution uɛ will look like two saddle solutions juxtaposed along the x axis. It will be even symmetric with respect to the y axis and odd with respect to the x axis. The distance between the two saddle solutions will be large. It is worth pointing out that to make this construction possible, we need to slightly translate and rotate the vertical (y direction) ends of these two saddle solutions, thus eventually making a small angle between these ‘almost’ vertical ends.

To describe uɛ in a more precise way, let F be the unique even solution with $F\left( 0\right) =-1$, solving the equation:

(3.1)\begin{equation} F^{\prime\prime}+12\sqrt{2}e^{2\sqrt{2}F}=0. \end{equation}

We define

(3.2)\begin{equation} \tilde{F}\left( y\right) :=F\left( \varepsilon y\right) +\frac{\sqrt{2} }{2}\ln\varepsilon. \end{equation}

Note that $\tilde{F}$ solves the same equation as F.
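Indeed, this follows from a direct computation (a short verification included here for the reader's convenience):

\begin{equation*} \tilde{F}^{\prime\prime}\left( y\right) =\varepsilon^{2}F^{\prime\prime}\left( \varepsilon y\right) =-12\sqrt{2}\,\varepsilon^{2}e^{2\sqrt{2}F\left( \varepsilon y\right) }=-12\sqrt{2}\,e^{2\sqrt{2}\left( F\left( \varepsilon y\right) +\frac{\sqrt{2}}{2}\ln\varepsilon\right) }=-12\sqrt{2}\,e^{2\sqrt{2}\tilde{F}\left( y\right) }. \end{equation*}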

Before proceeding, let us briefly explain why we need to introduce this function F. Indeed, the equation satisfied by F is the simplest case of the so-called Toda system. The Toda system plays an important role in the analysis of the Allen–Cahn equation. As we have mentioned in § 1, in [Reference del Pino, Kowalczyk, Pacard and Wei10], families of multiple-end solutions of the Allen–Cahn equation have been constructed from the Toda system. The nodal sets of these solutions are very close to graphs of solutions of the Toda system. More details in this respect will be discussed in the next section. It is also worth mentioning that in dimensions larger than 2, nontrivial solutions of the Allen–Cahn equation can also be constructed from suitable minimal surfaces. In the two-dimensional plane, minimal surfaces are simply straight lines, and the graph of F does converge to half straight lines at infinity.
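To illustrate this last point, the following numerical sketch (ours; the integration interval and tolerances are arbitrary choices) integrates equation (3.1) with the even initial data $F(0)=-1$, $F^{\prime}(0)=0$ and shows that $F^{\prime}$ approaches a negative constant, so that the graph of F is indeed asymptotic to a half straight line.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate F'' = -12*sqrt(2)*exp(2*sqrt(2)*F) on [0, 20] with F(0) = -1, F'(0) = 0.
def rhs(y, state):
    F, Fp = state
    return [Fp, -12.0 * np.sqrt(2) * np.exp(2.0 * np.sqrt(2) * F)]

sol = solve_ivp(rhs, (0.0, 20.0), [-1.0, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)

for y in (0.0, 5.0, 10.0, 20.0):
    F, Fp = sol.sol(y)
    print(f"y = {y:5.1f}   F = {F:9.4f}   F' = {Fp:8.4f}")
# F' tends to a negative constant, i.e. the graph of F becomes asymptotically linear.
```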

We now introduce the constant

\begin{equation*}d_{1}\in\left( 0,\frac{\sqrt{2}}{4}\right) ,\end{equation*}

which is assumed to be independent of ɛ. Define an even function $\tilde{f}$ such that

\begin{equation*} \tilde{f}\left( y\right) :=\left\{ \begin{array} [c]{l} \tilde{F}\left( 0\right) ,\ \ \text{for }|y|\leq d_{1}\left\vert \ln\varepsilon\right\vert ,\\ \tilde{F}\left( y-d_{1}\left\vert \ln\varepsilon\right\vert \right) ,\text{ for }y \gt d_{1}\left\vert \ln\varepsilon\right\vert . \end{array} \right. \end{equation*}

When the parameter ɛ > 0 is small,

(3.3)\begin{equation}\tilde{f}\left( 0\right)=F\left(0\right) +\frac{\sqrt{2} }{2}\ln\varepsilon \ll 0. \end{equation}

The nodal set of uɛ will then be close, in a suitable sense, to the union of the graphs of $\pm\tilde{f}$ and the x axis. That is,

\begin{equation*} \left\{\left( x,y\right):x=\tilde{f}\left( y\right) \right\} \cup\left\{ \left( x,y\right) :x=-\tilde{f}\left( y\right) \right\} \cup\left\{\left( x,y\right) :y=0\right\}. \end{equation*}

Around the point $\left( \pm\tilde{f}\left( 0\right) ,0\right) ,$ uɛ will resemble −U or $U.$ Moreover, for $\left\vert y\right\vert $ large, around the graphs of $\pm\tilde{f},$ the solution uɛ will look like the one-dimensional heteroclinic solution H in the direction orthogonal to the graphs.

We will use the Cauchy data matching method developed in the context of constant mean curvature surfaces in [Reference Mazzeo and Pacard27, Reference Mazzeo, Pacard and Pollack28]. This method is well suited for our problem, because the solutions in the inner and outer regions behave differently, and it is more convenient to deal with these two parts separately.

Before proceeding to the details, let us briefly sketch the main steps of the proof. In the first step, we construct a family of solutions in the inner region

(3.4)\begin{equation} \Omega:=\left\{ \left( x,y\right) :\left\vert y\right\vert \lt d_{1}\left\vert \ln\varepsilon\right\vert \right\} , \end{equation}

with suitable boundary data on $\partial\Omega.$ This step uses the nondegeneracy of U in an essential way and is a finite dimensional Lyapunov–Schmidt reduction.

In the second step, we construct a family of solutions in the outer region

\begin{equation*} \Lambda:=\left\{ \left( x,y\right) :\left\vert y\right\vert \gt d_{1}\left\vert \ln\varepsilon\right\vert \right\} , \end{equation*}

again with suitable boundary data on $\partial\Lambda.$ This is an infinite dimensional Lyapunov–Schmidt reduction argument.

In the third step, we show that there exist certain boundary data in the inner and outer regions such that the corresponding solutions match each other up to $C^{1}$ on the boundary. By the elliptic nature of the equation, this then yields a smooth entire solution of the Allen–Cahn equation.

3.1. Mapping properties of the linearized operators for the inner region

Recall that H is the one-dimensional heteroclinic solution of the Allen–Cahn equation. Let us use Π to denote the $L^{2}$-projection onto the space spanned by $H^{\prime}$ and use $\Pi^{\perp}$ to denote the projection onto the orthogonal complement of $H^{\prime}.$ That is, for $\eta\in L^{2}\left( \mathbb{R}\right) ,$

(3.5)\begin{equation} \Pi\eta:=\frac{\int_{\mathbb{R}}\eta H^{\prime}ds}{\int_{\mathbb{R}} H^{\prime2}ds}H^{\prime},\ \ \Pi^{\perp}\eta:=\eta-\Pi\eta. \end{equation}
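As a small numerical illustration of (3.5) (ours; the truncation to a finite interval and the explicit formula $H^{\prime}(x)=\frac{1}{\sqrt{2}}\operatorname{sech}^{2}\left( x/\sqrt{2}\right)$ are the only ingredients not taken from the text), one can approximate Π and $\Pi^{\perp}$ by quadrature:

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
H_prime = (1.0 / np.sqrt(2)) / np.cosh(x / np.sqrt(2)) ** 2   # H'(x) for H = tanh(x/sqrt(2))

def Pi(eta):
    """Quadrature approximation of the projection of eta onto span{H'}."""
    coeff = (np.sum(eta * H_prime) * dx) / (np.sum(H_prime ** 2) * dx)
    return coeff * H_prime

def Pi_perp(eta):
    return eta - Pi(eta)

eta = np.exp(-x ** 2)                              # a sample function of x
print(np.sum(Pi_perp(eta) * H_prime) * dx)         # essentially 0: orthogonal to H'
```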

We will use the notation

\begin{equation*} \mathbb{R}_{+}^{2}:=\left\{ \left( x,y\right) :y \gt 0\right\} . \end{equation*}

Then, we define

\begin{align*} C_{\gamma,\delta}^{j,\mu}\left( \mathbb{R}_{+}^{2}\right) & :=\left\{ e^{\gamma y}\cosh^{\delta}\left( x\right) \phi:\phi\in C^{j,\mu}\left( \mathbb{R}_{+}^{2}\right) \right\} ,\\ C_{\delta}^{j,\mu}\left( \mathbb{R}\right) & :=\left\{ \cosh^{\delta }\left( x\right) \phi:\phi\in C^{j,\mu}\left( \mathbb{R}\right) \right\} . \end{align*}

Lemma 3.1. Let $0 \lt \sigma_{0} \lt 2$ be a fixed constant. Suppose $\gamma,\delta$ are negative constants such that

\begin{equation*} \gamma\in\left( -\frac{1}{\sqrt{2}},0\right) ,\quad\gamma^{2}+\delta ^{2} \lt 2-\sigma_{0}. \end{equation*}

Then, there exist operators $G,P,$

\begin{align*} G & :C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{+}^{2}\right) \rightarrow C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{+}^{2}\right) ,\\ P & :\Pi^{\perp}\left( C_{\delta}^{2,\mu}\left( \mathbb{R}\right) \right) \rightarrow C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{+} ^{2}\right) , \end{align*}

such that for each $\xi\in C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{+} ^{2}\right) $ with

(3.6)\begin{equation} \Pi\xi\left( \cdot,y\right) =0,\ \text{for }y\geq0, \end{equation}

and each $\phi\in\Pi^{\perp}\left( C_{\delta}^{2,\mu}\left( \mathbb{R} \right) \right) $, the function $G\xi+P\phi\in C_{\gamma,\delta}^{2,\mu }\left( \mathbb{R}_{+}^{2}\right) $ is a solution of the problem

(3.7)\begin{equation} \left\{ \begin{array} [c]{l} -\Delta w+\left( 3H^{2}\left( x\right) -1\right) w=\xi,\\ \Pi w\left( \cdot,y\right) =0,\ \text{for }y\geq0,\\ w\left( \cdot,0\right) =\phi. \end{array} \right. \end{equation}

Moreover, $\left\Vert G\right\Vert +\left\Vert P\right\Vert \leq C,$ where C is independent of $\gamma,\delta.$

We remark that for later use, we actually only need to assume that the negative constants γ and δ are sufficiently close to 0 (but independent of ɛ), and the same remark applies to proposition 3.2. See also the paragraph right before lemma 2.3.

Proof. We first consider the case of ϕ = 0.

Given $\xi\in C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R}_{+}^{2}\right) $ satisfying (3.6), to show the existence of a solution to problem (3.7), we consider the function space $E_{0}$ consisting of those functions $w\in H_{0}^{1}\left( \mathbb{R}_{+} ^{2}\right) $ with the additional requirement

\begin{equation*} \int_{\mathbb{R}}w\left( \cdot,y\right) H^{\prime}dx=0,\ \text{for a.e. }y \gt 0. \end{equation*}

Note that the functional

\begin{equation*} \int_{\mathbb{R}_{+}^{2}}\left[ \left\vert \nabla w\right\vert ^{2}+\left( 3H^{2}\left( x\right) -1\right) w^{2}\right] dxdy \end{equation*}

is coercive on this space. Hence, by the Lax–Milgram theorem, there exists $w\in E_{0}$ such that

\begin{equation*} \left\{ \begin{array} [c]{l} -\Delta w+\left( 3H^{2}-1\right) w=\xi,\ \text{in }\mathbb{R}_{+}^{2},\\ w\left( x,0\right) =0. \end{array} \right. \end{equation*}

Moreover, by elliptic regularity,

(3.8)\begin{equation} \left\Vert w\right\Vert _{L^{\infty}\left( \mathbb{R}_{+}^{2}\right) }\leq C\left\Vert \xi\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \mathbb{R} _{+}^{2}\right) } \end{equation}

where C is independent of $\xi.$ With the estimate (3.8) at hand, we can use the same barrier construction arguments as those of lemmas 3.4 and 3.5 in [Reference del Pino, Kowalczyk, Pacard and Wei10] to deduce that

\begin{equation*} \left\Vert w\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{+} ^{2}\right) }\leq C\left\Vert \xi\right\Vert _{C_{\gamma,\delta}^{0,\mu }\left( \mathbb{R}_{+}^{2}\right) }. \end{equation*}

In the general case that $\phi\neq0,$ we introduce a new function

\begin{equation*} \tilde{w}\left( x,y\right) :=w\left( x,y\right) -\chi\left( y\right) \phi\left( x\right) , \end{equation*}

where χ is a cutoff function such that

\begin{equation*} \chi\left( y\right) =\left\{ \begin{array} [c]{l} 1,\ \text{for }y\in\left[ 0,1\right] ,\\ 0,\ \text{for }y\in\left( 2,+\infty\right) . \end{array} \right. \end{equation*}

Then, the problem (3.7) for the unknown function w will be transformed to the following problem for $\tilde{w}$:

\begin{equation*} \left\{ \begin{array} [c]{l} -\Delta\tilde{w}+\left( 3H^{2}-1\right) \tilde{w}=\xi-\Delta\left( \chi \phi\right) +\left( 3H^{2}-1\right) \chi\phi,\\ \Pi\tilde{w}\left( \cdot,y\right) =0,\ \text{for }y\geq0,\\ \tilde{w}\left( \cdot,0\right) =0. \end{array} \right. \end{equation*}

The previous argument tells us that it has a solution $\tilde{w}$ with

\begin{equation*} \left\Vert \tilde{w}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{+}^{2}\right) }\leq C\left\Vert \xi-\Delta\left( \chi \phi\right) +\left( 3H^{2}-1\right) \chi\phi\right\Vert _{C_{\gamma,\delta }^{0,\mu}\left( \mathbb{R}_{+}^{2}\right) }. \end{equation*}

It follows that (3.7) has a solution w with

\begin{equation*} \left\Vert w\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R}_{+} ^{2}\right) }\leq C\left\Vert \xi\right\Vert _{C_{\gamma,\delta}^{0,\mu }\left( \mathbb{R}_{+}^{2}\right) }+C\left\Vert \phi\right\Vert _{C_{\delta }^{2,\mu}\left( \mathbb{R}\right) }. \end{equation*}

This completes the proof.

Keep in mind that we want to construct solutions in the inner region by gluing saddle solutions. For this purpose, we need to analyse the mapping property of the linearized operator $L_{U}$ restricted to functions defined on the region $\Omega$ defined by (3.4).

Let $\Gamma_{\gamma,\delta}$ be the weight function introduced in the previous section and define the space

\begin{equation*} C_{\gamma,\delta}^{j,\mu}\left( \Omega_{od}\right) :=\left\{ \phi \Gamma_{\gamma,\delta}:\phi\in C^{j,\mu}\left( \Omega\right) \ \text{and }\phi\left( x,y\right) =-\phi\left( x,-y\right) \right\} . \end{equation*}

For each $\xi\in C_{\gamma,\delta}^{j,\mu}\left( \Omega_{od}\right) ,$ its norm is defined to be

\begin{equation*} \left\Vert \xi\right\Vert _{C_{\gamma,\delta}^{j,\mu}\left( \Omega _{od}\right) }:=\left\Vert \frac{\xi}{\Gamma_{\gamma,\delta}}\right\Vert _{C^{j,\mu}\left( \Omega_{od}\right) }. \end{equation*}

Throughout the paper, we will set

\begin{equation*} y_{\varepsilon}=d_{1}\left\vert \ln\varepsilon\right\vert , \end{equation*}

where $d_{1}$ is the constant that appeared in the definition of $\Omega.$ The next result deals with the mapping property of $L_{U}$ in the space of functions odd in y.

Proposition 3.2. Let $\gamma,\delta$ be fixed negative constants sufficiently close to zero and $\left\vert \delta\right\vert \leq\left\vert \gamma\right\vert $. There exist linear maps

\begin{align*} G_{1} & :C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) \rightarrow C_{\gamma,\delta}^{2,\mu}\left( \Omega_{od}\right) ,\\ P_{1} & :\Pi^{\perp}\left( C_{\delta}^{2,\mu}\left( \mathbb{R}\right) \right) \rightarrow C_{\gamma,\delta}^{2,\mu}\left( \Omega_{od}\right) , \end{align*}

with $\ \left\Vert G_{1}\right\Vert +\left\Vert P_{1}\right\Vert \leq C\varepsilon^{-4d_{1}|\gamma|},$ where C depends on $\gamma,\delta$ (but does not depend on ɛ), such that for all $\eta\in C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) $ and all $\phi\in\Pi^{\perp}\left( C_{\delta}^{2,\mu}\left( \mathbb{R} \right) \right) $, the function $w:=G_{1}\left( \eta\right) +P_{1}\left( \phi\right) $ is a solution of the problem

\begin{equation*} \left\{ \begin{array} [c]{l} L_{U}w=\eta,\ \text{in }\Omega,\\ \Pi^{\perp}w\left( \cdot,y_{\varepsilon}\right) =\phi. \end{array} \right. \end{equation*}

Here, Π is the projection operator defined in (3.5), acting on functions of the x variable.

Proof. Let ζ be the solution of

(3.9)\begin{equation} \left\{ \begin{array}[c]{l} -\Delta\zeta+\left( 3H^{2}\left( x\right) -1\right) \zeta=0,\text{ }y \lt y_{\varepsilon},\\ \zeta\left( \cdot,y_{\varepsilon}\right) =\phi,\\ \zeta\left( x,y\right) \rightarrow0,\ \text{as }y\rightarrow-\infty. \end{array} \right. \end{equation}

The existence of ζ can be deduced from lemma 3.1 if we translate the boundary of $\mathbb{R}^2_+$ to $y=y_{\varepsilon}$. Moreover, we can assume that $\Pi\zeta=0$. Furthermore, if δ is close to zero, then the solution ζ satisfies the following estimate:

(3.10)\begin{equation} \left\Vert \cosh^{\left\vert \delta\right\vert }\left( x\right) e^{\frac{1}{2}\left( y-y_{\varepsilon}\right) }\zeta\right\Vert _{C^{2,\mu }\left( \Omega\right) }\leq C\left\Vert \phi\right\Vert _{C_{\delta}^{2,\mu }\left( \mathbb{R}\right) }. \end{equation}

We remark that the weight function $\Gamma_{\gamma,\delta}$ has a form different from that of the weight appearing in front of ζ in (3.10). Indeed, $\Gamma_{\gamma,\delta}$ measures the decay along each end. On the other hand, the decay estimate (3.10) tells us that the boundary data ϕ have only a very small influence on the solution near the origin, around which the saddle solution is centred.

Let $\alpha_{0}$ be a fixed constant independent of ɛ and ϑ be a cutoff function such that

\begin{equation*} \vartheta\left( s\right) =\left\{ \begin{array} [c]{l} 1,\ \text{for }s \gt \alpha_{0},\\ 0,\ \text{for }s \lt \alpha_{0}-1. \end{array} \right. \end{equation*}

We then define

\begin{equation*} \bar{\zeta}\left( x,y\right) :=\vartheta\left( y\right) \zeta\left( x,y\right) -\vartheta\left( -y\right) \zeta\left( x,-y\right) \end{equation*}

and introduce a new function

(3.11)\begin{equation} \bar{w}:=w-\bar{\zeta}. \end{equation}

The equation $L_{U}w=\eta$ is transformed into

\begin{equation*} L_{U}\bar{w}=L_{U}w-L_{U}\bar{\zeta}:=\bar{\eta}. \end{equation*}

We then solve

(3.12)\begin{equation} \left\{ \begin{array} [c]{l} L_{U}\bar{w}=\bar{\eta},\ \text{in }\Omega,\\ \Pi^{\perp}\bar{w}\left( \cdot,\pm y_{\varepsilon}\right) =0. \end{array} \right. \end{equation}

Let $\alpha_{1}$ be a large positive constant to be determined later on. Observe that U is close to $H\left( x\right) $ for $y\in\left[ \alpha_{1}, y_{\varepsilon} \right] .$ We look for a solution $w_{1}$ of

(3.13)\begin{equation} \left\{ \begin{array} [c]{l} -\Delta w_{1}+\left( 3H^{2}\left( x\right) -1\right) w_{1}=\bar{\eta }\text{, for }y\in\left[ \alpha_{1}, y_{\varepsilon} \right] ,\\ w_{1}=0\ \text{and }\partial_{y}\left( \Pi w_{1}\right) =0,\ \text{for }y=\alpha_{1},\\ \Pi^{\perp}w_{1}\left( \cdot,y_{\varepsilon}\right) =0. \end{array} \right. \end{equation}

To solve this, we write $w_{1}$ as $\Pi w_{1}+\Pi^{\perp}w_{1},$ and $\bar{\eta}$ as $\Pi\bar{\eta}+\Pi^{\perp}\bar{\eta},$ and split the problem into two parts, one orthogonal to $H^{\prime}$ and the other parallel to $H^{\prime}.$ If

\begin{equation*} \Pi w_{1}=\frac{\rho\left( y\right) }{\int_{\mathbb{R}}H^{\prime2} ds}H^{\prime}\left( x\right) , \end{equation*}

then multiplying the first equation in (3.13) by $H'(x)$ and integrating in x, we find that the parallel part reduces to a second-order ordinary differential equation of the form

\begin{equation*} \rho^{\prime\prime}\left( y\right) =-\int_{\mathbb{R}}\bar{\eta}\left( s,y\right) H^{\prime}\left( s\right) ds, \end{equation*}

with the initial condition $\rho\left( \alpha_{1}\right) =\rho^{\prime }\left( \alpha_{1}\right) =0.$ Then,

\begin{equation*} \rho\left( y\right) =-\int_{\alpha_{1}}^{y}\left( \int_{\alpha_{1}}^{k} \int_{\mathbb{R}}\bar{\eta}\left( s,t\right) H^{\prime}\left( s\right) dsdt\right) dk. \end{equation*}

Note that

\begin{equation*} \sup_{y\in\left[ \alpha_{1},d_{1}\left\vert \ln\varepsilon\right\vert \right] }\left\vert e^{\left\vert \gamma\right\vert y}\rho\left( y\right) \right\vert \leq Ce^{d_{1}\left\vert \gamma\ln\varepsilon\right\vert }\left\vert \ln\varepsilon\right\vert \left\Vert \bar{\eta}\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }. \end{equation*}

For the orthogonal part, we can use the same argument as that of lemma 3.1 to obtain a solution $\Pi^{\perp}w_{1}$ of

\begin{equation*} \left\{ \begin{array}[c]{l} -\Delta\left( \Pi^{\perp}w_{1}\right) +\left( 3H^{2}\left( x\right) -1\right) \Pi^{\perp}w_{1}=\Pi^{\perp}\bar{\eta}\text{, for }y\in\left[ \alpha_{1}, y_{\varepsilon} \right] ,\\ \Pi^{\perp}w_{1}\left( \cdot,y\right) =0,\ \text{for }y=\alpha_{1}\ \text{and }y_{\varepsilon} . \end{array} \right. \end{equation*}

We then get a solution $w_{1}$ of (3.13) with the following estimate:

(3.14)\begin{equation} \left\Vert w_{1}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega _{od}\right) }\leq C\varepsilon^{-d_{1}\left\vert \gamma\right\vert }\left\vert \ln\varepsilon\right\vert \left\Vert \bar{\eta}\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }, \end{equation}

where C depends on $\gamma,\delta.$

We know from the estimate (2.4) that for some universal constants $c_1, c \gt 0$ (for instance, c can be chosen to be 1), which only depend on the saddle solution, there holds

(3.15)\begin{equation} \left\vert U\left( x,y\right) -H\left( x\right) \right\vert \leq c_1 e^{-c\left\vert y\right\vert},\ \text{for }y\in\left[ \alpha_{1} ,y_{\varepsilon}\right] . \end{equation}

A perturbation argument tells us that if we choose $\alpha_{1}$ to be large enough, then the following problem is solvable:

\begin{equation*} \left\{ \begin{array} [c]{l} L_{U}u_{1}=\bar{\eta}\text{, for }y\in\left[ \alpha_{1},y_{\varepsilon} \right] ,\\ u_{1}=0\ \text{and }\partial_{y}\left( \Pi u_{1}\right) =0,\ \text{for }y=\alpha_{1},\\ \Pi^{\perp}u_{1}\left( \cdot,y_{\varepsilon}\right) =0. \end{array} \right. \end{equation*}

To be more precise, we write the equation

\begin{equation*} L_{U}u_{1}=\bar{\eta} \end{equation*}

into the form

\begin{equation*} u_{1}=L_{H}^{-1}\left[ \bar{\eta}+3\left( H^{2}-U^{2}\right) u_{1}\right] . \end{equation*}

Observe that the coefficient $\varepsilon^{-d_{1}\left\vert \gamma \right\vert }\left\vert \ln\varepsilon\right\vert $ appears in (3.14) simply because the weight function grows exponentially along the end; hence, the estimate is worst on the boundary $y=y_{\varepsilon}$ and is much better when y is small. Then, using (3.15), we can see that if $\left\vert \gamma \right\vert $ is less than $\frac{c}{2},$ there holds

\begin{equation*} \left\Vert 3L_{H}^{-1}\left( H^{2}-U^{2}\right) u_{1}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{od}\right) }\leq Ce^{-\frac {c\alpha_{1}}{2}}\left\Vert u_{1}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{od}\right) }. \end{equation*}

We can then choose $\alpha_{1}$ large enough such that $Ce^{-\frac{c\alpha_{1} }{2}} \lt 1.$ Then, a direct application of the contraction mapping principle yields the existence of the required solution.

With this function $u_{1}$ at hand, we now define g such that g = 0 in $\mathbb{R}^{2}\backslash\Omega$ and

\begin{equation*} g\left( x,y\right) =\bar{\eta}\left( x,y\right) -L_{U}\left[ \vartheta\left( y\right) u_{1}\left( x,y\right) -\vartheta\left( -y\right) u_{1}\left( x,-y\right) \right] ,\ \text{in }\Omega. \end{equation*}

Consider the equation

(3.16)\begin{equation} L_{U}v=g\text{,} \ \text{in }\mathbb{R}^{2}. \end{equation}

By lemma 2.3, (3.16) has a solution v with

(3.17)\begin{equation} \left\Vert v\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \mathbb{R} ^{2}\right) \oplus\mathcal{D}}\leq C\left\Vert g\right\Vert _{C_{\gamma ,\delta}^{0,\mu}\left( \mathbb{R}^{2}\right) }. \end{equation}

Let $\varkappa$ be a smooth cutoff function such that $\varkappa\left( y\right) =\varkappa\left( -y\right) $ and

\begin{equation*} \varkappa\left( y\right) =\left\{ \begin{array} [c]{l} 1,\ \text{for }\left\vert y\right\vert \lt y_{\varepsilon}-1,\\ 0,\ \text{for }\left\vert y\right\vert \geq y_{\varepsilon}. \end{array} \right. \end{equation*}

We now define

\begin{equation*} \Phi:=\Pi v+\varkappa\Pi^{\perp}v. \end{equation*}

Then, $\Pi^{\perp}\Phi=0$ on $\partial\Omega.$ Recall that v may contain a component in the deficiency space $\mathcal{D}$, which grows linearly in the y direction. By (3.17), there holds

\begin{equation*} \left\Vert \Phi\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega _{od}\right) }\leq C\varepsilon^{-d_{1}\left\vert \gamma\right\vert }\left\vert \ln\varepsilon\right\vert \left\Vert g\right\Vert _{C_{\gamma ,\delta}^{0,\mu}\left( \Omega_{od}\right) }. \end{equation*}

Observe that g is supported in the region where $\left\vert y\right\vert \lt \alpha_{1}+1.$ Hence, $\Pi^{\perp}v$ decays exponentially fast for $\left\vert y\right\vert \gt \alpha_{1}+1\ $and there holds

\begin{equation*} \left\vert \Pi^{\perp}v\right\vert \leq Ce^{-\left\vert y\right\vert }\left\Vert g\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega _{od}\right) }. \end{equation*}

We compute

\begin{align*} L_{U}\Phi & =L_{U}v-L_{U}\left[ \left( 1-\varkappa\right) \Pi^{\perp }v\right] \\ & =g-L_{U}\left[ \left( 1-\varkappa\right) \Pi^{\perp}v\right] . \end{align*}

It follows that for some $\sigma \gt 0,$ there holds

\begin{equation*} \left\Vert L_{U}\Phi-g\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }\leq C\varepsilon^{\sigma}\left\Vert g\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }. \end{equation*}

Observe that Φ depends linearly on $g.$ Hence, it can be written as $Mg,$ where M is a linear operator. We seek h such that

\begin{equation*} L_{U}M\left( g+h\right) =g. \end{equation*}

This can be written as

\begin{equation*} h=h-L_{U}Mh+g-L_{U}Mg. \end{equation*}
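Indeed, since $\Phi=Mg$ depends linearly on $g,$ the estimate above reads $\left\Vert L_{U}Mg-g\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }\leq C\varepsilon^{\sigma}\left\Vert g\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) },$ and the same estimate holds for right-hand sides of the same type as $g.$ Applying it with g replaced by $h_{1}-h_{2}$ gives

\begin{equation*} \left\Vert \left( h_{1}-L_{U}Mh_{1}\right) -\left( h_{2}-L_{U}Mh_{2}\right) \right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }\leq C\varepsilon^{\sigma}\left\Vert h_{1}-h_{2}\right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{od}\right) }, \end{equation*}

so the map defining h is a contraction once ɛ is small.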

The existence of a solution h follows from the contraction mapping principle. Then, the function

\begin{equation*} \bar{w}=M\left( g+h\right) +\vartheta\left( y\right) u_{1}\left( x,y\right) -\vartheta\left( -y\right) u_{1}\left( x,-y\right) \end{equation*}

solves the problem (3.12), and the function w given by (3.11) is the required solution. This completes the proof.

3.2. Solutions with prescribed boundary data on $\partial\Omega$

With the linearized Allen–Cahn operator of the saddle solution being understood, we proceed to solve the nonlinear problem in the inner region Ω.

Let $\tilde{f}\left( 0\right) $ be as defined in (3.3). We introduce the notation $y_{1}=y_{2}=y$ and

\begin{equation*} x_{1}=x-\tilde{f}\left( 0\right) ,\ \ x_{2}=x+\tilde{f}\left( 0\right) . \end{equation*}

Then, we define the approximate solution

\begin{equation*} u_{0}\left( x,y\right) :=U_{1}\left( x,y\right) -U_{2}\left( x,y\right) -H\left( y\right) , \end{equation*}

where

\begin{equation*} U_{1}\left( x,y\right) =U\left( x_{1},y_{1}\right) ,\ \ U_{2}\left( x,y\right) =U\left( x_{2},y_{2}\right) . \end{equation*}

We emphasize that $u_{0}$ actually depends on the parameter $\varepsilon,$ since $\tilde{f}\left( 0\right) \sim\frac{\sqrt{2}}{2}\ln\varepsilon.$ Moreover, $u_{0}$ is even in x and odd in $y:$

\begin{equation*} u_{0}\left( x,y\right) =u_{0}\left( -x,y\right) =-u_{0}\left( x,-y\right) . \end{equation*}

Although this symmetry is not essential for our construction, it does simplify the notation.

Let $u=u_{0}+\Psi.$ Then, u will be a solution of the Allen–Cahn equation if the perturbation Ψ satisfies

\begin{equation*} L_{u_{0}}\Psi:=-\Delta\Psi+\left( 3u_{0}^{2}-1\right) \Psi =\underbrace{\Delta u_{0}+u_{0}-u_{0}^{3}}_{E\left( u_{0}\right) }\underbrace{-3u_{0}\Psi^{2}-\Psi^{3}}_{T\left( \Psi\right) }. \end{equation*}
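Indeed, substituting $u=u_{0}+\Psi$ into the Allen–Cahn equation and expanding the cubic term, we find

\begin{align*} -\Delta\left( u_{0}+\Psi\right) -\left( u_{0}+\Psi\right) +\left( u_{0}+\Psi\right) ^{3} & =-\Delta\Psi+\left( 3u_{0}^{2}-1\right) \Psi-\left( \Delta u_{0}+u_{0}-u_{0}^{3}\right) +3u_{0}\Psi^{2}+\Psi^{3}\\ & =L_{u_{0}}\Psi-E\left( u_{0}\right) -T\left( \Psi\right) , \end{align*}

which vanishes precisely when the above equation for Ψ holds.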

Throughout the paper, for any function $h,$ we will use $h^{e}$ to denote its even reflection across the y axis. That is,

\begin{equation*} h^{e}\left( x,y\right) :=h\left( -x,y\right) . \end{equation*}

Let η be a smooth cutoff function such that $\eta\left( x\right) +\eta^{e}\left( x\right) =1$ and

(3.18)\begin{equation} \eta\left( x\right) =\left\{ \begin{array} [c]{l} 1,\ \text{for }x \lt -1,\\ 0,\ \text{for }x \gt 1. \end{array} \right. \end{equation}

We will construct solutions of the Allen–Cahn equation in the inner region Ω with the form

\begin{equation*} u=u_{0}+\underbrace{\phi+\phi^{e}}_{\Psi}, \end{equation*}

where ϕ satisfies

(3.19)\begin{equation} L_{U_{1}}\phi=\left[ E\left( u_{0}\right) +T\left( \Psi\right) +L_{U_{1} }\phi-L_{u_{0}}\phi+L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right] \eta:=K_{0} \end{equation}

and $\phi^{e}$ satisfies

(3.20)\begin{equation} L_{U_{2}}\phi^{e}=\left[ E\left( u_{0}\right) +T\left( \Psi\right) +L_{U_{1}}\phi-L_{u_{0}}\phi+L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right] \eta^{e}. \end{equation}

Keep in mind that due to the even symmetry of the functions in the x variable, if ϕ solves (3.19), then automatically $\phi^{e}$ solves (3.20). One can check that taking the sum of (3.19) and (3.20) yields

\begin{equation*} L_{u_{0}}\left( \Psi\right) =E\left( u_{0}\right) +T\left( \Psi\right) . \end{equation*}
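Indeed, since $\eta+\eta^{e}=1,$ adding (3.19) and (3.20) gives

\begin{equation*} L_{U_{1}}\phi+L_{U_{2}}\phi^{e}=E\left( u_{0}\right) +T\left( \Psi\right) +L_{U_{1}}\phi-L_{u_{0}}\phi+L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}, \end{equation*}

and cancelling $L_{U_{1}}\phi+L_{U_{2}}\phi^{e}$ from both sides yields $L_{u_{0}}\left( \phi+\phi^{e}\right) =E\left( u_{0}\right) +T\left( \Psi\right) ,$ which is the identity displayed above.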

To solve equation (3.19), we need to estimate the error of the approximate solution $u_{0}.$ Using the identity

\begin{align*} & \left( a+b+c\right) ^{3}-a^{3}-b^{3}-c^{3}\\ & =3\left( a+c\right) \left( b+c\right) \left[ \left( a-c\right) +\left( b+c\right) \right] \\ & =3\left( a^{2}-c^{2}\right) \left( b+c\right) +3\left( a+c\right) \left( b+c\right) ^{2}, \end{align*}

we compute, taking $a=U_{1},$ $b=-U_{2},$ $c=-H\left( y\right) ,$

\begin{align*} E\left( u_{0}\right) & =\Delta\left[ U_{1}-U_{2}-H\left( y\right) \right] +U_{1}-U_{2}-H\left( y\right) -U_{1}^{3}+U_{2}^{3}+H^{3}\left( y\right) \\ & +3\left[ U_{1}^{2}-H^{2}\left( y\right) \right] \left[ U_{2}+H\left( y\right) \right] -3\left[ U_{1}-H\left( y\right) \right] \left[ U_{2}+H\left( y\right) \right] ^{2}. \end{align*}

Since $U_{1},U_{2},H$ solve the Allen–Cahn equation, there holds

(3.21)\begin{equation} E\left( u_{0}\right) =3\left[ U_{1}^{2}-H^{2}\left( y\right) \right] \left[ U_{2}+H\left( y\right) \right] -3\left[ U_{1}-H\left( y\right) \right] \left[ U_{2}+H\left( y\right) \right] ^{2}. \end{equation}

To proceed, recall that in the previous section, we have defined the weight function $\Gamma_{\gamma,\delta}\left( x,y\right) $ associated with the saddle solution $U\left( x,y\right) .$ We use $C^{j,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) $ to denote the space of functions of the variables $\left( x_{1},y_{1}\right) $ which are of class $C^{j,\mu}$ in $\Omega.$ One can also regard $\Omega_{\left( x_{1},y_{1}\right) }$ as the set

\begin{equation*} \left\{ \left( x_{1},y_{1}\right) :x_{1}\in\mathbb{R},\left\vert y_{1}\right\vert \lt y_{\varepsilon}\right\} . \end{equation*}

We then define the space

\begin{equation*} C_{\gamma,\delta}^{j,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) :=\{\Gamma_{\gamma,\delta}\left( x_{1},y_{1}\right)\cdot g: g\in C^{j,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) \} , \end{equation*}

where the norms are computed for functions with respect to the $\left( x_{1},y_{1}\right) $ variable.

We now estimate the error of the approximate solution.

Lemma 3.3. There exist $\delta^{\ast} \lt 0,\gamma^{\ast} \lt 0$ such that for all $\delta\in\left( \delta^{\ast},0\right) ,\gamma\in\left( \gamma^{\ast },0\right) ,$ the following estimate holds:

\begin{equation*} \left\Vert \eta E\left( u_{0}\right) \right\Vert _{C_{\gamma,\delta}^{0,\mu }\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\leq C\varepsilon ^{\frac{6}{5}}, \end{equation*}

where C is independent of $\varepsilon.$

Proof. Since the function is even in the x variable, we only need to estimate the error in the left half plane, that is, in the region where x < 0. In this region, we have $|x_1| \lt |x_2|$.

By (3.21), the main order of $\eta E\left( u_{0}\right) $ is

\begin{equation*} I:=3\left( U^{2}\left( x_{1},y_{1}\right) -H^{2}\left( y\right) \right) \left( U\left( x_{2},y_{2}\right) +H\left( y\right) \right) . \end{equation*}

Case 1. Estimates in the region

\begin{equation*} \left\{ \left( x_{1},y_{1}\right) :x_{1} \gt \left\vert y_{1}\right\vert, x \lt 0 \right\} . \end{equation*}

Let us take $\gamma_{0}$ close to $-\sqrt{\frac{3}{2}}$ and $\delta_{0}$ close to $-\sqrt{2}+\sqrt{\frac{3}{2}}.$ Then using (2.4), we can estimate

\begin{align*} \left\vert I\right\vert & \leq C\left\vert U\left( x_{1},y_{1}\right) -H\left( y\right) \right\vert \left\vert U\left( x_{2},y_{2}\right) +H\left( y\right) \right\vert \\ & \leq C\exp\left[ \gamma_{0}\left( \left\vert x_{1}\right\vert +\left\vert x_{2}\right\vert \right) +2\delta_{0}\left\vert y_{1}\right\vert \right] \\ & \leq C\varepsilon^{\sqrt{3}-\kappa}\exp\left( 2\delta_{0}\left\vert y_{1}\right\vert \right) , \end{align*}

where κ is a fixed small positive constant. Therefore, in this region,
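Here we have also used that, by the definition of $x_{1},x_{2}$ and the asymptotics of $\tilde{f}\left( 0\right) $ recalled above,

\begin{equation*} \left\vert x_{1}\right\vert +\left\vert x_{2}\right\vert \geq\left\vert x_{1}-x_{2}\right\vert =2\left\vert \tilde{f}\left( 0\right) \right\vert =\sqrt{2}\left\vert \ln\varepsilon\right\vert \left( 1+o\left( 1\right) \right) , \end{equation*}

so that $\exp\left[ \gamma_{0}\left( \left\vert x_{1}\right\vert +\left\vert x_{2}\right\vert \right) \right] \leq\varepsilon^{-\sqrt{2}\gamma_{0}\left( 1+o\left( 1\right) \right) }\leq\varepsilon^{\sqrt{3}-\kappa}$ for ɛ small, since $\gamma_{0}$ is close to $-\sqrt{\frac{3}{2}}.$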

\begin{equation*} \left\vert I\exp\left( \left\vert \gamma x_{1}\right\vert +\left\vert \delta y_{1}\right\vert \right) \right\vert \leq C\varepsilon^{\sqrt{3}-\kappa} \exp\left( \left( \left\vert \delta\right\vert +2\delta_{0}\right) \left\vert y_{1}\right\vert +\left\vert \gamma\right\vert \left\vert x_{1}\right\vert \right) . \end{equation*}

Observe that in this region, we have

\begin{equation*} 0 \lt x_{1} \lt \frac{\sqrt{2}}{2}\left\vert \ln\varepsilon\right\vert +C. \end{equation*}

Therefore, if $\left\vert \delta\right\vert ,\left\vert \gamma\right\vert $ are chosen to be small such that $\left\vert \delta\right\vert +2\delta _{0} \lt 0,$ $\frac{\sqrt{2}\left\vert \gamma\right\vert }{2} \lt \kappa,$ and $\sqrt{3}-2\kappa \gt \frac{6}{5},$ then

\begin{align*} \left\vert I\exp\left( \left\vert \gamma x_{1}\right\vert +\left\vert \delta y_{1}\right\vert \right) \right\vert & \leq C\varepsilon^{\sqrt{3}-\kappa }\exp\left( \frac{\sqrt{2}\left\vert \gamma\right\vert }{2}\left\vert \ln\varepsilon\right\vert \right) \\ & \leq C\varepsilon^{\sqrt{3}-2\kappa}\leq C\varepsilon^{\frac{6}{5}}. \end{align*}

Case 2. Estimates in the region

\begin{equation*} \left\{ \left( x_{1},y_{1}\right) :0 \lt x_{1} \lt y_{1}\ \text{and }\left\vert x_{2}\right\vert \gt \left\vert y_{2}\right\vert \right\} . \end{equation*}

With the same (negative) constants $\gamma_{0},\delta_{0}$ as in Case 1, there holds

\begin{align*} \left\vert U\left( x_{1},y_{1}\right) -H\left( x_{1}\right) \right\vert & \leq Ce^{\gamma_{0}\left\vert y_{1}\right\vert }e^{\delta_{0}\left\vert x_{1}\right\vert },\\ \left\vert U\left( x_{2},y_{2}\right) +H\left( y_{2}\right) \right\vert & \leq Ce^{\gamma_{0}\left\vert x_{2}\right\vert }e^{\delta_{0}\left\vert y_{2}\right\vert }. \end{align*}

As a consequence,

\begin{align*} \left\vert I\right\vert & \leq\left( \left\vert U\left( x_{1} ,y_{1}\right) -H\left( x_{1}\right) \right\vert +\left\vert H\left( x_{1}\right) -H\left( y\right) \right\vert \right) e^{\gamma _{0}\left\vert x_{2}\right\vert +\delta_{0}\left\vert y_{2}\right\vert }\\ & \leq C\exp\left( \gamma_{0}\left\vert y_{1}\right\vert +\delta _{0}\left\vert x_{1}\right\vert +\gamma_{0}\left\vert x_{2}\right\vert +\delta_{0}\left\vert y_{2}\right\vert \right) \\ & \quad +C\exp\left( -\sqrt {2}\left\vert x_{1}\right\vert +\gamma_{0}\left\vert x_{2}\right\vert +\delta_{0}\left\vert y_{2}\right\vert \right) \\ & \leq C\varepsilon^{\sqrt{3}-\kappa}\exp\left( \delta_{0}\left\vert x_{1}\right\vert +\delta_{0}\left\vert y_{2}\right\vert \right) . \end{align*}

It follows that when $\left\vert \gamma\right\vert ,\left\vert \delta\right\vert $ are small,

\begin{equation*} \left\vert I\exp\left( \left\vert \gamma y_{1}\right\vert +\left\vert \delta x_{1}\right\vert \right) \right\vert \leq C\varepsilon^{\frac{6}{5}}. \end{equation*}

The estimates in other regions are similar.

Note that we also need to estimate the Hölder norm of the error. This can be done in a similar way, since the Hölder norm of U also has exponential decay along each end.

Note that the exponent $\frac{6}{5}$ is not of particular importance; what is crucial here is that this exponent is greater than 1.

For a function $\phi=\phi\left( x\right) $, we set

\begin{equation*} \phi^{\ast}\left( x_{1}\right) :=\phi\left( x\right) . \end{equation*}

Here, x is regarded as a function of $x_{1}$, defined at the beginning of this subsection. $\left\Vert \cdot\right\Vert _{C^{2,\mu}\left( \mathbb{R}_{x_{1}}\right) }$ will denote the $C^{2,\mu}$ norm with respect to the $x_{1}$ variable. For a function $\varphi\left( x,y\right) ,$ we also set

\begin{equation*} \varphi^{\ast}\left( x_{1},y_{1}\right) :=\varphi\left( x,y\right) . \end{equation*}

The following proposition states that we can construct solution u to the Allen–Cahn equation in $\Omega,$ with $u=u_{0}+h$ at $y=y_{\varepsilon},$ for suitable prescribed boundary data $h.$

Proposition 3.4. Suppose $h\in C^{2,\mu}\left( \mathbb{R}\right) $ satisfies

\begin{equation*} \left\Vert \frac {\left( \eta h\right) ^{\ast}} {\Gamma_{\gamma,\delta}\left( x_{1},y_{\varepsilon}\right)} \right\Vert _{C^{2,\mu}\left( \mathbb{R}_{x_{1} }\right) }\leq C\varepsilon^{\frac{11}{10}} \end{equation*}

and

\begin{equation*} \int_{\mathbb{R}}\left( \eta h\right) ^{\ast}H^{\prime}\left( x_{1}\right) dx_{1}=0. \end{equation*}

Assume that $d_{1}\left\vert \gamma\right\vert \lt \frac{1}{40}.$ Then, there exists a solution ϕ to the problem

(3.22)\begin{equation} \left\{ \begin{array} [c]{l} L_{U_{1}}\phi=K_{0},\ \text{in }\Omega,\\ \Pi_{x_{1}}^{\perp}\phi^{\ast}\left( \cdot,y_{\varepsilon}\right) =\left( \eta h\right) ^{\ast}. \end{array} \right. \end{equation}

Here, $\Pi_{x_{1}}^{\perp}$ is the orthogonal projection operator with respect to the $x_{1}$ variable. The function ϕ satisfies

(3.23)\begin{equation} \left\Vert \phi\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\leq C\varepsilon^{\frac{21}{20}}. \end{equation}

Moreover, at the boundary, we have

(3.24)\begin{equation} \left\Vert \frac {\left(\partial_y (\eta \phi)\right) ^{\ast}\left( x_{1},y_{\varepsilon}\right)} {\Gamma_{\gamma,\delta}\left( x_{1},y_{\varepsilon}\right)} \right\Vert _{C^{2,\mu}\left( \mathbb{R}_{x_{1} }\right) }\leq C\varepsilon^{\frac{11}{10}} \end{equation}

Proof. Using the operators G 1 and P 1 of proposition 3.2 (with $\left( x,y\right) $ replaced by $\left( x_{1},y_{1}\right) $), we can recast (3.22) as the following fixed point problem:

(3.25)\begin{equation} \phi=G_{1}\left( \left[ E\left( u_{0}\right) +T\left( \Psi\right) +L_{U_{1}}\phi-L_{u_{0}}\phi+L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right] \eta\right) +P_{1}\left( \eta h\right) . \end{equation}

Using the estimate of $E\left( u_{0}\right) $ obtained in lemma 3.3, we find that

\begin{align*} \left\Vert G_{1}\left( \eta E\left( u_{0}\right) \right) \right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) } & \leq\left\Vert G_{1}\right\Vert \left\Vert \eta E\left( u_{0}\right) \right\Vert _{C_{\gamma,\delta}^{0,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\\ & \quad \leq C\varepsilon^{\frac{11}{10}-4d_{1}\left\vert \gamma\right\vert }\leq C\varepsilon^{\frac{21}{20}},\\ \left\Vert P_{1}\left( \eta h\right) \right\Vert _{C_{\gamma,\delta}^{2,\mu }\left( \Omega_{\left( x_{1},y_{1}\right) }\right) } & \leq\left\Vert P_{1}\right\Vert \left\Vert \eta h\right\Vert _{C^{2,\mu}_{\delta}\left( \mathbb{R}_{x_{1}}\right) }\leq C\varepsilon^{\frac{21}{20}}. \end{align*}

Let $M_{0}$ be a fixed large constant. We define

\begin{equation*} \mathcal{S}:=\left\{ g\in C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1} ,y_{1}\right) }\right) :\left\Vert g\right\Vert _{C_{\gamma,\delta}^{2,\mu }\left( \Omega_{\left( x_{1},y_{1}\right) }\right) } \lt M_{0}\varepsilon ^{\frac{21}{20}}\right\} . \end{equation*}

We would like to prove that the right-hand side of (3.25) is a contraction mapping in $\mathcal{S},$ provided that ɛ is small enough. Indeed, for $\phi_{1},\phi_{2}\in \mathcal{S}$, since

\begin{equation*} T\left( \Psi\right) =-3u_{0}\Psi^{2}-\Psi^{3}, \end{equation*}

there holds

\begin{align*} & \left\Vert G_{1}\left[ \eta T\left( \phi_{1}+\phi_{1}^{e}\right) \right] -G_{1}\left[ \eta T\left( \phi_{2}+\phi_{2}^{e}\right) \right] \right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1} ,y_{1}\right) }\right) }\\ & \quad \leq C\varepsilon^{\frac{11}{10}-4d_{1}\left\vert \gamma\right\vert }\left\Vert \phi_{1}-\phi_{2}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\\ & \quad \leq C\varepsilon\left\Vert \phi_{1}-\phi_{2}\right\Vert _{C_{\gamma ,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }. \end{align*}

Recall that

\begin{equation*} L_{u_{0}}\phi-L_{U_{1}}\phi=3\left( u_{0}^{2}-U_{1}^{2}\right) \phi. \end{equation*}

In the region where $\eta\neq0,$ we have

\begin{equation*} \left\vert u_{0}\left( x,y\right) -U_{1}\left( x,y\right) \right\vert =\left\vert U_{2}\left( x,y\right) +H\left( y\right) \right\vert \leq C\varepsilon. \end{equation*}

Actually, in this region, a similar estimate holds for the Hölder norm:

\begin{align*} \left\Vert u_{0}\left( x,y\right) -U_{1}\left( x,y\right) \right\Vert_{C^{2,\mu}} =\left\Vert U_{2}\left( x,y\right) +H\left( y\right) \right\Vert_{C^{2,\mu}} \leq C\varepsilon. \end{align*}

It then follows that

\begin{align*} & \left\Vert G_{1}\left[ \left( L_{U_{1}}\phi_{1}-L_{u_{0}}\phi_{1}\right) \eta\right] -G_{1}\left[ \left( L_{U_{1}}\phi_{2}-L_{u_{0}}\phi_{2}\right) \eta\right] \right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\\ & \ \ \leq C\varepsilon^{1-4d_{1}\left\vert \gamma\right\vert }\left\Vert \phi_{1}-\phi_{2}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega _{\left( x_{1},y_{1}\right) }\right) }. \end{align*}

Let us consider the term

\begin{equation*} \left\Vert \left( L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right) \eta\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }=\left\Vert 3\left( u_{0}^{2}-U_{2}^{2}\right) \phi^{e} \eta\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1} ,y_{1}\right) }\right) }. \end{equation*}

To estimate it, we write

\begin{equation*} \left\vert 3\left( u_{0}^{2}-U_{2}^{2}\right) \phi^{e}\eta\Gamma _{\gamma,\delta}\right\vert =\left\vert 3\left( u_{0}^{2}-U_{2}^{2}\right) \phi^{e}\Gamma_{\gamma,\delta}^{e}\eta\frac{\Gamma_{\gamma,\delta}} {\Gamma_{\gamma,\delta}^{e}}\right\vert . \end{equation*}

Using the decay estimate of $u_{0}-U_{2}$ and the definition of $\Gamma _{\gamma,\delta},$ we see that

\begin{equation*} \left\vert \left( u_{0}+U_{2}\right) \frac{\Gamma_{\gamma,\delta}} {\Gamma_{\gamma,\delta}^{e}}\right\vert \leq C\varepsilon^{\sqrt{2}\left\vert \gamma\right\vert \left\vert \ln\varepsilon\right\vert }. \end{equation*}

From this we arrive at

\begin{equation*} \left\Vert \left( L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right) \eta\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\leq C\varepsilon^{\sqrt{2}\left\vert \ln\varepsilon\right\vert \left\vert \gamma\right\vert }\left\Vert \phi\right\Vert _{C_{\gamma,\delta }^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }. \end{equation*}

Therefore, we obtain

\begin{align*} & \left\Vert G_{1}\left[ \left( L_{U_{2}}\phi_{1}^{e}-L_{u_{0}}\phi_{1} ^{e}\right) \eta\right] -G_{1}\left[ \left( L_{U_{2}}\phi_{2}^{e} -L_{u_{0}}\phi_{2}^{e}\right) \eta\right] \right\Vert _{C_{\gamma,\delta }^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }\\ & \ \ \leq C\varepsilon^{\left( \sqrt{2}-4d_{1}\right) \left\vert \ln\varepsilon\right\vert \left\vert \gamma\right\vert }\left\Vert \phi _{1}-\phi_{2}\right\Vert _{C_{\gamma,\delta}^{2,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) }. \end{align*}

Keep in mind that we have assumed $d_{1} \lt \frac{\sqrt{2}}{4}.$

It follows from these estimates that the right-hand side of $\left( 3.25\right) $ is a contraction map. We can also check that the right-hand side of (3.25) maps each $\phi\in \mathcal{S}$ into $\mathcal{S}.$ We then conclude that (3.25) has a solution in $\mathcal{S}.$ Finally, the better estimate (3.24) at the boundary is a consequence of the fact that the operators $P_1,G_1$ are constructed by taking a cutoff for the function in (3.9). This completes the proof.

4. Family of solutions in the outer region

In this section, we will construct a family of solutions in the ‘outer region’

\begin{equation*} \Lambda:=\left\{ \left( x,y\right) \in\mathbb{R}^{2} :\left\vert y\right\vert \gt y_{\varepsilon }\right\} , \end{equation*}

with suitable boundary data to be described later on. Since we are interested in solutions which are odd in the y variable, it will suffice for us to consider the equation in the domain

\begin{equation*} \Lambda^{+}:=\left\{ \left( x,y\right) \in\mathbb{R}^{2}:y \gt y_{\varepsilon }\right\} . \end{equation*}

The idea for solving the equation in the outer region is very similar to that of [Reference del Pino, Kowalczyk, Pacard and Wei10], where ‘entire solutions’ with almost parallel ends are constructed using infinite dimensional Lyapunov–Schmidt reduction. The main difference here is that the so-called ‘projected equation’ turns out to be a boundary value problem, while in [Reference del Pino, Kowalczyk, Pacard and Wei10], it is an ODE on the whole real line.

4.1. The Fermi coordinate and the approximate solution

The first step towards a construction in the outer region is to define suitable approximate solutions. We have in mind that the solutions should have a nodal set close to the graph associated with the Toda equation, which has already been introduced in (3.2). Geometrically, the solution will have two ends in the upper half plane. As y tends to infinity, around each end, the solution will then converge to the one-dimensional heteroclinic solution. Since the Allen–Cahn equation is invariant with respect to translation and rotation, the linearized Allen–Cahn operator has kernel elements which correspond to these geometric actions. This fact has another important consequence: if we want to perturb an approximate solution into a genuine one, we are forced to translate or rotate the ends a little bit.

To rigorously deal with the geometric perturbation of the ends, we need to introduce the following two-dimensional space:

\begin{equation*} \mathcal{E}:=\text{Span}\left\{ s\mapsto\chi\left( s\right) ,s\mapsto s\chi\left( s\right) \right\} , \end{equation*}

where χ is a smooth function such that

\begin{equation*} \chi\left( s\right) =\left\{ \begin{array} [c]{c} 1,\ \text{for }s \gt 2,\\ 0,\ \text{for }s \lt 1. \end{array} \right. \end{equation*}

Then, the function χ corresponds to translation of the ends, and $s\chi$ corresponds to rotation. If $a\chi+bs\chi\in\mathcal{E}$, then its $\mathcal{E}$ norm is defined to be

\begin{equation*} \left\Vert a\chi+bs\chi\right\Vert _{\mathcal{E}}:=\sqrt{a^{2}+b^{2}}. \end{equation*}

For $v\in\mathcal{E}$ with $\left\Vert v\right\Vert _{\mathcal{E}}\leq C\varepsilon^{\frac{1}{20}},$ we set $\bar{v}_{\varepsilon}\left( y\right) =v\left( \varepsilon y\right) .$ Let $\tilde{F}$ be the function defined in (3.2) and

\begin{equation*} f\left( y\right) :=\tilde{F}\left( y\right) +\bar{v}_{\varepsilon}\left( y\right) . \end{equation*}

Note that f depends on $\varepsilon.$

To define the approximate solution, we need to use the notion of Fermi coordinate with respect to the graph Γ of the function $x=f\left( y\right) $. This new coordinate will still be denoted by $\left( x_{1},y_{1}\right).$ It is defined in the following way. Suppose $Z=\left( x,y\right) \in\Lambda^{+}$ is a point in a tubular neighbourhood (of size $\varepsilon^{-1}$) of $\Gamma.$ Assume $\tilde{Z}=\left( f\left( y_{1}\right) ,y_{1}\right) $ is the unique point on Γ which realizes the distance from Z to $\Gamma.$ Then, for some $x_{1},$

\begin{equation*} \left( x,y\right) =\left( f\left( y_{1}\right) ,y_{1}\right) +x_{1}\left( \frac{1}{\sqrt{1+f^{\prime2}\left( y_{1}\right) }} ,\frac{-f^{\prime}\left( y_{1}\right) }{\sqrt{1+f^{\prime2}\left( y_{1}\right) }}\right) . \end{equation*}

The Fermi coordinate of Z is defined to be $\left( x_{1},y_{1}\right) .$

The Laplacian operator written in this coordinate has the form

\begin{align*} \Delta_{\left( x,y\right) }&=\frac{1}{A}\partial_{y_{1}}^{2}+\partial_{x_{1} }^{2}+\frac{1}{2}\frac{\partial_{x_{1}}A}{A}\partial_{x_{1}}-\frac{1}{2} \frac{\partial_{y_{1}}A}{A^{2}}\partial_{y_{1}}\\ &={\Delta_{\left( x_1,y_1 \right)}}+\left( \frac{1}{A}-1 \right)\partial_{y_1}^2+\frac{1}{2}\frac{\partial_{x_{1}}A}{A}\partial_{x_{1}}-\frac{1}{2}\frac{\partial_{y_{1}}A}{A^{2}}\partial_{y_{1}}, \end{align*}

where

\begin{equation*} A=1+\left( f^{\prime}\left( y_{1}\right) \right) ^{2}-2x_{1} \frac{f^{\prime\prime}\left( y_{1}\right) }{\sqrt{1+\left( f^{\prime }\left( y_{1}\right) \right) ^{2}}}+x_{1}^{2}\frac{\left( f^{\prime\prime }\left( y_{1}\right) \right) ^{2}}{\left( 1+\left( f^{\prime}\left( y_{1}\right) \right) ^{2}\right) ^{2}}. \end{equation*}
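As a consistency check (not needed in what follows), we note that A is a perfect square: writing $\kappa_{\Gamma}:=\frac{f^{\prime\prime}\left( y_{1}\right) }{\left( 1+\left( f^{\prime}\left( y_{1}\right) \right) ^{2}\right) ^{3/2}}$ for the signed curvature of $\Gamma,$ one checks directly that

\begin{equation*} A=\left( 1+\left( f^{\prime}\left( y_{1}\right) \right) ^{2}\right) \left( 1-x_{1}\kappa_{\Gamma}\right) ^{2}. \end{equation*}

In particular, $A=1+\left( f^{\prime}\left( y_{1}\right) \right) ^{2}$ on Γ, that is, at $x_{1}=0.$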

For more details, we refer to [Reference del Pino, Kowalczyk, Pacard and Wei10], section 5.1. In our case, the terms other than $\Delta_{\left( x_1,y_1 \right)}$ can be regarded as perturbation terms.

We use $C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty )\right) $ to denote the space consisting of those functions $\rho\in C^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) $ such that the following norm is finite:

\begin{equation*} \left\Vert \rho\right\Vert _{C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }:=\left\Vert \rho e^{\left\vert \gamma\right\vert y}\right\Vert _{C^{2,\mu}\left( [\varepsilon y_{\varepsilon },+\infty)\right) }. \end{equation*}

Suppose that we are given a function $\rho\in C^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) $, with

\begin{equation*} \left\Vert \rho\right\Vert _{C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }\leq C\varepsilon^{\frac{1}{20}}. \end{equation*}

Let

\begin{equation*} \bar{\rho}_{\varepsilon}\left( y\right) :=\rho\left( \varepsilon y\right) . \end{equation*}

Abusing the notation of the previous section, for each function $\varphi,$ we still use $\varphi^{\ast}$ to denote the corresponding function in the $\left( x_{1},y_{1}\right) $ variables. That is,

\begin{equation*} \varphi^{\ast}\left( x_{1},y_{1}\right) =\varphi\left( x,y\right) . \end{equation*}

Let $H_{1}$ be the function such that

\begin{equation*} H_{1}^{\ast}\left( x_{1},y_{1}\right) :=H\left( x_{1}-\bar{\rho }_{\varepsilon}\left( y_{1}\right) \right) . \end{equation*}

Let θ be a smooth cutoff function with $\theta\left( x,y\right) =\theta\left( -x,y\right) $ and

\begin{equation*} \theta\left( x,y\right) =\left\{ \begin{array} [c]{l} 1,\ \text{for }x\in\left( f\left( y\right) -\frac{1}{\varepsilon},-f\left( y\right) +\frac{1}{\varepsilon}\right) ,\\ 0,\ \text{for }x \gt -f\left( y\right) +\frac{1}{\varepsilon}+1,\ \text{or }x \lt f\left( y\right) -\frac{1}{\varepsilon}-1. \end{array} \right. \end{equation*}

Define the approximate solution

(4.1)\begin{equation} \bar{u}_{0}=\left( H_{1}+H_{1}^{e}\right) \theta-1. \end{equation}

We use L to denote the operator

\begin{equation*} L\xi:=-\Delta\xi+2\xi. \end{equation*}
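Note that $2=3\left( \pm1\right) ^{2}-1,$ so, in the notation used above for linearized operators, L is simply the linearized Allen–Cahn operator at the constant states $\pm1$:

\begin{equation*} L_{\pm1}\xi=-\Delta\xi+\left( 3\left( \pm1\right) ^{2}-1\right) \xi=-\Delta\xi+2\xi=L\xi. \end{equation*}

This is the reason why, away from the interfaces, where $\bar{u}_{0}$ is exponentially close to $\pm1,$ the difference $L-L_{\bar{u}_{0}}$ can be treated as a small perturbation (see the proof of proposition 4.2 below).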

We will look for a solution $\bar{u}$ of the Allen–Cahn equation in $\Lambda ^{+}$ with the form

(4.2)\begin{equation} \bar{u}=\bar{u}_{0}+\underbrace{\left( \psi+\psi^{e}\right) \theta+\omega }_{\bar{\Psi}}. \end{equation}

To do this, let us define (reusing, with a slight abuse of notation, the symbol $K_{0}$ of the previous section)

\begin{equation*} K_{0}:=E\left( \bar{u}_{0}\right) +T\left( \bar{\Psi}\right) +L\omega-L_{\bar{u}_{0}}\omega+L_{H_{1}}\psi-L_{\bar{u}_{0}}\psi+L_{H_{1}^{e} }\psi^{e}-L_{\bar{u}_{0}}\psi^{e}. \end{equation*}

We require that ω satisfies $\omega\left( x,y\right) =\omega\left( -x,y\right) $ and

(4.3)\begin{equation} L\omega=\left( 1-\theta\right) K_{0}+L_{\bar{u}_{0}}\left( \psi+\psi ^{e}\right) -L_{\bar{u}_{0}}\left[ \left( \psi+\psi^{e}\right) \theta\right] :=K_{1}. \end{equation}

Let η be the function defined in (3.18). We also require that the unknown functions ψ and $\psi^{e}$ satisfy

(4.4)\begin{equation} L_{H_{1}}\psi=K_{0}\theta\eta:=K_{2} \end{equation}

and

(4.5)\begin{equation} L_{H_{1}^{e}}\psi^{e}=K_{0}\theta\eta^{e}. \end{equation}

By taking the sum of these three equations, we get

(4.6)\begin{equation} L_{\bar{u}_{0}}\left( \bar{\Psi}\right) =E\left( \bar{u}_{0}\right) +T\left( \bar{\Psi}\right) . \end{equation}

That is, $\bar{u}_{0}+\bar{\Psi}$ satisfies the Allen–Cahn equation. To see this, we first observe that the second and third equations are equivalent to each other. Indeed, the third equation results from the second one by taking the reflection across the y axis. Now summing up equations (4.4) and (4.5) and using the fact that $\eta+\eta^{e}=1,$ we obtain

(4.7)\begin{equation} L_{H_{1}}\psi+L_{H_{1}^{e}}\psi^{e}=K_{0}\theta. \end{equation}

Adding equations (4.3) and (4.7) together, we get

\begin{align*} & L\omega+L_{H_{1}}\psi+L_{H_{1}^{e}}\psi^{e}\\ & \quad =K_{0}+L_{\bar{u}_{0}}\left( \psi+\psi^{e}\right) -L_{\bar{u}_{0}}\left[ \left( \psi+\psi^{e}\right) \theta\right] \\ & \quad =E\left( \bar{u}_{0}\right) +T\left( \bar{\Psi}\right) +L\omega -L_{\bar{u}_{0}}\omega+L_{H_{1}}\psi-L_{\bar{u}_{0}}\psi+L_{H_{1}^{e}}\psi ^{e}-L_{\bar{u}_{0}}\psi^{e}\\ & \qquad +L_{\bar{u}_{0}}\left( \psi+\psi^{e}\right) -L_{\bar{u}_{0}}\left[ \left( \psi+\psi^{e}\right) \theta\right] . \end{align*}

That is,

\begin{equation*} L_{\bar{u}_{0}}\left[ \omega+\left( \psi+\psi^{e}\right) \theta\right] =E\left( \bar{u}_{0}\right) +T\left( \bar{\Psi}\right) . \end{equation*}

This is precisely equation (4.6).

Roughly speaking, equations (4.4) and (4.5) take care of the error near the nodal set, while equation (4.3) deals with the error away from the nodal set, where the approximate solution is very close to 1 or −1.

Recall that F is the solution of the Toda equation defined in (3.1). We put

\begin{equation*} \alpha_{1}=2\sqrt{2}\lim_{s\rightarrow+\infty}F^{\prime}\left( s\right) \lt 0. \end{equation*}

Lemma 4.1. Let $\gamma\in\left( \alpha_{1},0\right) $ be a fixed constant and ɛ > 0 be sufficiently small. For given $\varphi\in C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon },+\infty)\right) $ and $\left( s,t\right) \in\mathbb{R}^{2},$ the linear problem

\begin{equation*} \left\{ \begin{array} [c]{l} g^{\prime\prime}+48e^{2\sqrt{2}F}g=\varphi,\ \text{for }y \gt \varepsilon y_{\varepsilon},\\ g\left( \varepsilon y_{\varepsilon}\right) =s\ \text{and }g^{\prime}\left( \varepsilon y_{\varepsilon}\right) =t \end{array} \right. \end{equation*}

has a solution $g=G_{2}\left( s,t\right) +P_{2}\left( \varphi\right) ,$ where

\begin{align*} G_{2} & :\mathbb{R}^{2}\rightarrow C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) ,\\ P_{2} & :C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon} ,+\infty)\right) \rightarrow C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) \end{align*}

are operators such that

\begin{equation*} \left\Vert G_{2}\right\Vert +\left\Vert P_{2}\right\Vert \leq C, \end{equation*}

where C is independent of $\varepsilon.$

Proof. For each $a\in\mathbb{R},$ the Toda equation

\begin{equation*} u^{\prime\prime}+12\sqrt{2}e^{2\sqrt{2}u}=0 \end{equation*}

has a solution u with $u\left( 0\right) =a$ and $u'(0)=0$. By the uniqueness of solutions to the ODE, we see that u is an even solution. Let us write it as $u\left( y;a\right).$ Recall that $F\left( y\right) =u\left( y;-1\right) .$

The functions $\xi_{1}:=\frac{e^{2\sqrt{2}}}{12\sqrt{2}}\partial_{y}u|_{a=-1}$ and $\xi_{2} :=\partial_{a}u|_{a=-1}$ satisfy the linearized Toda equation

\begin{equation*} \xi^{\prime\prime}+48e^{2\sqrt{2}F}\xi=0. \end{equation*}

Note that $\xi_{1}$ is odd, $\xi_{2}$ is even, and

\begin{align*} \xi_{1}\left( 0\right) & =0,\xi_{1}^{\prime}\left( 0\right) =-1,\\ \xi_{2}\left( 0\right) & =1,\xi_{2}^{\prime}\left( 0\right) =0. \end{align*}

Moreover, as y tends to $+\infty,$ $\xi_{1}$ tends to a nonzero constant and $\xi_{2}$ grows linearly.

For given constants $s,t$, let us choose $s_{\varepsilon},t_{\varepsilon}$ such that

\begin{equation*} \left\{ \begin{array} [c]{l} t_{\varepsilon}\xi_{1}\left( \varepsilon y_{\varepsilon}\right) +s_{\varepsilon}\xi_{2}\left( \varepsilon y_{\varepsilon}\right) =s,\\ t_{\varepsilon}\xi_{1}^{\prime}\left( \varepsilon y_{\varepsilon}\right) +s_{\varepsilon}\xi_{2}^{\prime}\left( \varepsilon y_{\varepsilon}\right) =t. \end{array} \right. \end{equation*}

Since $\varepsilon y_{\varepsilon}=O\left( \varepsilon\left\vert \ln\varepsilon\right\vert \right) ,$ there holds

\begin{equation*} s_{\varepsilon}\sim s\ \text{and } -t_{\varepsilon}\sim t. \end{equation*}

Let us define

\begin{equation*} g\left( y\right) =t_{\varepsilon}\xi_{1}\left( y\right) +s_{\varepsilon }\xi_{2}\left( y\right) +\xi_{2}\left( y\right) \int_{\varepsilon y_{\varepsilon}}^{y}\xi_1(s)\varphi(s) ds-\xi_{1}\left( y\right) \int_{\varepsilon y_{\varepsilon}}^{y}\xi_2(s)\varphi(s) ds. \end{equation*}

Then,

\begin{equation*} g\left( \varepsilon y_{\varepsilon}\right) =s,\ \ g^{\prime}\left( \varepsilon y_{\varepsilon}\right) =t, \end{equation*}

and

\begin{equation*} g^{\prime\prime}+48e^{2\sqrt{2}F}g=\varphi. \end{equation*}
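The last identity is the variation of parameters formula. It relies on the fact that the Wronskian of $\xi_{1},\xi_{2}$ is constant and, by the initial data above, equal to 1:

\begin{equation*} \left( \xi_{1}\xi_{2}^{\prime}-\xi_{1}^{\prime}\xi_{2}\right) ^{\prime}=\xi_{1}\xi_{2}^{\prime\prime}-\xi_{1}^{\prime\prime}\xi_{2}=0,\ \ \text{hence }\xi_{1}\xi_{2}^{\prime}-\xi_{1}^{\prime}\xi_{2}\equiv\xi_{1}\left( 0\right) \xi_{2}^{\prime}\left( 0\right) -\xi_{1}^{\prime}\left( 0\right) \xi_{2}\left( 0\right) =1. \end{equation*}

Denoting by $g_{p}$ the last two terms in the definition of g above, a direct differentiation then gives $g_{p}^{\prime\prime}+48e^{2\sqrt{2}F}g_{p}=\left( \xi_{1}\xi_{2}^{\prime}-\xi_{1}^{\prime}\xi_{2}\right) \varphi=\varphi,$ while the first two terms solve the homogeneous equation.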

Splitting the range of integration, we can rewrite g as

\begin{align*} g\left( y\right) & =\left( t_{\varepsilon}-\int_{\varepsilon y_{\varepsilon}}^{+\infty}\xi_{2}\varphi ds\right) \xi_{1}\left( y\right) +\left( s_{\varepsilon}+\int_{\varepsilon y_{\varepsilon}}^{+\infty}\xi _{1}\varphi ds\right) \xi_{2}\left( y\right) \\ & \quad -\xi_{2}\left( y\right) \int_{y}^{+\infty}\xi_{1}\varphi ds+\xi _{1}\left( y\right) \int_{y}^{+\infty}\xi_{2}\varphi ds. \end{align*}

Integrating by parts in the last two terms, we compute

\begin{align*} & -\xi_{2}\left( y\right) \int_{y}^{+\infty}\xi_{1}\varphi ds+\xi _{1}\left( y\right) \int_{y}^{+\infty}\xi_{2}\varphi ds\\ & =\xi_{2}\left( y\right) \int_{y}^{+\infty}\left( \int_{+\infty} ^{s}\varphi\left( t\right) dt\right) \xi_{1}^{\prime}\left( s\right) ds-\xi_{1}\left( y\right) \int_{y}^{+\infty}\left( \int_{+\infty} ^{s}\varphi\left( t\right) dt\right) \xi_{2}^{\prime}\left( s\right) ds. \end{align*}

The integrals can be estimated directly, and this yields

\begin{equation*} \left\Vert g\right\Vert _{C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }\leq C\left\vert s\right\vert +C\left\vert t\right\vert +C\left\Vert \varphi\right\Vert _{C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }. \end{equation*}

This completes the proof.

We define the weighted space

\begin{equation*} C_{\varepsilon\gamma,\delta}^{j,\mu}\left( \Lambda^{+}\right) :=\left\{ \left( \cosh x\right) ^{\delta}\left( \cosh y\right) ^{\varepsilon\gamma }\phi:\phi\in C^{j,\mu}\left( \Lambda^{+}\right) \right\} . \end{equation*}

With this definition at hand, we are now ready to prove the main result of this section, which establishes the existence of solutions to the Allen–Cahn equation in the outer region with suitable boundary data.

Proposition 4.2. Let $a,b\in\mathbb{R}$ satisfy

\begin{equation*} \left\vert a\right\vert +\left\vert b\right\vert \leq C\varepsilon^{\frac {21}{20}}. \end{equation*}

Assume that $h\in C^{2,\mu}\left( \mathbb{R}\right) $ satisfies

\begin{equation*} \left\Vert \frac{\left( \eta h\right) ^{\ast}}{\Gamma_{\gamma,\delta}\left( x_{1},y_{\varepsilon}\right) }\right\Vert _{C^{2,\mu}\left( \mathbb{R}_{x_{1}}\right) }\leq C\varepsilon^{\frac{11}{10}} \end{equation*}

and the orthogonality condition:

\begin{equation*} \int_{\mathbb{R}}\left( \theta\eta h\right) ^{\ast}H_{1}^{\prime}dx_{1}=0. \end{equation*}

Then, there exists solution $\left( \omega,\psi,\bar{\rho}_{\varepsilon} ,\bar{v}_{\varepsilon}\right) $ to the problem

\begin{equation*} \left\{ \begin{array} [c]{l} L\omega=K_{1},\ \text{in }\Lambda^{+},\\ \left( L_{H_{1}}\psi\right) ^{\ast}=K_{2},\ \text{for }y_{1} \gt y_{\varepsilon },\\ \omega=h-h\theta^2\ \text{and }\psi=\theta\eta h,\ \text{on }\partial\Lambda ^{+},\\ \int_{\mathbb{R}}\psi^{\ast}H_{1}^{\prime}dx_{1}=0,\ \text{for }y_{1}\geq y_{\varepsilon}.\\ \bar{\rho}_{\varepsilon}\left( y_{\varepsilon}\right) =a\ \text{and } \bar{\rho}_{\varepsilon}^{\prime}\left( y_{\varepsilon}\right) =b, \end{array} \right. \end{equation*}

with

\begin{equation*} \left\Vert \psi\right\Vert _{C_{\varepsilon\gamma,\delta}^{2,\mu}\left( \Lambda_{\left( x_{1},y_{1}\right) }^{+}\right) }\leq C\varepsilon ^{\frac{11}{10}},\ \ \left\Vert \rho\right\Vert _{C_{\gamma}^{2,\mu }\left( [\varepsilon y_{\varepsilon},+\infty)\right) }+\left\Vert v\right\Vert _{\mathcal{E}}\leq C\varepsilon^{\frac{1}{20}}. \end{equation*}

Proof. The proof will be split into three steps.

Step 1. For given $\rho,v,\psi$ with

\begin{equation*} \left\Vert \rho\right\Vert _{C_{\gamma}^{2,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }+\left\Vert v\right\Vert _{\mathcal{E}}\leq C\varepsilon^{\frac{1}{20}} \end{equation*}

and

\begin{equation*} \left\Vert \psi\right\Vert _{C_{\varepsilon\gamma,\delta}^{2,\mu}\left( \Lambda_{\left( x_{1},y_{1}\right) }^{+}\right) }\leq C\varepsilon ^{\frac{11}{10}}, \end{equation*}

we solve the following boundary value problem for the function $\omega:$

(4.8)\begin{equation} \left\{ \begin{array} [c]{l} L\omega=K_{1}\left( \omega,\psi,\bar{\rho}_{\varepsilon},\bar{v} _{\varepsilon}\right) ,\ \text{in }\Lambda^{+},\\ \omega=\left( 1-\theta^2\right) h,\ \text{on }\partial\Lambda^{+}. \end{array} \right. \end{equation}

To do this, we first consider the linear problem

(4.9)\begin{equation} \left\{ \begin{array} [c]{l} L\varphi=g,\ \text{in }\Lambda^{+},\\ \varphi=\xi,\ \text{on }\partial\Lambda^{+}. \end{array} \right. \end{equation}

For $\xi\in C^{2,\mu}\left( \mathbb{R}\right) $ and $g\in C^{0,\mu}\left( \Lambda^+\right) ,$ there exists a unique $\varphi\in C^{2,\mu}\left( \Lambda^+\right) $ that solves (4.9), with

\begin{equation*} \left\Vert \varphi\right\Vert _{C^{2,\mu}\left( \Lambda^{+}\right) }\leq C\left\Vert g\right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }+C\left\Vert \xi\right\Vert _{C^{2,\mu}\left( \mathbb{R}\right) }. \end{equation*}

The solution φ can be written as

\begin{equation*} \varphi=\mathcal{G}\left( g\right) +\mathcal{P}\left( \xi\right) . \end{equation*}

The problem (4.8) can then be transformed into a fixed point problem

(4.10)\begin{equation} \omega=\mathcal{G}\left( K_{1}\left( \omega,\psi,\bar{\rho}_{\varepsilon },\bar{v}_{\varepsilon}\right) \right) +\mathcal{P}\left( h-h\theta^2\right) . \end{equation}

We prove that the right-hand side of (4.10) is a contraction map. Observe that in the region where $\theta\neq1,$ by the exponential convergence of $\left\vert \bar{u}_{0}\right\vert $ to 1 away from the interface, we have

\begin{equation*} \left\Vert 1-\bar{u}_{0}^{2}\right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }=O\left( \varepsilon^{2}\right) . \end{equation*}

It follows that in this region,

\begin{equation*} \left\Vert L\omega-L_{\bar{u}_{0}}\omega\right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }=\left\Vert 3\left( 1-\bar{u}_{0}^{2}\right) \omega\right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }\leq C\varepsilon ^{2}\left\Vert \omega\right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }. \end{equation*}

Using this, one verifies that

\begin{equation*} \left\Vert K_{1}\left( \omega_{1}\right) -K_{1}\left( \omega_{2}\right) \right\Vert _{C^{0,\mu}\left( \Lambda^{+}\right) }\leq C\varepsilon ^{2}\left\Vert \omega_{1}-\omega_{2}\right\Vert _{C^{2,\mu}\left( \Lambda ^{+}\right) }. \end{equation*}

Hence, we get the existence of a solution $\omega=\omega\left( \bar{\rho }_{\varepsilon},\bar{v}_{\varepsilon},\psi\right) $ for (4.10).

Step 2. With the solution ω at hand, for given $\bar{\rho }_{\varepsilon},\bar{v}_{\varepsilon},$ consider the following problem for the unknown pair $\left( \psi,k\right) :$

\begin{equation*} \left\{ \begin{array} [c]{l} \left( L_{H_{1}}\psi\right) ^{\ast}=K_{2}\left( \omega,\psi,\bar{\rho }_{\varepsilon},\bar{v}_{\varepsilon}\right) -k\left( y_{1}\right) H_{1}^{\prime},\ \ y_{1} \gt y_{\varepsilon},\\ \psi=\theta\eta h\ \text{on }\partial\Lambda^{+},\ \ \\ \int_{\mathbb{R}}\psi^{\ast}H_{1}^{\prime}dx_{1}=0,\ \text{for all }y_{1}\geq y_{\varepsilon}. \end{array} \right. \end{equation*}

Recall that

\begin{align*} \Delta\psi & =\Delta_{\left( x_{1},y_{1}\right) }\psi+\left( \frac{1} {A}-1\right) \partial_{y_{1}}^{2}\psi+\frac{1}{2}\frac{\partial_{x_{1}}A} {A}\partial_{x_{1}}\psi-\frac{1}{2}\frac{\partial_{y_{1}}A}{A^{2}} \partial_{y_{1}}\psi\\ & :=\Delta_{\left( x_{1},y_{1}\right) }\psi+\mathcal{Q}\left( \psi\right) . \end{align*}

Therefore, in the $\left( x_{1},y_{1}\right) $ coordinate, the equation to be solved for ψ becomes

(4.11)\begin{equation} -\Delta_{\left( x_{1},y_{1}\right) }\psi+\left( 3H_{1}^{2}-1\right) \psi=K_{2}+\mathcal{Q}\left( \psi\right) -k\left( y_{1}\right) H_{1}^{\prime}. \end{equation}

To solve this equation, it is natural to require the right-hand side to be orthogonal to $H_{1}^{\prime}$ for each $y_{1}.$ Therefore, we set

\begin{equation*} k\left( y_{1}\right) =\frac{\int_{\mathbb{R}}\left[ K_{2}+\mathcal{Q} \left( \psi\right) \right] H_{1}^{\prime}dx_{1}}{\int_{\mathbb{R}} H_{1}^{\prime2}dx_{1}}. \end{equation*}
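With this choice of $k,$ the right-hand side of (4.11) is indeed orthogonal to $H_{1}^{\prime}$ for each fixed $y_{1}$:

\begin{equation*} \int_{\mathbb{R}}\left[ K_{2}+\mathcal{Q}\left( \psi\right) -k\left( y_{1}\right) H_{1}^{\prime}\right] H_{1}^{\prime}dx_{1}=\int_{\mathbb{R}}\left[ K_{2}+\mathcal{Q}\left( \psi\right) \right] H_{1}^{\prime}dx_{1}-k\left( y_{1}\right) \int_{\mathbb{R}}H_{1}^{\prime2}dx_{1}=0. \end{equation*}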

Using the operators G and P of lemma 3.1, we write equation (4.11) as

\begin{equation*} \psi=G\left[ K_{2}+\mathcal{Q}\left( \psi\right) -k\left( y_{1}\right) H_{1}^{\prime}\right] +P\left( \theta\eta h\right) . \end{equation*}

The right-hand side is a contraction map for ψ in the set

\begin{equation*} S_{1}:=\left\{ \phi\in C_{\varepsilon\gamma,\delta}^{2,\mu}\left( \Lambda_{\left( x_{1},y_{1}\right) }^{+}\right) :\left\Vert \phi\right\Vert \leq M_{0}\varepsilon^{\frac{11}{10}},\ \ \phi=\theta\eta h\ \text{on }\partial\Lambda^{+}\right\} , \end{equation*}

where $M_{0}$ is a fixed large constant. The contraction property can be directly verified for $\mathcal{Q}\left( \psi\right) $ and the terms $T\left( \bar{\Psi}\right) ,$ $L_{H_{1}}\psi-L_{\bar{u}_{0}}\psi,$ since all of them carry coefficients of order $O\left( \varepsilon^{\alpha}\right) $ in front of $\psi.$ For the term $L\omega-L_{\bar{u}_{0}}\omega,$ we can use the exponential decay property of ω away from the support of $K_{1},$ which is a consequence of the standard barrier construction for the constant coefficient operator $-\Delta+2$. See, for instance, lemma 3.4 and Equation (5.60) of reference [Reference del Pino, Kowalczyk, Pacard and Wei10] for this type of argument.

Step 3. Now for given $\bar{\rho}_{\varepsilon},\bar{v}_{\varepsilon},$ we have obtained $\omega=\omega\left( \bar{\rho}_{\varepsilon},\bar {v}_{\varepsilon}\right) $, $\psi=\psi\left( \bar{\rho}_{\varepsilon} ,\bar{v}_{\varepsilon}\right) $ from Step 1 and Step 2. They will be the desired solution, if it happens that k = 0.

Note that k is a function of the variable $y_{1}$. At this stage, we know that the function k actually depends on $\bar{\rho}_{\varepsilon},\bar{v}_{\varepsilon}.$ Therefore, we can write the problem k = 0 more precisely as

\begin{equation*}k(\cdot;\bar{\rho}_{\varepsilon},\bar{v}_{\varepsilon})=0. \end{equation*}

To achieve this, we multiply both sides of the equation

\begin{equation*} \left( L_{H_{1}}\psi\right) ^{\ast}=K_{2}\left( \omega,\psi,\bar{\rho }_{\varepsilon},\bar{v}_{\varepsilon}\right) -k\left( y_{1}\right) H_{1}^{\prime}, \end{equation*}

by $H_{1}^{\prime}$ and integrate over $\mathbb{R}.$ We then get the following problem to be solved for $\left( \bar{\rho}_{\varepsilon},\bar {v}_{\varepsilon}\right) :$

(4.12)\begin{equation} \left\{ \begin{array} [c]{l} \int_{\mathbb{R}}\left( L_{H_{1}}\psi\right) ^{\ast}H_{1}^{\prime} dx_{1}=\int_{\mathbb{R}}K_{2}H_{1}^{\prime}dx_{1},\ \text{for }y_{1} \gt y_{\varepsilon},\\ \bar{\rho}_{\varepsilon}\left( y_{\varepsilon}\right) =a\ \text{and } \bar{\rho}_{\varepsilon}^{\prime}\left( y_{\varepsilon}\right) =b. \end{array} \right. \end{equation}

The same computation as that of proposition 5.3 of [Reference del Pino, Kowalczyk, Pacard and Wei10] implies that the above problem can be written into the form

\begin{equation*} \left\{ \begin{array} [c]{l} \left( \rho+v\right) ^{\prime\prime}+48e^{2\sqrt{2}F}\left( \rho+v\right) =\varepsilon^{-2}Q\left( \rho,v\right) ,\ \ y \gt \varepsilon y_{\varepsilon},\\ \rho\left( \varepsilon y_{\varepsilon}\right) =a\ \text{and }\rho^{\prime }\left( \varepsilon y_{\varepsilon}\right) =\varepsilon^{-1}b. \end{array} \right. \end{equation*}

Therefore, using the operators of lemma 4.1, we can write it as

\begin{equation*} \rho+v=G_{2}\left( a,\varepsilon^{-1}b\right) +P_{2}\left( \varepsilon ^{-2}Q\left( \rho,v\right) \right) . \end{equation*}

Moreover, there holds the following estimates:

\begin{equation*} \left\Vert Q\left( \rho,v\right) \right\Vert _{C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }\leq C\varepsilon^{2+\alpha} \end{equation*}

and

\begin{align*} & \left\Vert Q\left( \rho_{1},v_{1}\right) -Q\left( \rho_{2},v_{2}\right) \right\Vert _{C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon} ,+\infty)\right) }\\ & \quad \leq C\varepsilon^{2+\alpha}\left( \left\Vert \rho_{1}-\rho_{2}\right\Vert _{C_{\gamma}^{0,\mu}\left( [\varepsilon y_{\varepsilon},+\infty)\right) }+\left\Vert v_{1}-v_{2}\right\Vert _{\mathcal{E}}\right) , \end{align*}

for some α > 0. The constant α can indeed be chosen independent of ɛ and of $\rho, v$, provided $\rho, v$ satisfy the smallness assumptions above. The reason such an α exists is that Q collects all the perturbation terms which are of size smaller than the quadratic power of ɛ, partly due to the exponential decay of the relevant functions away from the interface and along each end. More detailed computations can be found in [Reference del Pino, Kowalczyk, Pacard and Wei10]. We then finally get a solution $\left( \rho,v\right) $ by the contraction mapping principle. This finishes the proof.

5. Matching the Cauchy data and proof of the main theorem

In the previous two sections, we have constructed solutions in the inner and outer regions. In this section, we show that there exist certain boundary data, such that the Cauchy data (boundary value and its y-derivative) on the common boundary match for the inner and outer region problems, thus yielding an entire solution.

Recall that by proposition 3.4, for the inner region $\Omega$ and for a given small function h with

\begin{equation*} \int_{\mathbb{R}}\left( \eta h\right) ^{\ast}H^{\prime}dx_{1}=0, \end{equation*}

we constructed a solution, which will be denoted by $S\left( h\right) $ in this section, of the form

\begin{equation*} S\left( h\right) =u_{0}+\Psi, \end{equation*}

where $u_{0}=U_{1}-U_{2}-H$ is the approximate solution. At the boundary $y=y_{\varepsilon},$ there exists $a\in\mathbb{R}$ such that the function Ψ satisfies

\begin{equation*} \Psi=h+a\left( H^{\prime}\left( x_{1}\right) -H^{\prime}\left( x_{2}\right) \right) . \end{equation*}

We emphasize that a is determined by $h.$

On the other hand, by proposition 4.2, in the outer region $\Lambda^{+}$, for a given function $\bar{h}$ with

\begin{equation*} \int_{\mathbb{R}}\left( \theta\eta\bar{h}\right) ^{\ast}H_{1}^{\prime} dx_{1}=0, \end{equation*}

and for $\bar{a},\bar{b}\in\mathbb{R},$ suitably small, we constructed a solution, which will be denoted by $\bar{S},$ of the form

\begin{equation*} \bar{S}\left( \bar{h},\bar{a},\bar{b}\right) =\bar{u}_{0}+\left( \psi +\psi^{e}\right) \theta+\omega. \end{equation*}

Here, $\bar{u}_{0}$ is the approximate solution defined by (4.1), with $\bar{\rho}_{\varepsilon}\left( y_{\varepsilon }\right) =\bar{a}$ and $\bar{\rho}_{\varepsilon}^{\prime}\left( y_{\varepsilon}\right) =\bar{b}.$ At the boundary $y=y_{\varepsilon}$, there holds

\begin{equation*} \omega=\bar{h}-\bar{h}\theta^{2},\ \ \psi=\bar{h}\eta\theta. \end{equation*}

To emphasize the dependence of $\bar u_{0}$ on $\bar{\rho}_{\varepsilon}$, we also write $\bar u_{0}$ as $\bar u_{0}(x,y;\bar a)$. At this point, it is worth emphasizing that at the boundary $y=y_{\varepsilon}$, in view of the definition $\left( 4.1\right) $, the function $\bar{u}_0$ depends on $\bar{\rho}_{\varepsilon}$ only through its boundary value $\bar{a}=\bar{\rho}_{\varepsilon}\left( y_{\varepsilon}\right) .$

To match the boundary values for the inner and outer regions, we then need to transform these two types of boundary functions.

Lemma 5.1. There exists $\varepsilon_{0} \gt 0,$ such that for all $\varepsilon\in\left( 0,\varepsilon_{0}\right) ,$ the following two statements are true:

  1. (i) Suppose $\xi\in C^{2,\mu}\left( \mathbb{R}\right) .$ There exist $a\in\mathbb{R}$ and $h\in C^{2,\mu}\left( \mathbb{R}\right) ,$ with

    \begin{equation*} \int_{\mathbb{R}}\left( \eta h\right) ^{\ast}H^{\prime}\left( x_{1}\right) dx=0, \end{equation*}

    such that

    (5.1)\begin{equation} \xi=a\left( H^{\prime}\left( x_{1}\right) -H^{\prime}\left( x_{2}\right) \right) +h. \end{equation}
  2. (ii) Suppose $\xi\in C^{2,\mu}\left( \mathbb{R}\right) $ and

    (5.2)\begin{equation} \left\Vert \xi\right\Vert _{L^{\infty}\left( \mathbb{R}\right) }\leq C\varepsilon. \end{equation}

    Then, there exist $\bar{a}\in\mathbb{R}$ and $\bar{h}\in C^{2,\mu}\left( \mathbb{R}\right) ,$ with

    (5.3)\begin{equation} \int_{\mathbb{R}}\left( \eta\theta\bar{h}\right) ^{\ast}H^{\prime}\left( x_{1}-\bar{a}\right) dx_{1}=0, \end{equation}

    such that

    (5.4)\begin{equation} \xi=\bar{u}_{0}\left( x,y_{\varepsilon};\bar{a}\right) -\bar{u}_{0}\left( x,y_{\varepsilon};0\right) +\bar{h}. \end{equation}

Proof. To prove (5.1), we just set

\begin{align*} a & =\frac{\int_{\mathbb{R}}\eta\xi H^{\prime}\left( x_{1}\right) dx} {\int_{\mathbb{R}}\eta\left( H^{\prime}\left( x_{1}\right) -H^{\prime }\left( x_{2}\right) \right) H^{\prime}\left( x_{1}\right) dx},\\ h & =\xi-a\left( H^{\prime}\left( x_{1}\right) -H^{\prime}\left( x_{2}\right) \right) . \end{align*}

The proof of (5.4) is more complicated. For $s\in\mathbb{R},$ let $H_{1,s}$ be the function such that

\begin{equation*} H_{1,s}^{\ast}\left( x_{1}\right) =H\left( x_{1}-s\right) . \end{equation*}

According to (4.1),

\begin{equation*} \bar{u}_{0}\left( x,y_{\varepsilon};s\right) -\bar{u}_{0}\left( x,y_{\varepsilon};0\right) =\left[ H_{1,s}-H_{1,0}+\left( H_{1,s} -H_{1,0}\right) ^{e}\right] \theta. \end{equation*}

Let us define

\begin{equation*} q\left( s\right) :=\int_{\mathbb{R}}\left( \left[ H_{1,s}-H_{1,0}+\left( H_{1,s}-H_{1,0}\right) ^{e}\right] \eta\theta^{2}\right) ^{\ast}H^{\prime }\left( x_{1}-s\right) dx_{1}. \end{equation*}

When ɛ is small, $\left\vert q^{\prime}\left( 0\right) \right\vert \geq\delta \gt 0,$ for some δ independent of $\varepsilon.$ We then define $\bar{a}$ to be the number satisfying

\begin{align*} & \int_{\mathbb{R}}\left( \left[ H_{1,\bar{a}}-H_{1,0}+\left( H_{1,\bar {a}}-H_{1,0}\right) ^{e}\right] \eta\theta^{2}\right) ^{\ast}H^{\prime }\left( x_{1}-\bar{a}\right) dx_{1}\\ & \quad =\int_{\mathbb{R}}\left( \eta\theta\xi\right) ^{\ast}H^{\prime}\left( x_{1}-\bar{a}\right) dx_{1}. \end{align*}

The existence of $\bar{a}$ follows from a direct application of the implicit function theorem, provided that ξ satisfies (5.2). With this choice of $\bar{a},$ we then define

\begin{equation*} \bar{h}=\xi-\left(\bar{u}_{0}\left( x,y_{\varepsilon};\bar{a}\right) -\bar{u} _{0}\left( x,y_{\varepsilon};0\right)\right) . \end{equation*}

This is the required function.

Now, for each $h,$ consider the function

\begin{equation*} \zeta:=\left[ S\left( h\right) -u_{0}\right] |_{y=y_{\varepsilon}}. \end{equation*}

Utilizing lemma 5.1, we can find $\bar{h}$ and $\bar{a}$ satisfying (5.3) such that

\begin{equation*} \zeta\left( x\right) =\bar{u}_{0}\left( x,y_{\varepsilon};\bar{a}\right) -\bar{u}_{0}\left( x,y_{\varepsilon};0\right) +\bar{h}\left( x\right) . \end{equation*}

We then define the transition map

\begin{equation*} J\left( h\right) :=\left( \bar{h},\bar{a}\right) . \end{equation*}

With these definitions, the corresponding solutions S and $\bar{S}$ of the inner and outer regions have the same boundary values. That is,

\begin{equation*} S\left( h\right) =\bar{S}\left( J\left( h\right) ,\bar{b}\right) ,\text{ on }\partial\Lambda^{+}. \end{equation*}

In fact, we now have

\begin{align*} \begin{cases} S(h)\mid_{y=y_\varepsilon} =u_0(x,y_\varepsilon)+a\left( H'(x_1)-H'(x_2) \right) +h\\ \bar{S}\left(\bar{h},\bar{a},\bar{b}\right)\mid_{y=y_\varepsilon}=\bar{u}_0\left( x,y_\varepsilon;\bar{a} \right)+\bar{h}, \end{cases} \end{align*}

where

\begin{align*} u_0(x,y_\varepsilon)=\bar{u}_0\left( x,y_\varepsilon;0 \right). \end{align*}

Then, by the definition of $\zeta(x)$ explained above, there holds

\begin{align*} a\left( H'(x_1)-H'(x_2) \right) +h = \bar{u}_0\left( x,y_\varepsilon;\bar{a} \right)-\bar{u}_0\left( x,y_\varepsilon;0 \right)+\bar{h}. \end{align*}

As a result,

\begin{align*} S(h)=\bar{S}\left( J(h),\bar{b} \right)\ \text{on }\partial\Lambda^+. \end{align*}

For ξ with the form (5.1), we also define the operators

(5.5)\begin{equation} \Theta\left( \xi\right) =a,\ \ \ \Theta^{\perp}\left( \xi\right) =h. \end{equation}

To find an entire solution of the Allen–Cahn equation on the whole plane, we need to match the derivatives of the inner and outer solutions at their common boundary. For this purpose, we need to introduce a weight function in the following way. For δ < 0, we define

\begin{equation*} \bar{\Gamma}_{\delta}\left( x\right) :=\cosh^{\delta}x_{1}+\cosh^{\delta }x_{2}, \end{equation*}

while for δ > 0, we let

\begin{equation*} \bar{\Gamma}_{\delta}\left( x\right) :=\left( \cosh^{-\delta}x_{1}+\cosh^{-\delta}x_{2}\right) ^{-1}, \end{equation*}

where at this moment,

\begin{equation*} x_{1}=x-\tilde{f}\left( 0\right) ,\ \ x_{2}=x+\tilde{f}\left( 0\right) . \end{equation*}

Using this weight function, we then define the space

\begin{equation*} \mathcal{F}_{\delta}:\mathcal{=}\left\{ \bar{\Gamma}_{\delta}\phi:\phi\in C^{2,\mu}\left( \mathbb{R}\right) \right\} , \end{equation*}

with the norm

\begin{equation*} \left\Vert \varphi\right\Vert _{\mathcal{F}_{\delta}}=\left\Vert \frac {\varphi}{\bar{\Gamma}_{\delta}}\right\Vert _{C^{2,\mu}\left( \mathbb{R} \right) }. \end{equation*}

The next proposition states that we can first of all match the derivatives of the solutions for their components orthogonal to $H^{\prime}.$

Proposition 5.2. Let δ < 0 be a fixed constant sufficiently close to 0. For each $b\in\mathbb{R}$ with $\left\vert b\right\vert \lt C\varepsilon^{\frac{21}{20}},$ there exists $\hat{h}=\hat{h}\left( b\right) \in\mathcal{F}_{\delta},$ with

\begin{equation*}\left\Vert \hat{h}\right\Vert _{\mathcal{F}_{\delta}}\leq C\varepsilon^{\frac{11}{10}},\end{equation*}

such that

\begin{equation*} \Theta^{\perp}\left[ \partial_{y}S\left( \hat{h}\right) -\partial_{y}\bar{S}\left( J\left( \hat{h}\right) ,b\right) \right] =0,\ \text{at }y=y_{\varepsilon}, \end{equation*}

where the projection operator $\Theta^{\perp}$ is defined through (5.5).

Proof. For fixed b with $\left\vert b\right\vert \lt C\varepsilon^{\frac{21}{20}},$ we consider the map

\begin{equation*} N:h\rightarrow\Theta^{\perp}\left[ \left( \partial_{y}S\left( h\right) -\partial_{y}\bar{S}\left( J\left( h\right) ,b\right) \right) |_{y=y_{\varepsilon}}\right] . \end{equation*}

In view of proposition 3.4, if

\begin{equation*}\left\Vert h\right\Vert _{\mathcal{F}_{\delta}}\leq C\varepsilon^{\frac{11}{10}},\end{equation*}

then S(h) is well defined and the relevant norm of S(h) is bounded by $C\varepsilon^{\frac{21}{20}}$, and at the boundary $y=y_{\varepsilon}$,

\begin{equation*} \left\Vert \partial_yS(h)\right\Vert_{\mathcal{F}_{\delta}} \leq C\varepsilon^{\frac{11}{10}}. \end{equation*}

Moreover, under this assumption, we have

\begin{equation*}\left\Vert J(h)\right\Vert \leq C\varepsilon^{\frac{11}{10}}.\end{equation*}

Therefore, it follows from proposition 4.2 that $\bar{S}(J(h),b)$ is also well defined, and its norm satisfies the estimate stated in proposition 4.2. This in turn implies that the map N is well defined.

We would like to find h such that

(5.6)\begin{equation} N\left( h\right) =0. \end{equation}

For this purpose, we define the function

\begin{equation*} \tilde{u}:=H\left( x_{1}\right) -H\left( x_{2}\right) -1. \end{equation*}

This function is independent of the y variable. Consider the linearized operator

\begin{equation*} \mathcal{L}\varphi=-\varphi^{\prime\prime}+\left( 3\tilde{u}^{2}-1\right) \varphi,\varphi\in H^{2}\left( \mathbb{R}\right) . \end{equation*}

The essential spectrum of $\mathcal{L}$ is $[2,+\infty)$. This follows from the fact that $|\tilde{u}|$ converges to 1 away from its zeros. Below the essential spectrum, $\mathcal{L}$ has an eigenvalue $\lambda_{0,\varepsilon}$ close to 0 and another eigenvalue $\lambda_{1,\varepsilon}$ close to $\frac{3}{2}.$ One way to see the existence of these two eigenvalues is to use a perturbation argument, starting from the fact that the spectrum of the linearized operator around the one-dimensional heteroclinic solution H, defined by

\begin{equation*} L_H:\varphi\rightarrow -\varphi^{\prime\prime}+\left( 3H^{2}-1\right) \varphi,\varphi\in H^{2}\left( \mathbb{R}\right), \end{equation*}

has exactly two eigenvalues below 2: one is 0 and the other is $\frac{3}{2}.$ This fact is pointed out in example 1.2 of [Reference del Pino, Kowalczyk and Pacard9]. To show that $\lambda_{0,\varepsilon}$ and $\lambda_{1,\varepsilon}$ are the only eigenvalues for ɛ small, we can analyse the asymptotic behaviour of the corresponding eigenfunctions. We can prove that they are ‘localized’ around the zeros of $\tilde{u}$ and hence converge to an eigenfunction of the operator $L_{H}$.
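For the reader's convenience, we recall that these two eigenvalues of $L_{H}$ are explicit. With the normalization of the nonlinearity used here, $H\left( x\right) =\tanh\left( \frac{x}{\sqrt{2}}\right) ,$ and a direct computation gives

\begin{equation*} L_{H}H^{\prime}=0,\ \ \ L_{H}\left[ \operatorname{sech}\left( \tfrac{x}{\sqrt{2}}\right) \tanh\left( \tfrac{x}{\sqrt{2}}\right) \right] =\frac{3}{2}\operatorname{sech}\left( \tfrac{x}{\sqrt{2}}\right) \tanh\left( \tfrac{x}{\sqrt{2}}\right) , \end{equation*}

with $H^{\prime}\left( x\right) =\frac{1}{\sqrt{2}}\operatorname{sech}^{2}\left( \tfrac{x}{\sqrt{2}}\right) ;$ both eigenfunctions decay exponentially, while the essential spectrum of $L_{H}$ is $[2,+\infty).$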

Now, let $\varphi_{0}$ be an eigenfunction associated with the eigenvalue $\lambda_{0,\varepsilon}.$ We define the projection operators

\begin{equation*} \tilde{\Pi}g=\frac{\int_{\mathbb{R}}g\varphi_{0}ds}{\int_{\mathbb{R}} \varphi_{0}^{2}ds}\varphi_{0},\ \ \tilde{\Pi}^{\perp}g=g-\tilde{\Pi }g.\ \ \end{equation*}

Consider the solution W of the problem

\begin{equation*} \left\{ \begin{array} [c]{l} L_{\tilde{u}}W=0,\ \text{in }y \lt y_{\varepsilon},\\ W=\tilde{\Pi}^{\perp}h,\ \text{for }y=y_{\varepsilon},\\ W\left( x,y\right) \rightarrow0,\ \text{as }y\rightarrow-\infty. \end{array} \right. \end{equation*}

We claim

(5.7)\begin{equation} \partial_{y}W\left( h\right) -\Pi^{\perp}\left[ \partial_{y}S\left( h\right) \right] =\mathcal{M}_{1}\left( h\right) =O\left( \varepsilon ^{\frac{11}{10}}\right) ,\ \text{at }\partial\Lambda^{+}. \end{equation}

Indeed, using a decomposition similar to that of (3.19) and (3.20), W can be written as $w_{1}+w_{1}^{e},$ where

\begin{align*} L_{H\left( x_{1}\right) }w_{1} & =\left[ L_{H\left( x_{1}\right) } w_{1}-L_{\tilde{u}}w_{1}+L_{H\left( x_{2}\right) }w_{1}^{e}-L_{\tilde{u} }w_{1}^{e}\right] \eta,\\ L_{H\left( x_{2}\right) }w_{1}^{e} & =\left[ L_{H\left( x_{1}\right) }w_{1}-L_{\tilde{u}}w_{1}+L_{H\left( x_{2}\right) }w_{1}^{e}-L_{\tilde{u} }w_{1}^{e}\right] \eta^{e}. \end{align*}

Moreover, at $\partial\Lambda^{+},$ $w_1=\eta\tilde{\Pi}^\bot h$. On the other hand, the solution $S\left( h\right) =u_{0}+\phi+\phi^{e}$ satisfies

\begin{equation*} L_{U_{1}}\phi=\left[ E\left( u_{0}\right) +T\left( \Psi\right) +L_{U_{1} }\phi-L_{u_{0}}\phi+L_{U_{2}}\phi^{e}-L_{u_{0}}\phi^{e}\right] \eta. \end{equation*}

An inspection of the proof of proposition 3.2 shows that the error between $W\left( h\right) $ and $S\left( h\right) $ mainly comes from two sources: $E\left( u_{0}\right) +T\left( \Psi\right) $ and $L_{U_{1}}-L_{H\left( x_{1}\right) }.$ Both of them belong to $C_{\gamma,\delta}^{0,\mu}\left( \Omega_{\left( x_{1},y_{1}\right) }\right) $, and their norms are bounded by $C\varepsilon^{\frac{11}{10}}.$ The error also depends on $ L_{H\left( x_{1}\right) } w_{1}-L_{\tilde{u}}w_{1}$ and $L_{U_{1} }\phi-L_{u_{0}}\phi$, but these terms are smaller. The claim then follows from these estimates.

Similarly, for each fixed b with $\left\vert b\right\vert \lt C\varepsilon ^{\frac{21}{20}},$ the solution $\bar{W}$ of the problem

\begin{equation*} \left\{ \begin{array} [c]{l} L_{\tilde{u}}\bar{W}=0,\ \text{in }y \gt y_{\varepsilon},\\ \bar{W}=\tilde{\Pi}^{\perp}h,\ \text{for }y=y_{\varepsilon},\\ \bar{W}\rightarrow0,\ \text{as }y\rightarrow+\infty \end{array} \right. \end{equation*}

has the estimate

(5.8)\begin{equation} \partial_{y}\bar{W}\left( h\right) -\Pi^{\perp}\left[ \partial_{y}\bar {S}\left( J\left( h\right) ,b\right) \right] =\mathcal{M}_{2}\left( h,b\right) =O\left( \varepsilon^{\frac{11}{10}}\right) ,\ \text{at } \partial\Lambda^{+}. \end{equation}

From the estimates (5.7) and (5.8), we find that equation (5.6) can be written as

(5.9)\begin{equation} \partial_{y}W\left( h\right) -\partial_{y}\bar{W}\left( h\right) =\mathcal{M}_{1}\left( h\right) -\mathcal{M}_{2}\left( h,b\right) . \end{equation}

Let us consider the linear operator

\begin{equation*} \mathcal{N}:h\rightarrow\partial_{y}W\left( h\right) -\partial_{y}\bar {W}\left( h\right) . \end{equation*}

If we introduce the weighted Sobolev space

\begin{equation*} \mathcal{F}^*_{\delta}:=\left\{ \bar{\Gamma}_{\delta}\phi:\phi\in H^{2,2}\left( \mathbb{R}\right) \right\}, \end{equation*}
\begin{equation*} \mathcal{G}^*_{\delta}:=\left\{ \bar{\Gamma}_{\delta}\phi:\phi\in H^{1,2}\left( \mathbb{R}\right) \right\}, \end{equation*}

then $\mathcal{N}$ will be an operator from $\mathcal{F}^*_{\delta}$ to $\mathcal{G}^*_{\delta}.$ Here, δ can be positive or negative, but we will assume that its absolute value is sufficiently small. We want to show that $\mathcal{N}$ is an invertible operator. If $h\in\ker\mathcal{N},$ then $W\left( h\right) $ and $\bar{W}\left( h\right) $ patch together to yield an entire bounded solution w of the equation $L_{\tilde{u}}w=0.$ Since $|\delta|$ is chosen to be small, we find that w = 0, and hence h = 0. Therefore, $\ker\mathcal{N}=\{0\}.$ On the other hand, integration by parts tells us that

\begin{equation*} \int_{\mathbb{R}}\left[ \partial_{y}W\left( h_{1}\right) h_{2}-\partial _{y}W\left( h_{2}\right) h_{1}\right] dx=0. \end{equation*}
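For the reader's convenience, here is a minimal sketch of the computation behind this identity, assuming that $L_{\tilde{u}}$ denotes the planar linearized operator $-\Delta+\left( 3\tilde{u}^{2}-1\right) .$ Applying Green's identity on the half-plane $\left\{ y \lt y_{\varepsilon}\right\} $ to the two solutions $W\left( h_{1}\right) $ and $W\left( h_{2}\right) ,$ the zero-order terms cancel and the contribution at $y=-\infty$ vanishes by the decay of W, so that only the boundary term at $y=y_{\varepsilon}$ survives:

\begin{equation*} 0=\int_{\left\{ y \lt y_{\varepsilon}\right\} }\left[ W\left( h_{1}\right) L_{\tilde{u}}W\left( h_{2}\right) -W\left( h_{2}\right) L_{\tilde{u}}W\left( h_{1}\right) \right] dx\,dy=\int_{\mathbb{R}}\left[ \partial_{y}W\left( h_{1}\right) W\left( h_{2}\right) -\partial_{y}W\left( h_{2}\right) W\left( h_{1}\right) \right] \Big|_{y=y_{\varepsilon}}\,dx, \end{equation*}

where, on $\left\{ y=y_{\varepsilon}\right\} ,$ the boundary values are $W\left( h_{i}\right) =\tilde{\Pi}^{\perp}h_{i}.$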

This implies that ‘formally’ $\mathcal{N}$ is a symmetric operator. A more rigorous conclusion, based on a duality argument, can be stated in the following way: since the map

\begin{equation*}\mathcal{N}:\mathcal{F}^*_{-\delta} \rightarrow \mathcal{G}^*_{-\delta}\end{equation*}

is injective, its dual map

\begin{equation*}\mathcal{N}:\mathcal{F}^*_{\delta} \rightarrow \mathcal{G}^*_{\delta}\end{equation*}

is surjective. With this result at hand, one can use arguments similar to those of lemma 2.3 to conclude that the map

\begin{equation*}\mathcal{N}:\mathcal{F}_{\delta} \rightarrow \mathcal{G}_{\delta}\end{equation*}

is also surjective, and hence invertible.

Equation (5.9) then becomes

\begin{equation*} h=\mathcal{N}^{-1}\left( \mathcal{M}_{1}\left( h\right) -\mathcal{M} _{2}\left( h,b\right) \right) . \end{equation*}

Note that both $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ are contraction mappings in h on the set

\begin{equation*} \left\{ \zeta\in\mathcal{F}_{\delta}:\left\Vert \zeta\right\Vert _{\mathcal{F}_{\delta}}\leq M_{0}\varepsilon^{\frac{11}{10}}\right\} , \end{equation*}

where $M_{0}$ is a fixed large constant. This follows from the fact that the leading order of $\mathcal{M}_{1}$ is determined by $L_{U_{1}} w_{1}-L_{H\left( x_{1}\right) }w_{1}.$ We then obtain the desired solution $\hat{h}.$ This completes the proof.
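For orientation, we record a minimal sketch of this concluding fixed-point step; the map $\mathcal{T}$ below is introduced only for this sketch. Setting

\begin{equation*} \mathcal{T}\left( h\right) :=\mathcal{N}^{-1}\left( \mathcal{M}_{1}\left( h\right) -\mathcal{M}_{2}\left( h,b\right) \right) , \end{equation*}

the boundedness of $\mathcal{N}^{-1}$ and the estimates on $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ give, for h and $h^{\prime}$ in the above set and ɛ small,

\begin{equation*} \left\Vert \mathcal{T}\left( h\right) \right\Vert _{\mathcal{F}_{\delta}}\leq C\left( \left\Vert \mathcal{M}_{1}\left( h\right) \right\Vert +\left\Vert \mathcal{M}_{2}\left( h,b\right) \right\Vert \right) \leq M_{0}\varepsilon^{\frac{11}{10}},\ \ \left\Vert \mathcal{T}\left( h\right) -\mathcal{T}\left( h^{\prime}\right) \right\Vert _{\mathcal{F}_{\delta}}\leq\frac{1}{2}\left\Vert h-h^{\prime}\right\Vert _{\mathcal{F}_{\delta}}, \end{equation*}

so Banach's fixed point theorem yields a unique fixed point $\hat{h}=\hat{h}\left( b\right) $ in this set. Granted that $\mathcal{M}_{2}$ depends continuously on b, the fixed point $\hat{h}\left( b\right) $ also depends continuously on b, which is what is used in the proof of proposition 5.3 below.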

We now proceed to match the components of the derivatives parallel to $H^{\prime}$.

Proposition 5.3. There exists $b_{0}\in\mathbb{R}$ with $\left\vert b_{0} \right\vert \lt C\varepsilon^{\frac{21}{20}}$ such that

(5.10)\begin{equation} \Theta\left[ \partial_{y}S\left( \hat{h}\left( b_{0}\right) \right) -\partial_{y}\bar{S}\left( \hat{h}\left( b_{0}\right) ,b_{0}\right) \right] =0,\ \text{on }\partial\Lambda^{+}. \end{equation}

Proof. Consider the function

\begin{equation*} q\left( b\right) :=\Theta\left[ \partial_{y}S\left( \hat{h}\left( b\right) \right) -\partial_{y}\bar{S}\left( \hat{h}\left( b\right) ,b\right) \right] . \end{equation*}

We wish to prove that there exists $M_{0} \gt 0$ such that

(5.11)\begin{equation}q\left( -M_{0}\varepsilon^{\frac{21}{20}}\right) \lt 0\ \text{and }q\left( M_{0}\varepsilon^{\frac{21}{20}}\right) \gt 0. \end{equation}

Once this is proved, the existence of $b_{0}$ satisfying (5.10) will follow immediately from the continuity of q and the intermediate value theorem.

To show (5.11), it suffices to prove the following estimates on $\partial\Lambda^{+}$:

(5.12)\begin{equation} \left\vert \Theta\left[ \partial_{y}S\left( \hat{h}\left( b\right) \right) \right] \right\vert \lt C\varepsilon^{\frac{11}{10}},\ \text{if }\left\vert b\right\vert \lt \varepsilon^{\frac{21}{20}}, \end{equation}

and

(5.13)\begin{equation} \left\vert \Theta\left[ \partial_{y}\bar{S}\left( \hat{h}\left( b\right) ,b\right) \right] -b\right\vert \lt C\varepsilon^{\frac{11}{10} },\ \text{if }\left\vert b\right\vert \lt \varepsilon^{\frac{21}{20}}. \end{equation}

To see (5.12), we simply use the estimate (3.23). We now explain why (5.13) holds. Recall that, by (4.2), $\bar{S}\left( \hat{h}\left( b\right) ,b\right) $ has the decomposition

\begin{equation*}\bar{u}_{0}+\left( \psi+\psi^{e}\right) \theta+\omega. \end{equation*}

At $\partial\Lambda^{+},$ in view of the definition of $\bar{u}_{0}$ in terms of b and $\bar{\rho}_{\varepsilon},$ for $\left\vert b\right\vert \lt \varepsilon^{\frac{21}{20}}$, there holds

\begin{equation*} \left\vert \Theta\left[ \partial_{y}\bar{u}_{0}\right] -b\right\vert \leq C\varepsilon^{\frac{11}{10}}. \end{equation*}

On the other hand, by the estimates obtained in proposition 4.2, there holds

\begin{align*} \left\vert \Theta\left[ \partial_{y}\psi\right] \right\vert & \leq C\varepsilon^{\frac{11}{10}},\\ \left\vert \Theta\left[ \partial_{y}\omega\right] \right\vert & \leq C\varepsilon^{\frac{11}{10}}. \end{align*}

Combining these estimates, we obtain, at $\partial\Lambda^{+},$

\begin{equation*} \left\vert \Theta\left[ \partial_{y}\bar{S}\left( \hat{h}\left( b\right) ,b\right) \right] -b\right\vert \leq C\varepsilon^{\frac{11}{10}}. \end{equation*}

This is the required estimate (5.13). The proof of this proposition is thus completed.

We are ready to prove our main result.

Proof of theorem 1.1

With propositions 5.2 and 5.3 at hand, we deduce that the solution $S\left( \hat{h}\left( b_{0}\right) \right) $ for the inner region and the solution $\bar{S}\left( \hat{h}\left( b_{0}\right) ,b_{0}\right) $ for the outer region have the same boundary value and the same y-derivative on $\partial\Omega.$ Since they solve the same second-order elliptic equation, they patch together and yield an entire solution of the Allen–Cahn equation. This finishes the proof of our main theorem.
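For the reader's convenience, here is a minimal sketch of this patching step written in weak form; the symbol $\Sigma$ below denotes the common matching boundary (the relevant portion of $\partial\Omega$) and $\nu$ its unit normal, both introduced only for this sketch. Let u be the function equal to $S\left( \hat{h}\left( b_{0}\right) \right) $ in the inner region and to $\bar{S}\left( \hat{h}\left( b_{0}\right) ,b_{0}\right) $ outside. Since the two pieces have the same trace on $\Sigma,$ u is continuous, and integrating by parts on each region gives, for every test function $\chi\in C_{c}^{\infty}\left( \mathbb{R}^{2}\right) ,$

\begin{equation*} \int_{\mathbb{R}^{2}}\left[ \nabla u\cdot\nabla\chi-\left( u-u^{3}\right) \chi\right] dx\,dy=\int_{\Sigma}\left[ \partial_{\nu}S\left( \hat{h}\left( b_{0}\right) \right) -\partial_{\nu}\bar{S}\left( \hat{h}\left( b_{0}\right) ,b_{0}\right) \right] \chi \,d\sigma=0, \end{equation*}

by the matching of the derivatives. Hence u is a weak, and by elliptic regularity a smooth, entire solution of the Allen–Cahn equation.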

Acknowledgements

Y. Liu was supported by the National Key R&D Program of China 2022YFA1005400 and by the National Natural Science Foundation of China, grants No. 11971026 and No. 12141105.

References

Alberti, G., Ambrosio, L. and Cabre, X.. On a long-standing conjecture of E. De Giorgi: symmetry in 3D for general nonlinearities and a local minimality property. Special issue dedicated to Antonio Avantaggiati on the occasion of his 70th birthday. Acta Appl. Math. 65 (2001), 9–33.
Ambrosio, L. and Cabre, X.. Entire solutions of semilinear elliptic equations in $\mathbb{R}^{3}$ and a conjecture of De Giorgi. J. Amer. Math. Soc. 13 (2000), 725–739.
Cabre, X.. Uniqueness and stability of saddle-shaped solutions to the Allen-Cahn equation. J. Math. Pures Appl. (9) 98 (2012), 239–256.
Cabre, X. and Terra, J.. Saddle-shaped solutions of bistable diffusion equations in all of $\mathbb{R}^{2m}$. J. Eur. Math. Soc. (JEMS) 11 (2009), 819–843.
Cabre, X. and Terra, J.. Qualitative properties of saddle-shaped solutions to bistable diffusion equations. Comm. Partial Differ. Equ. 35 (2010), 1923–1957.
Chodosh, O. and Mantoulidis, C.. Minimal surfaces and the Allen-Cahn equation on 3-manifolds: index, multiplicity, and curvature estimates. Ann. Math. (2) 191 (2020), 213–328.
Chodosh, O. and Mantoulidis, C.. The p-widths of a surface. Publ. Math. IHES 137 (2023), 245–342.
Dang, H., Fife, P. and Peletier, L. A.. Saddle solutions of the bistable diffusion equation. Z. Angew. Math. Phys. 43 (1992), 984–998.
del Pino, M., Kowalczyk, M. and Pacard, F.. Moduli space theory for the Allen-Cahn equation in the plane. Trans. Amer. Math. Soc. 365 (2013), 721–766.
del Pino, M., Kowalczyk, M., Pacard, F. and Wei, J.. Multiple-end solutions to the Allen-Cahn equation in $\mathbb{R}^{2}$. J. Funct. Anal. 258 (2010), no. 2, 458–503.
del Pino, M., Kowalczyk, M. and Wei, J.. On De Giorgi’s conjecture in dimension $N\geq9$. Ann. Math. (2) 174 (2011), 1485–1569.
del Pino, M., Kowalczyk, M. and Wei, J.. Entire solutions of the Allen-Cahn equation and complete embedded minimal surfaces of finite total curvature in $\mathbb{R}^{3}$. J. Differ. Geom. 93 (2013), 67–131.
Dey, A.. A comparison of the Almgren–Pitts and the Allen–Cahn min–max theory. Geom. Funct. Anal. 32 (2022), 980–1040.
Gaspar, P. and Guaraco, M. A. M.. The Allen-Cahn equation on closed manifolds. Calc. Var. Partial Differ. Equ. 57 (2018).
Ghoussoub, N. and Gui, C.. On a conjecture of De Giorgi and some related problems. Math. Ann. 311 (1998), 481–491.
Guaraco, M. A. M.. Minmax for phase transitions and the existence of embedded minimal hypersurfaces. J. Differ. Geom. 108 (2018), 91–133.
Gui, C.. Hamiltonian identities for elliptic partial differential equations. J. Funct. Anal. 254 (2008), 904–933.
Gui, C.. Symmetry of some entire solutions to the Allen-Cahn equation in two dimensions. J. Differ. Equ. 252 (2012), 5853–5874.
Gui, C., Liu, Y. and Wei, J.. On variational characterization of four-end solutions of the Allen-Cahn equation in the plane. J. Funct. Anal. 271 (2016), 2673–2700.
Jerison, D. and Monneau, R.. Towards a counter-example to a conjecture of De Giorgi in high dimensions. Ann. Mat. Pura Appl. (4) 183 (2004), 439–467.
Karcher, H.. Embedded minimal surfaces derived from Scherk’s examples. Manuscripta Math. 62 (1988), 83–114.
Kowalczyk, M., Liu, Y. and Pacard, F.. The space of 4-ended solutions to the Allen-Cahn equation in the plane. Ann. Inst. H. Poincaré Anal. Non Linéaire 29 (2012), 761–781.
Kowalczyk, M., Liu, Y. and Pacard, F.. The classification of four-end solutions to the Allen-Cahn equation on the plane. Anal. PDE 6 (2013), 1675–1718.
Kowalczyk, M., Liu, Y., Pacard, F. and Wei, J.. End-to-end construction for the Allen-Cahn equation in the plane. Calc. Var. Partial Differ. Equ. 52 (2015), 281–302.
Liu, Y., Wang, K. and Wei, J.. Global minimizers of the Allen-Cahn equation in dimension $n\geq8$. J. Math. Pures Appl. (9) 108 (2017), 818–840.
Liu, Y. and Wei, J.. Classification of finite Morse index solutions to the elliptic sine-Gordon equation in the plane. Rev. Mat. Iberoam. 38 (2022), 355–432.
Mazzeo, R. and Pacard, F.. Constant mean curvature surfaces with Delaunay ends. Comm. Anal. Geom. 9 (2001), 169–237.
Mazzeo, R., Pacard, F. and Pollack, D.. Connected sums of constant mean curvature surfaces in Euclidean 3 space. J. Reine Angew. Math. 536 (2001), 115–165.
Pacard, F. and Wei, J.. Stable solutions of the Allen-Cahn equation in dimension 8 and minimal cones. J. Funct. Anal. 264 (2013), 1131–1167.
Savin, O.. Regularity of flat level sets in phase transitions. Ann. Math. (2) 169 (2009), 41–78.
Wang, K. and Wei, J.. Finite Morse index implies finite ends. Comm. Pure Appl. Math. 72 (2019), 1044–1119.