
The DESC stellarator code suite. Part 2. Perturbation and continuation methods

Published online by Cambridge University Press:  24 May 2023

Rory Conlin*
Affiliation:
Princeton University, Princeton, NJ 08544, USA
Daniel W. Dudt
Affiliation:
Princeton University, Princeton, NJ 08544, USA
Dario Panici
Affiliation:
Princeton University, Princeton, NJ 08544, USA
Egemen Kolemen*
Affiliation:
Princeton University, Princeton, NJ 08544, USA
*
Email addresses for correspondence: [email protected]; [email protected]

Abstract

A new perturbation and continuation method is presented for computing and analysing stellarator equilibria. The method is formally derived from a series expansion about the equilibrium condition $\boldsymbol {F} \equiv \boldsymbol {J}\times \boldsymbol {B} - \boldsymbol {\nabla } p = 0$, and an efficient algorithm for computing solutions to second- and third-order perturbations is developed. The method has been implemented in the DESC stellarator equilibrium code, using automatic differentiation to compute the required derivatives. Examples are shown demonstrating its use for computing complicated equilibria, perturbing a tokamak into a stellarator and performing parameter scans in pressure, rotational transform and boundary shape in a fraction of the time required for a full solution.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

In the search for controlled nuclear fusion, three-dimensional (3-D) magnetic confinement devices such as stellarators have been shown to have several advantages over two-dimensional (2-D) magnetic geometries such as tokamaks, including lower risk of disruption (Helander et al. Reference Helander, Beidler, Bird, Drevlak, Feng, Hatzky, Jenko, Kleiber, Proll and Turkin2012) and current-free steady-state operation (Helander Reference Helander2014). However, achieving good performance in a stellarator often requires significant optimization of the plasma equilibrium. An additional advantage is that because of the generally lower plasma current and consequently fewer instabilities, more of the design and optimization can be done computationally, without requiring the building and testing of full-scale devices (Boozer Reference Boozer2015). Despite this, designing and optimizing 3-D magnetic equilibria that have good properties is still a computationally intensive task, for which a number of codes and software packages have been developed (Hirshman & Whitson Reference Hirshman and Whitson1983a; Spong et al. Reference Spong, Hirshman, Whitson, Batchelor, Carreras, Lynch and Rome1998; Drevlak et al. Reference Drevlak, Beidler, Geiger, Helander and Turkin2019; Dudt & Kolemen Reference Dudt and Kolemen2020; Lazerson et al. Reference Lazerson, Schmitt, Zhu and Breslau2020; Landreman et al. Reference Landreman, Medasani, Wechsung, Giuliani, Jorge and Zhu2021; Hudson et al. Reference Hudson, Dewar, Hole and McGann2012).

Perturbation methods have been used heavily in tokamak plasma physics, primarily to analyse the stability of axisymmetric magnetohydrodynamic (MHD) equilibria by searching for a perturbation that minimizes the energy of the plasma, as in Bernstein's energy principle (Bernstein et al. Reference Bernstein, Frieman, Kruskal and Kulsrud1958). This has been extended to a wide range of codes for analysing fusion devices under small perturbations to an MHD equilibrium, such as the GPEC suite of codes (Park, Boozer & Glasser Reference Park, Boozer and Glasser2007; Park et al. Reference Park, Boozer, Menard, Garofalo, Schaffer, Kaye, Gerhardt and Sabbagh2009; Glasser Reference Glasser2016; Glasser, Kolemen & Glasser Reference Glasser, Kolemen and Glasser2018) for analysing tokamak configurations under 3-D perturbations. A common feature of perturbation methods is using local approximation methods (commonly Taylor series) to examine solutions near some equilibrium in an infinitesimal limit. In the present work, we extend this to include finite-magnitude perturbations, and extend the approximation to second and higher order to achieve increased accuracy for finite step sizes. While a perturbation usually refers to a single step in parameter space, a sequence of perturbations can be combined into a continuation method to further explore the phase space: a single perturbation step approximates a nearby solution, and the Taylor approximation is recomputed at the new point before performing another finite-size step, and so on.

Continuation methods have received less attention in the fusion community, though they have seen extensive use in other fields (e.g. Richter & DeCarlo Reference Richter and DeCarlo1983; Howell Reference Howell2009). For the purposes of the present work, continuation methods can be used to solve parameterized equations of the form $F(x, \eta )=0$, where we identify $x$ as the solution vector and $\eta$ as a continuation parameter. When $\eta$ is varied we find a family of solutions connected in parameter space, as well as possible branches and bifurcations of this family, indicated by points where the Jacobian of $F$ is singular. Starting from the initial value $(x_0, \eta _0)$ where $F$ is zero, we continuously vary $\eta$ while simultaneously varying $x$ such that the equation $F(x,\eta )=0$ is satisfied. Various methods exist for finding the solution curve (the locus of points $(x, \eta )$ where $F(x,\eta )=0$ is satisfied) such as piecewise linear (simplex) continuation and pseudo-arclength methods (Allgower & Georg Reference Allgower and Georg1990). In an abstract sense, the standard multigrid method for solving partial differential equations can be thought of as a discrete continuation method, where the continuation parameter $\eta$ governs the level of numerical resolution (i.e. grid spacing, number of basis functions, etc.).
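As a concrete illustration (a toy scalar problem, not the DESC implementation), the simplest 'natural parameter' continuation steps $\eta$ and re-converges $x$ with Newton iterations at each step:

```python
import numpy as np

def F(x, eta):
    # toy parameterized residual with solution branch x = sqrt(eta)
    return x**2 - eta

def dF_dx(x, eta):
    # Jacobian (here a scalar) of F with respect to x
    return 2.0 * x

def natural_continuation(x, etas, newton_iters=8):
    """Track a solution branch of F(x, eta) = 0 as eta is stepped."""
    branch = []
    for eta in etas:
        # correct x back onto the solution curve at the new eta
        for _ in range(newton_iters):
            x = x - F(x, eta) / dF_dx(x, eta)
        branch.append(x)
    return branch

# follow the branch from (x, eta) = (1, 1) up to eta = 4
branch = natural_continuation(1.0, np.linspace(1.0, 4.0, 7))
```

Here the previous solution serves as the initial guess at each new $\eta$, which is what keeps each Newton solve cheap and well behaved.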

In general, existing codes for computing stellarator equilibria (Hirshman & Whitson Reference Hirshman and Whitson1983a; Hudson et al. Reference Hudson, Dewar, Hole and McGann2012) only find discrete equilibria, and solving for a new equilibrium requires running the code from scratch (possibly with a different starting guess). Several questions that may then be asked are:

  1. (i) Can these equilibria be computed more efficiently?

  2. (ii) Given a single equilibrium solution, can we find other similar solutions?

  3. (iii) What does the full phase space of 3-D MHD equilibria look like?

This is Part 2 of a three-part series of papers on the DESC stellarator optimization code suite. Part 1 details the DESC equilibrium solver in comparison with the VMEC (Hirshman & Whitson Reference Hirshman and Whitson1983b) 3-D MHD equilibrium code. Computing 3-D MHD equilibria is the preliminary step in stellarator analysis, so computing these equilibria quickly and accurately is important for further studies. In this paper, we describe a new continuation method for computing complicated stellarator equilibria using perturbations that attempts to resolve these questions. In § 2 we describe the DESC code and why it is the ideal code for implementation of the methods described in this paper, while in § 3 we describe the mathematical background to the perturbation method. In § 4 we demonstrate how to use these perturbations in a continuation method for computing stellarator equilibria. In § 5 we demonstrate other applications of the perturbation method for computing and analysing stellarator equilibria. Part 3 (Dudt et al. Reference Dudt, Conlin, Panici and Kolemen2022) presents DESC's unique stellarator optimization capabilities made possible by the efficient equilibrium solver and the perturbation method, resulting in orders-of-magnitude speed-up in optimization. These advantages are shown in the context of quasi-symmetry optimization, where results are compared with conventional tools (Spong et al. Reference Spong, Hirshman, Whitson, Batchelor, Carreras, Lynch and Rome1998). Three different quasi-symmetry objective formulations are also shown, with the relative advantages of each compared, highlighting the flexibility of DESC as an optimization code.

2. The DESC code

DESC is a recently developed (Dudt & Kolemen Reference Dudt and Kolemen2020) pseudospectral code for computing 3-D MHD equilibria. DESC computes 3-D MHD equilibria by solving the force balance equations $\boldsymbol {J} \times \boldsymbol {B} - \boldsymbol {\nabla } p = 0$ as opposed to the more common variational method which minimizes the MHD energy $\int _V (B^2/2\mu _0 + p) \,{\rm d}V$. The independent variables in the equation are the positions of the flux surfaces $(R,Z)$ as well as the stream function $\lambda$, defined as the difference between the boundary poloidal angle $\theta$ and the straight-field-line poloidal angle $\vartheta = \theta + \lambda$. These quantities are discretized in a Fourier–Zernike basis, using a Fourier series in the toroidal direction and Zernike polynomials in the poloidal/radial directions. After discretization, the equilibrium equation is expressed as a set of nonlinear algebraic equations $\boldsymbol {f}(\boldsymbol {x}, \boldsymbol {c}) = 0$, where $\boldsymbol {x}$ is a vector containing the spectral coefficients of $R$, $Z$ and $\lambda$, while $\boldsymbol {c}$ contains fixed parameters that define the equilibrium problem, such as the pressure and rotational transform profiles and the fixed boundary shape.

Since its original publication (Dudt & Kolemen Reference Dudt and Kolemen2020), DESC has undergone a major upgrade that involved porting it from MATLAB to Python in order to take advantage of the JAX library (Bradbury et al. Reference Bradbury, Frostig, Hawkins, Johnson, Leary, Maclaurin, Necula, Paszke, VanderPlas and Wanderman-Milne2018) for automatic differentiation and just-in-time compilation. Initially developed for machine learning applications, JAX provides a NumPy-like (Van Der Walt, Colbert & Varoquaux Reference Van Der Walt, Colbert and Varoquaux2011; Harris et al. Reference Harris, Millman, van der Walt, Gommers, Virtanen, Cournapeau, Wieser, Taylor, Berg and Smith2020) application programming interface for common mathematical operations and allows arbitrary functions to be differentiated using forward- or reverse-mode automatic differentiation. This allows the calculation of exact derivatives of the objective function automatically, rather than having to code them by hand, which is time-consuming and error-prone, or using finite differences, which are computationally expensive and can be inaccurate. It is also much more flexible, as new objective functions can be added and optimized by defining only the forward pass, which is of great use in stellarator optimization where new objectives may be added in the future. JAX also allows for just-in-time compiling of code to both central and graphics processing units, which significantly speeds up calculation, approaching speeds of traditional compiled languages and avoiding one of the primary limitations of Python for scientific computing. Additionally, given that the vast majority of new supercomputers heavily leverage graphics processing units, and this trend is likely to continue, using JAX allows DESC to take full advantage of all the compute capability available, rather than being limited to central-processing-unit-only parallelization like many legacy codes. This allows a ‘best of both worlds’ approach where the code is easy to use, maintain, adapt and upgrade, while still being fast enough for production applications.
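For instance, a Jacobian-vector product of a vector-valued function can be computed in forward mode with `jax.jvp` without ever forming the full Jacobian (the function `f` here is purely illustrative, not the DESC objective):

```python
import jax
import jax.numpy as jnp

def f(x):
    # toy vector-valued residual function
    return jnp.array([x[0]**2 + x[1], jnp.sin(x[1])])

x = jnp.array([1.0, 0.0])   # point at which to differentiate
v = jnp.array([1.0, 2.0])   # direction for the directional derivative

# forward-mode Jacobian-vector product: (df/dx) @ v, no Jacobian assembled
_, jvp_val = jax.jvp(f, (x,), (v,))

# the full Jacobian is also available via forward-mode autodiff
J = jax.jacfwd(f)(x)
```

The same function can be pushed through `jax.jit` unchanged, which is how DESC obtains compiled-language speeds from Python source.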

Several aspects also make DESC the ideal code for implementing the perturbation method outlined in § 3:

  1. (i) Using a pseudospectral discretization significantly reduces the number of independent variables, resulting in smaller Jacobian matrices.

  2. (ii) Using JAX allows fast and accurate computation of the required derivatives and Jacobian–vector products.

  3. (iii) Formulating the problem as a system of nonlinear equations and solving them in a least-squares sense also effectively gives an extra order of derivative for free.

This last point can be seen by considering the least-squares problem

(2.1)\begin{equation} \min_{\boldsymbol{x}} y(\boldsymbol{x}) \equiv \frac{1}{2} \boldsymbol{f}(\boldsymbol{x})^{\rm T} \boldsymbol{f}(\boldsymbol{x}), \end{equation}

where $\boldsymbol {f}$ is a vector valued function and $y$ is the sum of squares of the residuals of $\boldsymbol {f}$. The gradient of $y$ is given by

(2.2)\begin{equation} \frac{{\rm d} y}{{\rm d} \boldsymbol{x}} = \boldsymbol{f}^{\rm T} \frac{{\rm d} \boldsymbol{f}}{{\rm d} \boldsymbol{x}}, \end{equation}

and its Hessian (matrix of second partial derivatives) is given by

(2.3)\begin{equation} \frac{{\rm d}^2y}{{\rm d} \boldsymbol{x}^2} = \frac{{\rm d} \boldsymbol{f}}{{\rm d} \boldsymbol{x}}^{\rm T} \frac{{\rm d} \boldsymbol{f}}{{\rm d} \boldsymbol{x}} + \boldsymbol{f}^{\rm T} \frac{{\rm d}^2 \boldsymbol{f}}{{\rm d} \boldsymbol{x}^2}. \end{equation}

Given that $\boldsymbol {f}$ is the function we are trying to minimize in the least-squares sense, we can generally assume that the second term in the Hessian is negligible compared with the first, the so-called ‘small residual approximation’ (Nocedal & Wright Reference Nocedal and Wright2006). This gives an approximate Hessian ${{\rm d}^2 y}/{{\rm d} \boldsymbol {x}^2} \sim ({{\rm d} \boldsymbol {f}}/{{\rm d} \boldsymbol {x}}^{\rm T}) ({{\rm d} \boldsymbol {f}}/{{\rm d} \boldsymbol {x}})$, meaning that using only first-derivative information about $\boldsymbol {f}$ gives us both first- and second-derivative information about $y$. In addition, the approximate Hessian that this gives is always positive semi-definite, leading to a convex subproblem which is easy to solve.
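A small numerical check of this approximation (with a toy residual, not the DESC force balance) shows the gradient (2.2) and the positive semi-definite Gauss–Newton Hessian built from first derivatives only:

```python
import numpy as np

def f(x):
    # toy residual vector f: R^2 -> R^3
    return np.array([x[0] - 1.0, x[1] - 2.0, x[0] * x[1]])

def jac(x):
    # analytic Jacobian df/dx of the toy residual
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1], x[0]]])

x = np.array([0.5, 0.5])
J = jac(x)
grad = J.T @ f(x)   # exact gradient of y = (1/2) f^T f, as in (2.2)
H_gn = J.T @ J      # small-residual approximate Hessian, first term of (2.3)

# the approximate Hessian is symmetric positive semi-definite by construction
eigs = np.linalg.eigvalsh(H_gn)
```

Since `H_gn` is always positive semi-definite, the resulting quadratic subproblem is convex regardless of how far the iterate is from a minimum, which is not true of the exact Hessian.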

3. Perturbations

A general fixed-boundary equilibrium problem can be described by a set of parameters $\boldsymbol {c} = \{R_b, Z_b, p, \iota, \varPsi \}$, where $R_b, Z_b$ are the $R,Z$ coordinates of the boundary surface, $p$ is the pressure profile, $\iota$ is the rotational transform and $\varPsi$ is the total toroidal flux through the torus. In many spectral equilibrium codes (Hirshman & Whitson Reference Hirshman and Whitson1983a; Dudt & Kolemen Reference Dudt and Kolemen2020), the independent variables that define the equilibrium can be given by $\boldsymbol {x} = [R_{lmn},Z_{lmn},\lambda _{lmn}]$, where $R_{lmn}$, $Z_{lmn}$ and $\lambda _{lmn}$ are spectral coefficients of the flux surface positions and the poloidal stream function (indexed by $l$ in the radial direction, $m$ in the poloidal direction and $n$ in the toroidal direction). The condition of MHD equilibrium can then be written as a (possibly vector-valued) nonlinear algebraic equation involving the fixed parameters and independent variables, $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c})=0$. The function $f$ is the discretized form of the general MHD force balance $\boldsymbol {J} \times \boldsymbol {B} - \boldsymbol {\nabla } p = 0$ (Dudt & Kolemen Reference Dudt and Kolemen2020), or a condition on the gradient of the energy functional $W = \int _V (B^2/2\mu _0 + p) \,{\rm d}V$ (Hirshman & Whitson Reference Hirshman and Whitson1983a).

Given a set of parameters $\boldsymbol {c}$ and a solution vector $\boldsymbol {x}$ which satisfy $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c}) = 0$, we wish to find how the equilibrium would change if the parameters $\boldsymbol {c}$ are perturbed to $\boldsymbol {c} + \Delta \boldsymbol {c}$. For instance, we may have a solution for a vacuum equilibrium and want to see how adding finite pressure changes it, or we may start with a 2-D tokamak solution and add a 3-D perturbation to the boundary shape to form a stellarator.

We assume that the new equilibrium is given by $\boldsymbol {x} + \Delta \boldsymbol {x}$, and expand $\boldsymbol {f}$ in a Taylor series:

(3.1)\begin{gather} \boldsymbol{f}(\boldsymbol{x}+\Delta \boldsymbol{x},\boldsymbol{c}+ \Delta \boldsymbol{c}) = \boldsymbol{f}(\boldsymbol{x},\boldsymbol{c}) + \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}\Delta \boldsymbol{x} + \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{c}}\Delta \boldsymbol{c} + \frac{1}{2}\frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}^2}\Delta \boldsymbol{x}\Delta \boldsymbol{x}\nonumber\\ +\, \frac{1}{2} \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{c}^2}\Delta \boldsymbol{c}\Delta \boldsymbol{c} + \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}\partial \boldsymbol{c}}\Delta \boldsymbol{x}\Delta \boldsymbol{c} + O(\Delta x^3) . \end{gather}

We wish to solve this equation for $\Delta \boldsymbol {x}$ such that $\boldsymbol {f}(\boldsymbol {x}+\Delta \boldsymbol {x},\boldsymbol {c}+ \Delta \boldsymbol {c}) = 0$. At first order this is a straightforward algebraic equation, but at higher orders it becomes a tensor polynomial equation which can be difficult or impossible to solve efficiently (for example, if $\boldsymbol {f}$ is a vector-valued function, just storing the second-derivative tensor in memory could require upwards of 100 GB). Instead of seeking a direct solution, we can try a perturbative approach, where we introduce an arbitrary small parameter $\epsilon$ and further expand $\Delta \boldsymbol {x}$ and $\Delta \boldsymbol {c}$ in a perturbation series in powers of $\epsilon$ (we assume that $\Delta \boldsymbol {c}$ is known a priori and so only a first-order term is required):

(3.2)\begin{gather} \Delta \boldsymbol{x} = \epsilon \boldsymbol{x}_1 + \epsilon^2 \boldsymbol{x}_2 + \cdots , \end{gather}
(3.3)\begin{gather}\Delta \boldsymbol{c} = \epsilon \boldsymbol{c}_1. \end{gather}

Plugging this into (3.1) (and setting $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c}) = \boldsymbol {f}(\boldsymbol {x}+\Delta \boldsymbol {x}, \boldsymbol {c} + \Delta \boldsymbol {c})=0$) we get

(3.4)\begin{gather} 0 = \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}(\epsilon \boldsymbol{x}_1 + \epsilon^2 \boldsymbol{x}_2) + \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{c}}\epsilon \boldsymbol{c}_1 + \frac{1}{2} \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}^2}(\epsilon \boldsymbol{x}_1 + \epsilon^2 \boldsymbol{x}_2)(\epsilon \boldsymbol{x}_1 + \epsilon^2 \boldsymbol{x}_2) \nonumber\\ + \frac{1}{2} \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{c}^2}\epsilon \boldsymbol{c}_1 \epsilon \boldsymbol{c}_1 + \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}\partial \boldsymbol{c}}(\epsilon \boldsymbol{x}_1 + \epsilon^2 \boldsymbol{x}_2) \epsilon \boldsymbol{c}_1 + O(\Delta x^3) . \end{gather}

We can then collect powers of $\epsilon$ and set each order of $\epsilon$ to zero in turn. The first-order equation gives

(3.5)\begin{gather} {0} =\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}\epsilon \boldsymbol{x}_1 + \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{c}}\epsilon \boldsymbol{c}_1 , \end{gather}
(3.6)\begin{gather}\boldsymbol{x}_1 ={-}\left(\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}\right)^{{-}1} \left( \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{c}} \boldsymbol{c}_1 \right). \end{gather}

The second-order term gives

(3.7)\begin{gather} {0} = \frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}\epsilon^2 \boldsymbol{x}_2 + \frac{1}{2}\frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}^2}\epsilon^2 \boldsymbol{x}_1 \boldsymbol{x}_1 + \frac{1}{2}\frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{c}^2}\epsilon^2 \boldsymbol{c}_1 \boldsymbol{c}_1 + \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}\partial \boldsymbol{c}}\epsilon^2 \boldsymbol{x}_1 \boldsymbol{c}_1 , \end{gather}
(3.8)\begin{gather}\boldsymbol{x}_2 ={-}\left(\frac{\partial \boldsymbol{f}}{\partial \boldsymbol{x}}\right)^{{-}1} \left( \frac{1}{2}\frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}^2} \boldsymbol{x}_1 \boldsymbol{x}_1 + \frac{1}{2}\frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{c}^2} \boldsymbol{c}_1 \boldsymbol{c}_1 + \frac{\partial^2 \boldsymbol{f}}{\partial \boldsymbol{x}\partial \boldsymbol{c}} \boldsymbol{x}_1 \boldsymbol{c}_1 \right) . \end{gather}

In general, the second-derivative terms will be large, dense, rank-3 tensors (recall that $\boldsymbol {f}$ can be a vector-valued function), which may be extremely expensive to compute and may not even fit in memory for high-resolution cases. However, it is important to note that the full second derivatives are never needed (nor indeed the full first derivatives, apart from the Jacobian matrix $\partial \boldsymbol {f}/\partial \boldsymbol {x}$). All that is needed are directional derivatives, or Jacobian-vector products. In the DESC code (§ 2) these are calculated using forward-mode automatic differentiation at a cost only a few times greater than that of a single evaluation of $\boldsymbol {f}$. These terms could also be approximated using finite differences at a similar cost. This means that at each order the most expensive operation is solving a linear system of the form

(3.9)\begin{equation} \boldsymbol{J} \boldsymbol{x}_i = \boldsymbol{b}, \end{equation}

where $\boldsymbol {J} \equiv \partial \boldsymbol {f} / \partial \boldsymbol {x}$ is the Jacobian matrix and is the same at each order, so only needs to be decomposed once, and $i$ is the order of the term in the perturbation series. Because of these computation and memory savings, the method has been extended to third order in the DESC code (see § 2), though in most situations first- or second-order perturbations give acceptable results (see figure 2).
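A minimal sketch of the first- and second-order steps (3.6) and (3.8) for a toy problem follows, using central finite differences for the directional derivatives (DESC uses forward-mode automatic differentiation instead; the function `f` and its exact solution are illustrative only):

```python
import numpy as np

def f(x, c):
    # toy 'equilibrium' condition with exact solution x = c**2
    return x - c**2

def dirder(g, h=1e-4):
    # central-difference directional derivative of g(s) at s = 0
    return (g(h) - g(-h)) / (2.0 * h)

def perturb(x, c, dc, h=1e-3):
    """First- and second-order perturbation steps, eqs. (3.6) and (3.8)."""
    n = x.size
    # Jacobian df/dx, one column per unit direction
    J = np.stack([dirder(lambda s: f(x + s * e, c)) for e in np.eye(n)], axis=1)
    # (df/dc) c1 as a single directional derivative
    dfdc_c1 = dirder(lambda s: f(x, c + s * dc))
    x1 = np.linalg.solve(J, -dfdc_c1)                     # eq. (3.6)
    # second-order directional derivatives via central differences
    f0 = f(x, c)
    d2f_cc = (f(x, c + h * dc) - 2 * f0 + f(x, c - h * dc)) / h**2
    d2f_xx = (f(x + h * x1, c) - 2 * f0 + f(x - h * x1, c)) / h**2
    d2f_xc = (f(x + h * x1, c + h * dc) - f(x + h * x1, c - h * dc)
              - f(x - h * x1, c + h * dc) + f(x - h * x1, c - h * dc)) / (4 * h**2)
    # the same Jacobian J is reused; in practice it is factorized only once
    x2 = np.linalg.solve(J, -(0.5 * d2f_cc + 0.5 * d2f_xx + d2f_xc))  # eq. (3.8)
    return x1, x2

c = np.array([1.0, 2.0])
x = c**2                      # start exactly on an equilibrium, f(x, c) = 0
dc = np.array([0.1, -0.2])
x1, x2 = perturb(x, c, dc)
# x + x1 + x2 approximates the perturbed equilibrium (c + dc)**2
```

Note that only directional second derivatives along the known directions $\boldsymbol{x}_1$ and $\Delta\boldsymbol{c}$ are ever evaluated, so the rank-3 tensors discussed above never need to be formed.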

It is instructive to note that the first-order term in the perturbation series for $\Delta \boldsymbol {x}$ is effectively the Newton step, while the second-order term is commonly referred to as the Halley step in the optimization literature (Gander Reference Gander1985; Gundersen & Steihaug Reference Gundersen and Steihaug2010). While Newton's method converges quadratically (Nocedal & Wright Reference Nocedal and Wright2006) when started sufficiently close to a solution, if started far away it may diverge, so in practice the method must be globalized using either a line search or a trust region framework. Although we assume that we begin near or at an equilibrium so that $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c}) = 0$, depending on the size of the perturbation the solution landscape may change such that this is no longer the case, and in practice we have found several cases where the unconstrained Newton step causes the solution to diverge. The approach also carries an inherent constraint: the second-order term should be smaller than the first-order term by a factor $\epsilon \ll 1$.

This suggests that a natural way to enforce these conditions and ensure reasonable convergence properties is to use a trust region method. When expanding the objective function in a Taylor series we are effectively approximating it by a linear or quadratic function. Instead of seeking the global minimizer to these model functions, the trust region approach instead recognizes that the Taylor approximation is only valid in some small neighbourhood and restricts the step size accordingly. Mathematically, instead of finding the exact solution $\boldsymbol {x}_i^*$ to the linear system

(3.10)\begin{equation} \boldsymbol{J} \boldsymbol{x}_i^* = \boldsymbol{b}, \end{equation}

we instead seek a solution to the following optimization problem:

(3.11)\begin{equation} \min_{\boldsymbol{x}_i} \|\boldsymbol{J} \boldsymbol{x}_i - \boldsymbol{b}\|^2 \quad {\rm s.t.}\ \|\boldsymbol{x}_i\| \le r, \end{equation}

where $r$ is the radius of the trust region. This subproblem has two possible solutions (Nocedal & Wright Reference Nocedal and Wright2006): either the true solution $\boldsymbol {x}_i^*$ lies within the trust region, or else there is a scalar $\alpha > 0$ such that

(3.12)\begin{equation} (\boldsymbol{J} + \alpha I)\boldsymbol{x}_i = \boldsymbol{b}, \quad \|\boldsymbol{x}_i\|=r. \end{equation}

The constrained value for $\boldsymbol {x}_i$ can be efficiently found using a one-dimensional root-finding method in $\alpha$. In the DESC code we use a bracketed Newton method. The cost of solving this subproblem is roughly the cost of one singular value decomposition of the Jacobian $\boldsymbol {J}$, or two to three Cholesky factorizations of the approximate Hessian matrix $\boldsymbol {B} \equiv \boldsymbol {J}^{\rm T} \boldsymbol {J}$.
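A minimal dense-matrix sketch of this subproblem follows, using bisection in $\alpha$ rather than DESC's bracketed Newton method, and assuming a square positive-definite $\boldsymbol{J}$ for simplicity (so that $\|\boldsymbol{x}_i(\alpha)\|$ decreases monotonically in $\alpha$):

```python
import numpy as np

def trust_region_step(J, b, r, tol=1e-12):
    """Solve min ||J x - b|| s.t. ||x|| <= r for square positive-definite J."""
    x = np.linalg.solve(J, b)
    if np.linalg.norm(x) <= r:
        return x                              # interior solution, eq. (3.10)
    n = J.shape[0]
    step = lambda a: np.linalg.solve(J + a * np.eye(n), b)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step(hi)) > r:       # bracket the root of ||x(a)|| = r
        hi *= 2.0
    while hi - lo > tol:                      # bisection in alpha, eq. (3.12)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(step(mid)) > r else (lo, mid)
    return step(hi)

J = np.eye(2)
b = np.array([3.0, 4.0])
x = trust_region_step(J, b, r=1.0)
# the constrained step has norm r; here it scales b down onto the boundary
```

Each trial value of $\alpha$ requires one linear solve, which is why the overall cost is comparable to a single matrix factorization plus a handful of cheap re-solves.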

Selection of the trust region radius $r$ is traditionally done adaptively within an optimization loop, by comparing the actual reduction in the residual at each step with that predicted by the model function. In a perturbation, only a single ‘step’ is taken, and we have found empirically that setting the trust region at a small fraction of $\|\boldsymbol {x}\|$ generally gives good results, typically $r=0.1 \|\boldsymbol {x}\|$. For second- and higher-order perturbations, such a large trust region can lead to inaccurate results. This can be understood by considering the expansion made in (3.2), where it was implicitly assumed that the second-order correction was a factor of $\epsilon$ smaller than the first-order term. Using a large trust region for the first-order term ($r=0.1 \|\boldsymbol {x}\|$) and a smaller trust region for the higher-order terms correctly enforces this assumed scaling. In practice we obtained good performance using $r_i=0.1 \|\boldsymbol {x}_{i-1}\|$, so the second-order term is an order of magnitude less than the first-order term and so on for higher orders.

4. Continuation method

Many MHD equilibrium codes (Hirshman & Whitson Reference Hirshman and Whitson1983a; Hudson et al. Reference Hudson, Dewar, Hole and McGann2012) use a multigrid approach, where an initial guess is specified on a coarse grid, and the error is minimized on that grid before being interpolated to a finer grid and re-solved, continuing until the finest resolution level has been reached. This is done both to speed computation by doing more calculations on coarser grids, and also to make it more robust to poor initial guesses and avoid additional saddle points that can appear in high-dimensional spaces (Dauphin et al. Reference Dauphin, Pascanu, Gulcehre, Cho, Ganguli and Bengio2014). DESC offers this as well, allowing the resolution to be varied in the radial, poloidal and toroidal directions independently.

In addition to the standard multigrid approach, DESC has also implemented a new continuation method using the perturbation techniques described in § 3. In general, continuation methods seek to find how the solution to an equation varies as parameters are changed. Using the notation of § 3 we can view $\boldsymbol {x}$ to be an implicit function of $\boldsymbol {c}$, related by the constraint equation $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c})=0$, and try to find the map that relates $\boldsymbol {x}$ and $\boldsymbol {c}$ along lines of $\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c})=0$.

In this work, we note two properties that make continuation methods especially relevant:

  1. (i) Two-dimensional (axisymmetric) equilibria are much easier to compute than 3-D equilibria, due to the guaranteed existence of flux surfaces in axisymmetric equilibria, and the significant reduction in the number of variables needed to represent the solution.

  2. (ii) Vacuum (vanishing beta) equilibria are easier to compute than finite-pressure equilibria, due to the lack of Shafranov shift.

We can take advantage of these properties to compute complicated stellarator equilibria by first solving an ‘easy’ problem, and then introduce a continuation parameter that transforms our initial solution into the solution to a ‘hard’ problem.

To do this, we introduce two scalar continuation parameters: $\eta _b$, a multiplier on all of the non-axisymmetric boundary modes, and $\eta _p$, a scaling factor for the pressure profile. Setting $\eta _b=0$ would give a 2-D axisymmetric boundary that is ‘close’ to the desired 3-D equilibrium, while setting $\eta _p=0$ would give the zero pressure equilibrium with the desired boundary shape. By varying these two parameters, we find a ‘family’ of solutions that are connected continuously.

To vary the parameters, we apply a sequence of perturbation steps as outlined in § 3. With each perturbation we take a small step in the desired direction in parameter space (e.g. varying $\boldsymbol {c} = \{\eta _b, \eta _p\}$). Depending on the step size $\Delta \boldsymbol {c}$ it may be necessary to refine the solution with a small number (2–5) of Newton iterations of (2.1) to ensure a good equilibrium is reached. We then re-linearize about the new position and perform the next step, until the final desired parameters are reached.
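Schematically (with hypothetical `perturb` and `newton_refine` callables standing in for DESC's internals), one continuation pass looks like:

```python
import numpy as np

def continuation(x, c, c_target, n_steps, perturb, newton_refine):
    """Walk from parameters c to c_target in fixed-size perturbation steps."""
    dc = (c_target - c) / n_steps
    for _ in range(n_steps):
        x = x + perturb(x, c, dc)    # Taylor step from the current solution
        c = c + dc
        x = newton_refine(x, c)      # a few iterations of (2.1) to re-converge
    return x

# toy stand-ins: f(x, c) = x - c**2, so the solution branch is x = c**2
perturb = lambda x, c, dc: 2.0 * c * dc        # first-order step, as in (3.6)
newton_refine = lambda x, c: x - (x - c**2)    # one exact Newton correction
x = continuation(np.array([1.0]), np.array([1.0]), np.array([3.0]), 4,
                 perturb, newton_refine)
# x lands on the branch at c = 3
```

The re-linearization at each step is what distinguishes this from a single large perturbation: the Taylor expansion is always taken about a point that is (approximately) an equilibrium.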

A standard method for solving continuation problems, known as ‘natural parameter continuation’, would be analogous to a zeroth-order perturbation followed by several Newton iterations of (2.1), while a first-order perturbation would be similar to Gauss–Newton continuation. The higher-order perturbations discussed above do not seem to have been explored in the more general continuation method literature, but can be seen as a version of Halley's method applied to the combined system $\{\boldsymbol {f}(\boldsymbol {x},\boldsymbol {c})=0, \boldsymbol {c} - \boldsymbol {c}_{{\rm desired}}=0\}$.

In most traditional continuation methods, the step size in the continuation parameter is determined adaptively based on local error estimates and the ratio of predicted to achieved error reduction. In practice we have found that this is often not necessary, and fixing the step sizes a priori provides sufficiently accurate results. In most cases going from a zero-pressure equilibrium to a moderate beta of ${\sim }3\,\%$ can be done in a single step, and boundary perturbations in anywhere from one to four steps, depending on the desired accuracy and complexity of the boundary. This does require the user to specify the desired perturbation steps explicitly, though a future upgrade to the code is planned to allow adaptive perturbations where the user need only supply an initial guess and the final desired parameters.

As a first example (figure 1), we solve for a zero-pressure heliotron by first solving for a simple circular tokamak, then applying a 3-D perturbation to the boundary. After the perturbation, a small number of regular Newton iterations of (2.1) are applied to ensure convergence. As shown in figure 1 and table 1, the first-order perturbation step captures the dominant effects of the shaping, while the second-order correction has a smaller effect.

Figure 1. Comparison of perturbed tokamak equilibrium versus true solution for a heliotron-like stellarator. The initial tokamak (grey) is axisymmetric. By applying a perturbation (first order in red, second order in green) to the non-axisymmetric boundary modes, we obtain an approximation to the true 3-D solution (blue), obtained by solving from the start with the full 3-D boundary shape. The first-order perturbation captures the majority of the differences between the initial and true solutions. The second-order effects bring the perturbed solution even closer to the true one.

Table 1. Mean flux surface position error versus perturbation order for boundary perturbation from a circular tokamak to a heliotron.

Similarly, when computing finite-pressure equilibria it can be difficult to estimate the Shafranov shift a priori for generating a good initial guess. DESC avoids this by first computing a zero-pressure solution, for which a good initial guess can generally be found by simply scaling the boundary surface. The finite pressure is then added back in as a perturbation, which then often only requires a small number of further iterations to reach convergence as shown in figure 2. In these and the following examples, the rotational transform profile is held fixed during the perturbations, resulting in a change in the toroidal current as the pressure and boundary are varied. In this example, the toroidal current before and after (not shown) is largely similar, with a small amount of additional current required to support the increased pressure.

Figure 2. Comparison of perturbed equilibrium versus true solution for a change in pressure from $\beta =0\,\%$ to $\beta =3\,\%$. The first-order perturbation (red) captures the majority of the differences between the initial (grey) and true (blue) solutions. Second- and third-order effects (not shown for clarity) are visually very similar but significantly reduce the resulting force balance error as shown in table 2.

Table 2. Normalized force balance error versus perturbation order for a pressure increase from $\beta =0\,\%$ to $\beta =3\,\%$.
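The benefit of the higher-order corrections reported in tables 1 and 2 can be seen even in a scalar model problem. The residual $f(x, c) = \mathrm{e}^x - c$ below is a hypothetical stand-in chosen because its exact solution $x = \ln c$ is known in closed form, so the error of each perturbation order can be measured directly; the second-order step reproduces the first two terms of the Taylor series of the exact solution.

```python
import math

# Hedged scalar illustration of first- vs second-order perturbations
# (an illustrative model, not the DESC implementation).
# f(x, c) = exp(x) - c has exact solution x = ln(c).

def perturb(x, c, dc, order=2):
    fx = math.exp(x)            # df/dx
    fc = -1.0                   # df/dc
    fxx = math.exp(x)           # d2f/dx2 (the mixed and d2f/dc2 terms vanish)
    dx1 = -fc / fx * dc         # first-order step
    if order == 1:
        return x + dx1
    dx2 = -fxx * dx1**2 / fx    # second-order correction
    return x + dx1 + 0.5 * dx2

x0, c0, dc = 0.0, 1.0, 0.5
exact = math.log(c0 + dc)
err1 = abs(perturb(x0, c0, dc, order=1) - exact)
err2 = abs(perturb(x0, c0, dc, order=2) - exact)
assert err2 < err1              # the quadratic term recovers most of the error
```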

While in general the two parameters could be varied in any order, or even simultaneously, when solving for a 3-D finite-beta equilibrium we have found it more efficient to vary $\eta _p$ first, followed by $\eta _b$. Varying $\eta _p$ while holding $\eta _b$ fixed at $0$ allows the pressure perturbations to be performed on a 2-D axisymmetric configuration, which reduces the computational cost and ensures the existence of good flux surfaces. After reaching a high-resolution finite-beta axisymmetric equilibrium, 3-D modes are added to the basis functions and the boundary is perturbed to give the final desired 3-D finite-beta equilibrium.
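A minimal sketch of this two-stage continuation schedule is given below. The names are illustrative, not the DESC API, and the solver is a stub that merely records the continuation parameters reached at each step.

```python
# Illustrative sketch of the two-stage continuation schedule
# (hypothetical names, not the DESC API): ramp the pressure
# multiplier eta_p on the 2-D problem first, then ramp the 3-D
# boundary multiplier eta_b.

def perturb_and_solve(state, eta_p, eta_b):
    # stand-in for "perturb, then a few Newton iterations of (2.1)";
    # here the 'equilibrium' simply records the parameters reached
    return {"eta_p": eta_p, "eta_b": eta_b,
            "history": state["history"] + [(eta_p, eta_b)]}

state = {"eta_p": 0.0, "eta_b": 0.0, "history": []}  # zero-pressure, axisymmetric
for eta_p in (0.5, 1.0):                 # stage 1: pressure steps (2-D)
    state = perturb_and_solve(state, eta_p, 0.0)
for eta_b in (0.25, 0.5, 0.75, 1.0):     # stage 2: 3-D boundary steps
    state = perturb_and_solve(state, 1.0, eta_b)

assert state["history"][-1] == (1.0, 1.0)
assert all(eb == 0.0 for _, eb in state["history"][:2])  # pressure ramped first
```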

This procedure is demonstrated in figure 3, where the initial solution is an axisymmetric zero-pressure tokamak. The pressure is then increased, as demonstrated by the Shafranov shift, followed by 3-D deformation of the boundary shape to arrive at a W7-X-like configuration. After each large perturbation, a small number of regular Newton iterations of (2.1) are performed to ensure that an equilibrium has been reached (recall that in the derivation of the perturbations, it was assumed that the initial state before perturbing was an equilibrium). This method was also used to compute the solutions shown in Part 1 (Panici et al. Reference Panici, Conlin, Dudt and Kolemen2022), where a detailed analysis of the force error is given.

Figure 3. Continuation solution for W7-X-like equilibrium. Starting from a zero-pressure 2-D equilibrium, the pressure is increased to $\beta \sim 3\,\%$ in a series of two steps. Then a 3-D perturbation is applied to the boundary, broken up into four steps (two shown) to arrive at the final finite-beta 3-D solution.

Further insight can be gained by looking at the toroidal current profile of the W7-X-like equilibrium as the boundary ratio is varied in figure 4. At each step we keep the rotational transform profile fixed, and so as expected the axisymmetric case ($\eta _b=0$) requires a large toroidal current to generate the poloidal field. As we increase the 3-D shaping, more of the rotational transform is generated by axis torsion and boundary shaping, reducing the required plasma current to near zero.

Figure 4. Toroidal current and rotational transform for W7-X-like equilibrium with varying boundary ratios. When $\eta _b=0$, corresponding to an axisymmetric solution, a large toroidal current is required to generate the rotational transform (which is held fixed as the boundary is varied). Increasing the 3-D shaping allows more of the rotational transform to come from external fields, reducing the plasma current. Note that the current in the final step is still non-zero, as the rotational transform being held fixed was chosen for illustration, and does not correspond to the vacuum rotational transform on W7-X.

An important feature of the aforementioned continuation method is that the solution at each step (after any necessary Newton iterations of (2.1) to refine it) is in fact the ‘exact’ solution to the equilibrium problem with perturbed parameters. It is hence still a valid equilibrium satisfying MHD force balance, and it may possess desirable physics properties, worth studying in their own right, that are absent in the final solution. By applying different perturbations and varying different parameters, one can explore whole families of solutions that are ‘nearby’ a starting equilibrium.

5. Other applications

5.1. Parameter scans

Another important feature of continuation methods is revealing how solutions change as parameters of the problem are varied. In the previous section we used this to find single equilibria, but the method can also be used to explore families of equilibria related by a parameter or group of parameters. In this approach, we find equilibria ‘nearby’ an equilibrium already found, such as the same boundary shape at different values of $\beta$, or boundary perturbations of varying magnitude and shape, such as resonant magnetic perturbations in a tokamak. Traditionally, this requires re-solving the equilibrium for each new value of the parameter. An alternative approach is to apply a perturbation to an initial equilibrium and find the corresponding changes in the flux surfaces. After applying the first perturbation, we re-linearize about the new state and perturb again, stepping through different equilibria for the cost of a single linear system solve at each step. This represents significant computational savings compared with solving the full nonlinear problem for each value of the parameter.
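The re-linearize-and-step procedure can be sketched on a scalar model problem (a hypothetical stand-in for the equilibrium residual, not the DESC API). The residual $f(x, c) = x^2 - c$ has the exact solution $x = \sqrt{c}$, so the accuracy of each intermediate state is easy to verify.

```python
# Hedged scalar sketch of a parameter scan by sequential perturbation.
# f(x, c) stands in for the force balance residual; c is the scanned
# parameter (e.g. a scaled beta).

def f(x, c):
    return x**2 - c

def scan_step(x, c_old, c_new):
    dc = c_new - c_old
    # re-linearize about the current state:
    # dx = -(df/dc)/(df/dx) * dc, with df/dc = -1 and df/dx = 2x
    x = x + dc / (2.0 * x)
    # a couple of cheap Newton corrections reconverge the equilibrium
    for _ in range(2):
        x = x - f(x, c_new) / (2.0 * x)
    return x

cs = [1.0, 1.5, 2.0, 2.5]        # parameter values to scan
x = 1.0                          # exact solution at cs[0]
solutions = []
for c_old, c_new in zip(cs, cs[1:]):
    x = scan_step(x, c_old, c_new)
    solutions.append(x)

# every intermediate state is itself a converged 'equilibrium'
assert all(abs(f(x, c)) < 1e-6 for x, c in zip(solutions, cs[1:]))
```

Each step costs one linearized solve plus a few corrections, rather than a full cold-start solve, which is the source of the savings reported in table 3.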

Figure 5 shows the flux surfaces on the $\zeta =0$ and $\zeta ={\rm \pi}$ planes for a helical stellarator for the same pressure profile scaled to different values of $\beta$. The initial solution was at $\beta = 0\,\%$, and second-order perturbations were applied sequentially to step the pressure up to $\beta =10\,\%$. As in previous examples, the rotational transform is held fixed as the pressure is increased, leading to an increase in the toroidal current. The flux surfaces are indistinguishable from those obtained by solving from scratch at each value of $\beta$, and the computational cost is significantly reduced, as shown in table 3. Performing such a scan requires only a few lines of code, as shown in Listing 1.

Listing 1 Code example for β scan.

Figure 5. Flux surfaces on the $\zeta =0$ and $\zeta ={\rm \pi}$ planes for a helical stellarator. The initial equilibrium had $\beta =0\,\%$, and a series of second-order perturbations were applied to increase the pressure rather than solving a new equilibrium from scratch each time.

Table 3. Computation times for different values of $\beta$ in the pressure scan shown in figure 5, both with and without perturbations, starting from a zero-pressure solution ($\beta =0$, not shown). The first perturbation takes longer as the time includes just-in-time compilation for the specific problem being solved, resulting in significantly faster times for the subsequent perturbation steps which reuse the same compiled code. The ‘without perturbations’ method is already able to take advantage of compiled code from the initial solve and does not see any significant speed-up at subsequent iterations. All computation was done on an AMD Ryzen 7 PRO 4750 U with 32 GB memory.

As another example (figure 6), a D-shaped tokamak was used as the starting point for variations in the rotational transform profile. The initial equilibrium had a rotational transform on axis of $\iota _0=1$, and perturbations were applied to reduce this down to $\iota _0=0.34$. As in the previous examples, the prescribed profiles at each step are pressure and rotational transform. As the rotational transform is decreased, we see increased Shafranov shift and deformation of the flux surfaces as the reduced poloidal field struggles to contain the pressure ($\beta \sim 3\,\%$ in this example).

Figure 6. Top: Flux surfaces on the $\zeta =0$ plane for a D-shaped tokamak with varying axis rotational transform. The initial equilibrium (blue) corresponds to $\iota _0=1$, and perturbations were used to reduce the axis rotational transform without recomputing the entire equilibrium solution. Bottom: Rotational transform profiles corresponding to flux surfaces shown above.

As in the pressure scan example, the solution obtained by perturbation is indistinguishable by eye from that obtained by solving the full equilibrium problem, and is computed significantly faster. Once an initial equilibrium is solved, parameter scans can be performed an order of magnitude faster using perturbations than with other approaches that require a full solution at each step. The code to perform such a scan is shown in Listing 2.

Listing 2 Code example for perturbing ɩ profile.

5.2. Optimization

The perturbation techniques described previously can also be adapted for optimization, where instead of choosing a particular change in the parameters $\Delta \boldsymbol {c}$ we instead let $\Delta \boldsymbol {c}$ be a free parameter that is chosen to minimize a cost function $\boldsymbol {g}(\boldsymbol {x},\boldsymbol {c})$, such as quasi-symmetry error (Boozer Reference Boozer1983; Helander Reference Helander2014), coil complexity (Zhu et al. Reference Zhu, Hudson, Song and Wan2018b, Reference Zhu, Hudson, Lazerson, Song and Wan2018a; McGreivy, Hudson & Zhu Reference McGreivy, Hudson and Zhu2020) or fast particle confinement (Nemov et al. Reference Nemov, Kasilov, Kernbichler and Leitold2008; Velasco et al. Reference Velasco, Calvo, Mulas, Sanchez, Parra and Cappa2021):

(5.1)\begin{equation} \Delta \boldsymbol{c}^* = {\rm arg\,min}_{\Delta \boldsymbol{c}} \boldsymbol{g}(\boldsymbol{x}+\Delta \boldsymbol{x}, \boldsymbol{c} + \Delta \boldsymbol{c}) \quad {\rm s.t.}\ \boldsymbol{f}(\boldsymbol{x}+\Delta \boldsymbol{x}, \boldsymbol{c}+\Delta \boldsymbol{c}) = 0, \end{equation}

where as before $\Delta \boldsymbol {x}$ is an implicit function of $\Delta \boldsymbol {c}$.

This finds the step in parameter space that most decreases the cost function $\boldsymbol {g}$ while maintaining approximate force balance. After applying this perturbation step, a small number of Newton iterations of (2.1) are used to reconverge to the correct equilibrium, without the need for a full ‘cold start’ equilibrium solve. This single ‘warm start’ equilibrium solve in DESC can be contrasted with the method of STELLOPT (Lazerson et al. Reference Lazerson, Schmitt, Zhu and Breslau2020) or SIMSOPT (Landreman et al. Reference Landreman, Medasani, Wechsung, Giuliani, Jorge and Zhu2021) where at each optimization step a series of $N$ cold start equilibrium solves must be performed to find the descent direction, where $N$ is the number of variables being optimized.
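A scalar sketch of this constrained step is given below (an illustrative model, not the DESC implementation). By the implicit function theorem, $\mathrm{d}x/\mathrm{d}c = -(\partial f/\partial x)^{-1}(\partial f/\partial c)$, so the reduced gradient $\mathrm{d}g/\mathrm{d}c = \partial g/\partial c + (\partial g/\partial x)(\mathrm{d}x/\mathrm{d}c)$ can be descended while force balance is re-solved from a warm start at every step; the model constraint and cost below are hypothetical.

```python
# Hedged scalar sketch of the constrained optimization step (5.1).
# f(x, c) = x - c**2 = 0 plays the role of force balance;
# g(x, c) is a model cost function to minimize over c.

def f(x, c):
    return x - c**2

def g(x, c):
    return (x - 1.0) ** 2 + 0.1 * c**2

def reduced_gradient(x, c):
    df_dx, df_dc = 1.0, -2.0 * c
    dg_dx, dg_dc = 2.0 * (x - 1.0), 0.2 * c
    dx_dc = -df_dc / df_dx        # implicit function theorem: dx/dc
    return dg_dc + dg_dx * dx_dc  # total derivative dg/dc along f = 0

c = 2.0
x = c**2                          # start from a solved 'equilibrium'
for _ in range(200):
    c -= 0.05 * reduced_gradient(x, c)  # descend the reduced gradient
    x = c**2                            # 'warm start' re-solve of f = 0

assert abs(f(x, c)) < 1e-12               # force balance maintained
assert abs(reduced_gradient(x, c)) < 1e-6  # at a constrained optimum
```

In the full problem the single division by $\partial f/\partial x$ becomes one linear solve with the equilibrium Jacobian, so each optimization step costs far less than the $N$ cold-start solves of a finite-difference approach.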

In the optimization literature, this is part of a more general class of methods for constrained optimization. Like the perturbations described in § 3, this method can also be extended to higher order. This extension and further details of this method of optimization and applications to quasi-symmetry are given in Part 3 (Dudt et al. Reference Dudt, Conlin, Panici and Kolemen2022).

6. Conclusions

We have demonstrated a new technique for computing stellarator equilibria and for exploring how those equilibria change as parameters are varied. The methods are computationally efficient, offering significant speed-ups over existing techniques, and in many cases enable analyses that were not previously practical. An important future application of these methods is in exploring the connections between different classes of equilibria, such as how tokamaks bifurcate into stellarators and how different classes of quasi-symmetric stellarators may be related.

Acknowledgements

Editor Per Helander thanks the referees for their advice in evaluating this article.

Funding

This work was supported by the US Department of Energy under contract numbers DE-AC02-09CH11466 and DE- SC0022005 and Field Work Proposal No. 1019. The United States Government retains a non-exclusive, paid-up, irrevocable, world-wide licence to publish or reproduce the published form of this paper, or allow others to do so, for United States Government purposes.

Declaration of interests

The authors report no conflict of interest.

Data availability statement

The source codes to generate the results and plots in this study are openly available in DESC at https://github.com/PlasmaControl/DESC or http://doi.org/10.5281/zenodo.4876504.

Footnotes

1 Some care must be taken here, as we are implicitly assuming that the function $f$ is at least $C^2$ continuous. The DESC code assumes the existence of nested flux surfaces so that there is an analytic mapping between real space and magnetic coordinates. In cases where the assumption of nested surfaces is violated, this mapping may not exist. However, we note that the force balance equations are still analytic functions of the dependent variables (it can be shown that they form a high-order polynomial), though their physical meaning will be somewhat unclear in regions where nested surfaces do not exist.

References

Allgower, E.L. & Georg, K. 1990 Numerical Continuation Methods: An Introduction. Springer.
Bernstein, I.B., Frieman, E.A., Kruskal, M.D. & Kulsrud, R.M. 1958 An energy principle for hydromagnetic stability problems. Proc. R. Soc. Lond. A 244 (1236), 17–40.
Boozer, A.H. 1983 Transport and isomorphic equilibria. Phys. Fluids 26 (2), 496–499.
Boozer, A.H. 2015 Stellarator design. J. Plasma Phys. 81 (6), 515810606.
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al. 2018 JAX: composable transformations of Python+NumPy programs.
Dauphin, Y., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S. & Bengio, Y. 2014 Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. Adv. Neural Inf. Process. Syst. 27.
Drevlak, M., Beidler, C.D., Geiger, J., Helander, P. & Turkin, Y. 2019 Optimisation of stellarator equilibria with ROSE. Nucl. Fusion 59 (1), 016010.
Dudt, D., Conlin, R., Panici, D. & Kolemen, E. 2022 The DESC stellarator code suite. Part 3. Quasi-symmetry optimization.
Dudt, D.W. & Kolemen, E. 2020 DESC: a stellarator equilibrium solver. Phys. Plasmas 27 (10), 102513.
Gander, W. 1985 On Halley's iteration method. Am. Math. Mon. 92 (2), 131–134.
Glasser, A.H. 2016 The direct criterion of Newcomb for the ideal MHD stability of an axisymmetric toroidal plasma. Phys. Plasmas 23 (7), 072505.
Glasser, A., Kolemen, E. & Glasser, A.H. 2018 A Riccati solution for the ideal MHD plasma response with applications to real-time stability control. Phys. Plasmas 25 (3), 032507.
Gundersen, G. & Steihaug, T. 2010 On large-scale unconstrained optimization problems and higher order methods. Optim. Meth. Softw. 25 (3), 337–358.
Harris, C.R., Millman, K.J., van der Walt, S.J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N.J., et al. 2020 Array programming with NumPy. Nature 585 (7825), 357–362.
Helander, P. 2014 Theory of plasma confinement in non-axisymmetric magnetic fields. Rep. Prog. Phys. 77 (8), 087001.
Helander, P., Beidler, C.D., Bird, T.M., Drevlak, M., Feng, Y., Hatzky, R., Jenko, F., Kleiber, R., Proll, J.H.E., Turkin, Y., et al. 2012 Stellarator and tokamak plasmas: a comparison. Plasma Phys. Control. Fusion 54 (12), 133.
Hirshman, S.P. & Whitson, J.C. 1983 a Steepest-descent moment method for three-dimensional magnetohydrodynamic equilibria. Phys. Fluids 26 (12), 3553–3568.
Hirshman, S.P. & Whitson, J.C. 1983 b Steepest-descent moment method for three-dimensional magnetohydrodynamic equilibria. Phys. Fluids 26 (12), 3553–3568.
Howell, J.S. 2009 Computation of viscoelastic fluid flows using continuation methods. J. Comput. Appl. Maths 225 (1), 187–201.
Hudson, S.R., Dewar, R.L., Hole, M.J. & McGann, M. 2012 Non-axisymmetric, multi-region relaxed magnetohydrodynamic equilibrium solutions. Plasma Phys. Control. Fusion 54 (1), 014005.
Landreman, M., Medasani, B., Wechsung, F., Giuliani, A., Jorge, R. & Zhu, C. 2021 SIMSOPT: a flexible framework for stellarator optimization. J. Open Source Softw. 6 (65), 3525.
Lazerson, S., Schmitt, J., Zhu, C., Breslau, J., All STELLOPT Developers & USDOE Office of Science 2020 STELLOPT, version 2.7.5.
McGreivy, N., Hudson, S.R. & Zhu, C. 2020 Optimized finite-build stellarator coils using automatic differentiation. Nucl. Fusion 61 (2), 026020.
Nemov, V.V., Kasilov, S.V., Kernbichler, W. & Leitold, G.O. 2008 Poloidal motion of trapped particle orbits in real-space coordinates. Phys. Plasmas 15 (5), 052501.
Nocedal, J. & Wright, S. 2006 Numerical Optimization. Springer Science & Business Media.
Panici, D., Conlin, R., Dudt, D.W. & Kolemen, E. 2022 The DESC stellarator code suite. Part 1. Quick and accurate equilibria computations.
Park, J.-K., Boozer, A.H., Menard, J.E., Garofalo, A.M., Schaffer, M.J., Kaye, S.M., Gerhardt, S.P. & Sabbagh, S.A. 2009 Importance of plasma response to nonaxisymmetric perturbations in tokamaks. Phys. Plasmas 16 (5), 056115.
Park, J.K., Boozer, A.H. & Glasser, A.H. 2007 Computation of three-dimensional tokamak and spherical torus equilibria. Phys. Plasmas 14 (5), 09.
Richter, S.L. & DeCarlo, R.A. 1983 Continuation methods: theory and applications. IEEE Trans. Syst. Man Cybern. 4, 459–464.
Spong, D.A., Hirshman, S.P., Whitson, J.C., Batchelor, D.B., Carreras, B.A., Lynch, V.E. & Rome, J.A. 1998 J$^*$ optimization of small aspect ratio stellarator/tokamak hybrid devices. Phys. Plasmas 5 (5), 1752–1758.
Van Der Walt, S., Colbert, S.C. & Varoquaux, G. 2011 The NumPy array: a structure for efficient numerical computation. Comput. Sci. Engng 13 (2), 22–30.
Velasco, J.L., Calvo, I., Mulas, S., Sanchez, E., Parra, F.I., Cappa, A. & the W7-X team 2021 A model for the fast evaluation of prompt losses of energetic ions in stellarators. Nucl. Fusion 61 (11), 116059.
Zhu, C., Hudson, S.R., Lazerson, S.A., Song, Y. & Wan, Y. 2018 a Hessian matrix approach for determining error field sensitivity to coil deviations. Plasma Phys. Control. Fusion 60 (5), 054016.
Zhu, C., Hudson, S.R., Song, Y. & Wan, Y. 2018 b New method to design stellarator coils without the winding surface. Nucl. Fusion 58, 016008.