1. Introduction
The Rayleigh–Taylor instability (RTI) occurs when a perturbed interface between two fluids of differing density undergoes an acceleration directed from the more dense to the less dense fluid. This instability was first described by Rayleigh (1883) and was later expanded upon by Taylor (1950). The RTI is relevant in a number of natural contexts such as atmospheric inversions and cloud physics (Schultz et al. 2006), astrophysics (Arnett et al. 1989; Cabot & Cook 2006) and art (Zhou 2017a). The RTI also appears in a number of industrial contexts such as combustion (Biferale et al. 2011) and in inertial confinement fusion (ICF) (Lindl et al. 2014). These are only a few of the areas in which the RTI appears, and a more thorough summary is presented by Zhou (2017a,b).
The RTI is frequently considered in the context of the mixing of two components, but in many practical applications, including ICF, the RTI can occur in the presence of more than two components. Prior investigations of the RTI with multiple fluid layers have been conducted through experiment (Jacobs & Dalziel 2005; Adkins et al. 2017; Suchandra & Ranjan 2023), simulation (Youngs 2009, 2017; Morgan 2022a,b) and modelling (Mikaelian 1983, 1990, 1996, 2005; Parhi & Nath 1991; Yang & Zhang 1993). Three-layer RTI configurations differ in several ways from the more commonly considered two-layer case. Of particular note, the addition of a third layer results in two interfaces, and the stability of each interface is controlled by the density of the fluids used in each layer. Jacobs & Dalziel (2005) considered the special case where the upper and lower layers consist of the same fluid with a density $\rho _1$, and the middle layer consists of a fluid with density $\rho _2$ such that $\rho _1 > \rho _2$. This results in an unstable interface between the upper and middle layers, and a stable interface between the middle and lower layers. They then showed that a self-similar three-layer RTI mixing layer in such a configuration will grow linearly with time, as opposed to the quadratic growth predicted in the two-layer case. Jacobs & Dalziel find the slope, $\gamma$, of this linear growth to be $\gamma =0.49\pm 0.03$ in their experiments with an Atwood number of $0.002$. Suchandra & Ranjan (2023) found $\gamma = 0.41\pm 0.01$ for their three-layer experiments with Atwood numbers of $0.3$ and $0.6$.
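For reference, the growth laws at issue can be summarized as follows; this is a sketch using the standard forms implied by the scalings above and by the non-dimensionalization in § 2.4, not a reproduction of the cited authors' exact expressions ($\alpha$ denotes the usual two-layer growth-rate coefficient and $A_{12}$ the Atwood number of the unstable interface):

$h(t) \approx \alpha A g t^{2}$ (two-layer, quadratic growth),

$h(t) = \gamma \sqrt{A_{12}\, g\, h_{2,0}}\; t$ (three-layer, linear growth),

with $h_{2,0}$ the initial middle-layer thickness.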
In ICF applications, Rayleigh–Taylor (and Richtmyer–Meshkov) induced mixing between the layers of capsule material and the deuterium–tritium (DT) fuel contributes to degradation of capsule yield (Haan et al. 2011). The CD Symcap experiments, for example, were a set of separated reactant experiments that were fielded on the National Ignition Facility (NIF) with the goal of studying the amount of mixing that occurs during a capsule implosion (Casey et al. 2014). The CD Symcap experiments consisted of a recessed layer of deuterated plastic (CD) separated from tritium gas in the centre of the capsule by an inert plastic layer (CH). As a result, the measured DT yield signal represents a measure of the amount of mixing between the two reactant layers. However, in experiments such as CD Symcap as well as other ICF capsules with multiple layers, the thermonuclear (TN) output is not simply a function of mixing between the two reactant materials, and the impact of the inert layer must be considered as well. Understanding the factors that influence instability-driven mixing in ICF targets, as well as the effects of mixing on capsule performance, has been cited as a significant technical challenge in the pursuit of fusion ignition and greater fusion yields at the NIF (Lindl 1998; Smalyuk et al. 2019; Abu-Shawareb et al. 2024).
Experiments such as these have motivated the development of models, for use in the computational codes employed to design ICF capsules, that better predict the influence of turbulence and mixing on capsule yield. Generally, an equation for the average reaction rate can be written as
where ${\overline {\dot {r}_{\gamma,\alpha \beta }}}$ is the average binary reaction rate with product $\gamma$ and reactants $\alpha$ and $\beta$, $R_{atomic}$ is the average reaction rate for atomically mixed reactants and $(\cdots )$ represents an augmentation of the reaction rate due to turbulent mixing. One important aspect to consider in developing such a model is the behaviour of that model in the ‘no-mix limit’, that is, whether the model recovers the correct physical limit for immiscible mixing of components. Existing models for reacting flow have treated this augmentation term in different ways, and with different results for the no-mix limit. Ristorcelli (2017) presents a model for reacting flow with mixing that recovers the no-mix limit based on the assumption that the mixing conforms to a beta distribution. However, this model is not extensible to mixtures with more than two components. Morgan et al. (2018a) present another model describing the reaction rate in binary mixtures based on a truncated expansion of the reaction rate equation. A model that is applicable to mixing between an arbitrary number of components is presented by Morgan (2022b), though this model does not recover the no-mix limit. Therefore, it is desirable to construct a model for an arbitrary number of mixing components that also recovers the no-mix limit. A principal motivation for the present work is thus to generate a validated dataset against which an improved reaction rate model can be evaluated.
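One way to make the no-mix limit concrete (a standard argument, not taken from any of the cited models): for a binary reaction, the mean of the product of number densities decomposes as $\overline {n_\alpha n_\beta } = \bar {n}_\alpha \bar {n}_\beta + \overline {n'_\alpha n'_\beta }$. If the reactants are immiscible and never coexist at a point, then $n_\alpha n_\beta = 0$ everywhere, so the covariance must satisfy $\overline {n'_\alpha n'_\beta } = -\bar {n}_\alpha \bar {n}_\beta$ and the averaged reaction rate vanishes. A closure for the augmentation term $(\cdots )$ recovers the no-mix limit if it reproduces this cancellation.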
The CD Symcap experiments represent a valuable dataset with experimental data available for a three-component reacting mixing problem. However, these experiments were quite complicated in design and offered somewhat limited diagnostic data. Additionally, due to the challenging range of physical scales involved, direct numerical simulation (DNS) of ICF capsules such as CD Symcap remains intractable (Bender et al. 2021), and many computational efforts to simulate ICF capsules employ turbulent mix models or simplify the problem to one or two dimensions (Raman et al. 2012; Casey et al. 2014; Smalyuk et al. 2014; Weber et al. 2014; Khan et al. 2016; Gatu Johnson et al. 2017, 2018). The present work therefore adopts a simpler approach based on the non-reacting three-component RTI experiments of Suchandra & Ranjan (2023) and Jacobs & Dalziel (2005). Direct numerical simulation of this simplified configuration is first validated through comparison with available experimental data. The benchmarked simulation is then extrapolated to a reacting configuration.
This work consists of two parts that will be discussed separately. The first part considers a DNS of a Rayleigh–Taylor mixing layer with three components. The physical configuration and fluid properties of this simulation are based on the experiments of Jacobs & Dalziel (2005) as well as Suchandra & Ranjan (2023). Confidence is established in the present simulations through comparison to experimental data as well as through a rigorous numerical convergence study. The computational dataset is then analysed to extract characteristic length scales using turbulence spectra and two-point spatial correlation techniques. Additionally, joint probability statistics of mixing concentration are examined and compared against several multivariate beta distributions of increasing complexity. While similar analyses have been performed in the past for two-component RTI mixing (Ristorcelli & Clark 2004), as well as for three-component passive scalar mixing (Perry & Mueller 2018), to the best of the authors’ knowledge, this work is the first time these statistics have been examined for a three-component RTI mixing problem.
The second part of this work focuses on development of an improved model for the average reaction rate in a multicomponent mixing layer. Analysis is first presented to demonstrate that the model of Morgan (2022a) can be extended to include higher-order statistical moments that were previously neglected while also respecting the no-mix limit. This improved model is then evaluated against the simulation data generated in the first part of this work. A single time instant from the DNS calculation representing significant, although not complete, mixing of the three components is numerically transformed to treat the mixing components as either ‘premixed’ or ‘non-premixed’ deuterium and tritium. The flow field is computationally frozen and a TN burn calculation is performed. The results from the high-fidelity calculation are then compared with a one-dimensional calculation utilizing the improved model.
This work is presented in the following sections. Section 2 discusses the configuration of the simulation, including a description of the numerical methods used, the computational domain, the fluids used and the initial conditions. Several sections are then devoted to the results generated from these simulations. First, § 3 presents a comparison of the simulation results with experimental data as well as verification that these simulations have converged and achieved DNS resolution. Section 4 presents additional analysis of this flow beyond that which was presented in the experiment and focuses on the turbulent aspects of the flow, including analysis of turbulent length scales utilizing spectra and two-point correlations. An evaluation of the joint probability distribution of the concentration of the mixing components is also presented. Section 5 introduces the proposed model for reacting flow in this configuration, and a comparison between the model and a calculation based on the DNS data is made. Lastly, § 6 summarizes the conclusions of this work.
2. Problem set-up
2.1. Numerical methods
The simulations presented in this work are conducted in two stages. The first stage involves simulating a non-reacting three-layer RTI mixing layer in time and validating the simulation through comparison with available experimental data from Jacobs & Dalziel (2005) and Suchandra & Ranjan (2023). As such, the material properties in this stage are chosen to be similar to those of the experiments. The second stage is conducted to validate and assess an improved model for the influence of mixing on reaction rates. This is accomplished in the present study by transforming the simulation state from the first stage into an ICF-relevant configuration and simulating the mixing layer as it undergoes TN burn. This calculation is performed utilizing both the simulation data directly as well as through the improved reaction rate model. Hydrodynamic evolution of the mixing layer is disabled in this second stage so that the mass fraction covariances do not evolve as a result of hydrodynamic motion during the TN burn process. This approach is not meant to represent the physics of ICF targets, but is a useful approach for evaluating the reaction rate model under the idealized case where the second-moment concentration statistics are known exactly (i.e. without the need for a coupled model for hydrodynamic evolution), thus simplifying the comparison between the simulation data and the model. In the first stage of the simulation, a high-order numerical scheme is desirable to capture all of the scales of turbulence with minimal numerical dissipation. In the second stage of the simulation, TN burn physics, radiation diffusion and tabular equations of state are required. To accommodate these differing computational requirements, separate codes are utilized for each stage of the simulation. This two-stage approach has been used previously in simulations of a reacting Rayleigh–Taylor mixing layer (Morgan et al. 2018a; Morgan 2022b), and the following paragraphs outline the computational codes utilized in each stage.
In the first stage, the Miranda hydrodynamics code (Cook 2007, 2009; Cabot & Cook 2006; Morgan et al. 2017) is utilized to simulate an RTI mixing layer with three components in a configuration similar to the experiments of Jacobs & Dalziel (2005) and Suchandra & Ranjan (2023) as it evolves with time. Miranda has seen extensive use in compressible, multicomponent turbulent mixing problems (Cook, Cabot & Miller 2004; Cabot & Cook 2006; Olson & Cook 2007; Olson et al. 2011; Olson & Greenough 2014a,b; Tritschler et al. 2014; Morgan et al. 2018a; Campos & Morgan 2019; Morgan 2022a,b; Ferguson, Wang & Morgan 2023). Miranda solves the compressible Navier–Stokes equations for a non-reacting, multicomponent mixture,
where $\rho$ is the density, $t$ is the time, $u_i$ is the velocity along axis $i$, $x_i$ is the spatial coordinate in axis $i$, $Y_\alpha$ is the mass fraction of species $\alpha$, $J_{\alpha,i}$ is the diffusive mass flux of species $\alpha$, $p$ is the pressure, $\sigma _{ij}$ is the viscous stress tensor, $g_j$ is the gravitational body force in axis $j$, $E$ is the total energy and $q_i$ is the heat flux in axis $i$. These governing equations are solved using a tenth-order compact finite differencing scheme in space and a fourth-order explicit Runge–Kutta scheme in time. Miranda models the subgrid transfer of energy using an artificial fluid large-eddy simulation (AFLES) approach, where an eighth-order filter is applied to selectively add artificial contributions to the viscosity, diffusivity and thermal conductivity. Further information on Miranda, including specifics of the AFLES approach, is presented in Appendix A.1. Section 3.1 demonstrates that the influence of Miranda's artificial fluid approach is negligible in the present study.
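The equation set itself is not reproduced in this extract. For reference, a standard compressible, multicomponent form consistent with the variables defined above is (a sketch of the usual equations rather than a statement of Miranda's exact system)

$\partial (\rho Y_\alpha )/\partial t + \partial (\rho Y_\alpha u_i )/\partial x_i = -\partial J_{\alpha ,i}/\partial x_i,$

$\partial (\rho u_j )/\partial t + \partial (\rho u_i u_j )/\partial x_i = -\partial p/\partial x_j + \partial \sigma _{ij}/\partial x_i + \rho g_j,$

$\partial E/\partial t + \partial [(E + p) u_i ]/\partial x_i = \partial (\sigma _{ij} u_j - q_i )/\partial x_i + \rho u_j g_j,$

with continuity recovered by summing the species equations over $\alpha$.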
In the second stage, a single time instant where there is significant, although not complete, mixing between all three layers is imported into the Ares code (Sharp 1978; Darlington, McAbee & Rodrigue 2001), where the mixing layer is simulated as it undergoes TN burn. Ares has been used extensively in the simulation of ICF targets and experiments (Raman et al. 2012; Casey et al. 2014; Smalyuk et al. 2014; Weber et al. 2014; Khan et al. 2016; Gatu Johnson et al. 2017, 2018; Bender et al. 2021). In this stage, the mixing components are replaced with ICF-relevant materials. The materials considered in this stage include non-reactive CH plastic as well as deuterium (D) and tritium (T) in the form of either CD and tritium gas in a non-premixed configuration, or a mixture of deuterium and tritium (DT) in a premixed configuration. As such, only a single TN reaction is considered,
The rate of reaction with products $\gamma$ and reactants $\alpha$ and $\beta$ is described by
where $\langle \sigma v \rangle _{\alpha \beta }$ is the reaction cross-section, and $n_\alpha$ and $n_\beta$ are the particle number densities. The reaction cross-section is interpolated using the TDFv2.3 library (Warshaw 2001). The ${\rm D} +{\rm T}$ reaction considered here releases an average thermal energy of $17.59\,{\rm MeV}$ per reaction. Local deposition of this energy is assumed such that the average thermal energy is removed from the ion energy field. Charged particle energy is deposited in the same volume with a split between the ion and electron energies, determined according to the Corman–Spitzer model (Corman et al. 1975). Neutrons are assumed to immediately escape the problem, and energy carried by neutron products is removed from the system. Thermal effects and the apportionment of average thermal energy between reactants are determined following the method of Warshaw (2001), and the ion–electron coefficient is determined via the method of Brysk (1974). Further details on the Ares code are presented in Appendix A.2.
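As an illustration of the burn bookkeeping described above, the following sketch evaluates the local D+T reaction rate and the associated energy partition for a single cell. The numerical values of the neutron and charged-particle energies, the placeholder reactivity and the ion/electron split fraction are illustrative assumptions and are not taken from the TDFv2.3 tables or the Corman–Spitzer model.

    # Sketch: local D+T reaction rate and energy partition in one cell.
    # Assumed constants (illustrative only):
    MEV_TO_ERG = 1.602e-6                         # erg per MeV
    Q_TOTAL_MEV = 17.59                           # total energy per reaction (as quoted above)
    E_NEUTRON_MEV = 14.03                         # carried by the neutron, removed from the system
    E_CHARGED_MEV = Q_TOTAL_MEV - E_NEUTRON_MEV   # deposited locally (alpha particle)

    def dt_reaction_rate(n_d, n_t, sigma_v):
        """Volumetric reaction rate r = n_D * n_T * <sigma v> [reactions / cm^3 / s]."""
        return n_d * n_t * sigma_v

    def local_energy_deposition(rate, dt, f_ion=0.8):
        """Charged-particle energy deposited in one cell over a time step dt.

        f_ion is a placeholder for the ion/electron split (Corman-Spitzer in the
        actual code); the neutron energy is assumed to escape immediately."""
        e_dep = rate * dt * E_CHARGED_MEV * MEV_TO_ERG
        return f_ion * e_dep, (1.0 - f_ion) * e_dep   # (ion energy, electron energy)

    # Example usage with illustrative numbers:
    n_d = n_t = 1.0e22                  # particle number densities [1/cm^3]
    sigma_v = 1.0e-17                   # placeholder reactivity [cm^3/s]
    r = dt_reaction_rate(n_d, n_t, sigma_v)
    e_ion, e_ele = local_energy_deposition(r, dt=1.0e-10)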
The proposed reaction rate model is solved using the Ares code coupled with the modular RANSBox library (Morgan et al. 2024). Reynolds-averaged Navier–Stokes (RANS) calculations are performed on a one-dimensional computational mesh with grid spacing in the $z$ dimension set to be identical to the DNS grid spacing in order to compare with DNS data. First and second moments of species concentrations (i.e. mean mass fractions and mass fraction covariances) in the RANS calculations are taken directly from DNS at the same time instant chosen for TN burn. In this way, the reaction rate model may be assessed independently of the accuracy of any coupled model for the evolution of the mass fraction covariances, such as the $k$–$L$–$a$–$\mathcal {C}$ model (Morgan 2022b) or the $R$–$2L$–$a$–$\mathcal {C}$ model (Morgan, Ferguson & Olson 2023).
2.2. Computational set-up
The present work aims to study the RTI in a three-layered configuration. The experiments of Suchandra & Ranjan (2023) and Jacobs & Dalziel (2005) provide useful experimental data for a three-layer Rayleigh–Taylor driven flow, and so the present simulations aim to be similar in configuration to those experiments to permit reasonable comparison. However, the present simulations do not attempt to exactly replicate either experiment. The experiments consist of three layers of fluid with an acceleration due to Earth's gravity, with the upper and lower layers being more dense than the middle layer. This results in an unstable interface between the upper and middle layers, and a stable interface between the middle and lower layers. The experiments of Jacobs & Dalziel stabilize the upper unstable interface through the use of a splitter plate that is withdrawn to initiate the experiment. Suchandra & Ranjan, in contrast, utilize three initially separated streams of gas flowing with a mean velocity that meet at the entrance to a test section where they are allowed to mix. The initial conditions of the simulation, discussed in detail in § 2.3, are chosen to approximate the perturbations that these approaches induce on the interface.
The computational domain is set up to be similar to the experimental configuration. This consists of three layers of fluid, with layer 1 located at the top of the domain, layer 3 located at the bottom of the domain and layer 2 located between layers 1 and 3. Gravitational acceleration is applied in the $-z$ direction. This results in two interfaces being formed, with one between layers 1 and 2, and the other between layers 2 and 3. The fluid properties in each layer are set such that the upper (1–2) interface is initially unstable, and the lower (2–3) interface is initially stable. Additionally, the fluid properties used in layers 1 and 3 are chosen to be identical. A schematic of the domain, including the location of each layer of fluid and the direction of gravity, is presented in figure 1.
The problem domain is rectangular in shape, with dimensions of $L_x=40\, {\rm cm}$, $L_y = 40\, {\rm cm}$ and $L_z=80\, {\rm cm}$ in the $x$, $y$ and $z$ axes, respectively. These dimensions are chosen to be similar to the test section utilized in the experiments of Suchandra & Ranjan (2023), though a few modifications are made to account for numerical limitations. In particular, the experimental test section used by Suchandra & Ranjan is several metres long, rendering simulation of the entire test section computationally intractable. To circumvent this constraint, the frozen turbulence hypothesis of Taylor (1938) is invoked to transform the spatially developing mixing layer into a temporally developing one. This approach was also used by Suchandra & Ranjan in the presentation of their results. Conceptually, this approach considers a box of fluid that starts attached to the trailing edge of the splitter plate used to separate the fluids. The box then moves at a constant velocity equal to the mean flow velocity relative to the splitter plate. Thus, a spatially developing mixing layer with a mean flow relative to the splitter plate instead appears as a temporally developing layer with no mean flow from the perspective of the box. This allows the streamwise axis to be shortened to match the cross-stream direction, with this choice made to result in a domain with a square horizontal cross-section. The vertical axis in the simulation is set to be $80\, {\rm cm}$, which is somewhat taller than the experimental test section height of $60\, {\rm cm}$. This change is made to address numerical stability issues associated with the mixing layer approaching the upper domain edge.
The domain boundaries are located at $\pm 20\, {\rm cm}$ in the $x$ and $y$ axes, and at $-20$ and $+60\,{\rm cm}$ in the vertical axis. The boundary conditions used for this problem are periodic on the $\pm 20\,{\rm cm}$ faces in $x$ and $y$, and no penetration on the top ($+60\,{\rm cm}$) and bottom ($-20\,{\rm cm}$) faces in $z$. The middle layer is initially located with its midpoint at $z=0$, with an initial layer thickness of $h_{2,0} = 3.2\,{\rm cm}$. This results in a greater amount of vertical space above the initial middle layer location than below it. This asymmetry, as with the increased length of the domain in the vertical axis discussed previously, is chosen to eliminate numerical stability issues as the mixing layer approaches the upper domain edge and to allow the developing mixing layer to remain approximately equidistant from the upper and lower domain boundaries.
The fluid properties in the present simulations are chosen to match those used in the experiments of Suchandra & Ranjan (2023). Atmospheric pressure and temperature were not reported in the experiment, and so a temperature of $T_{atm}=290\,{\rm K}$ and a pressure of $P_{atm}=101.3\,{\rm kPa}$ are assumed. The upper and lower layers of fluid in the experiment consisted of air, and the middle layer consisted of a mixture of air and helium with the ratio of these two components controlled to achieve a desired Atwood number. The densities of the pure gas components in the simulation are calculated from their molecular weights and ratio of specific heats as given in table 1 utilizing ideal gas relationships. The properties of the middle layer are calculated from the volume fractions of air and helium required to achieve a specified Atwood number using Miranda's mixture equation of state (Cook 2009).
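To illustrate how the middle-layer composition follows from a target Atwood number, the sketch below computes ideal-gas densities and solves for the helium volume fraction. The molecular weights shown are standard handbook values standing in for table 1, and the simple volume-weighted mixture density is an assumption used in place of Miranda's mixture equation of state.

    # Sketch: middle-layer air/helium composition for a target Atwood number.
    R_UNIV = 8.314       # J / (mol K)
    T_ATM = 290.0        # K (assumed, as in the text)
    P_ATM = 101.3e3      # Pa (assumed, as in the text)

    # Molecular weights [kg/mol]; standard values standing in for table 1.
    M_AIR = 28.97e-3
    M_HE = 4.003e-3

    def ideal_gas_density(M, p=P_ATM, T=T_ATM):
        return p * M / (R_UNIV * T)

    rho_air = ideal_gas_density(M_AIR)   # layers 1 and 3
    rho_he = ideal_gas_density(M_HE)

    def middle_layer_density(atwood, rho_1=rho_air):
        """Layer-2 density from A_12 = (rho_1 - rho_2) / (rho_1 + rho_2)."""
        return rho_1 * (1.0 - atwood) / (1.0 + atwood)

    def helium_volume_fraction(rho_2):
        """Helium volume fraction assuming a volume-weighted mixture density."""
        return (rho_air - rho_2) / (rho_air - rho_he)

    rho_2 = middle_layer_density(atwood=0.3)
    v_he = helium_volume_fraction(rho_2)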
The values of the fluid contributions to the total viscosity, diffusivity and thermal conductivity are found from the fluid properties of each layer. The dynamic viscosity of the upper and lower fluid layers is set directly from the values in table 1, and Miranda's mixture equation of state is used to calculate an effective viscosity for the middle layer mixture. The fluid contribution to the bulk viscosity is neglected. The fluid contribution to the thermal conductivity is calculated as
where ${\textit {Pr}} = 0.7$ is the Prandtl number. Finally, the fluid contribution to the diffusivity is calculated as a binary species diffusivity
where ${Sc} = 0.22$ is the Schmidt number. Note that all of these properties are further adjusted to maintain dynamic similarity between simulation and experiment owing to restrictions imposed by the compressible nature of the Miranda code. These adjustments are discussed in greater detail in § 2.4.
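The two relations referred to above are not reproduced in this extract; written out, the standard definitions consistent with the stated Prandtl and Schmidt numbers are (a sketch of the usual forms, with $c_p$ the mixture specific heat at constant pressure)

$\kappa _f = \mu _f c_p / {\textit {Pr}}, \qquad D_f = \mu _f / (\rho\, {Sc}).$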
Two sets of simulation data are generated in the present work with the goal of studying different aspects of the simulation. The first dataset, termed the ‘single realization’ set, consists of simulations run on successively refined computational meshes with an identical initial condition in order to assess convergence of the simulation solution. Information about these meshes is outlined in table 2. The base resolution for this problem is $60\times 60\times 120$ cells in the $x$, $y$, and $z$ axes, respectively, with these zone counts chosen to result in square zones. Each refined mesh is then generated via integer multiplication of the base mesh resolution, therefore ensuring that square zones are maintained for all computational meshes. Each mesh is labelled $R_N$, where $N$ indicates the multiplication factor used.
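A short sketch of this mesh family, confirming that integer refinement of the base $60\times 60\times 120$ grid preserves square zones over the $40\times 40\times 80\,{\rm cm}$ domain, is given below; the particular refinement factors listed are illustrative, as table 2 is not reproduced here.

    # Sketch: refinement levels R_N with square zones.
    L_X, L_Y, L_Z = 40.0, 40.0, 80.0          # domain size [cm]
    NX0, NY0, NZ0 = 60, 60, 120               # base (R_1) resolution

    for N in (1, 2, 4, 8):                    # illustrative refinement factors
        nx, ny, nz = N * NX0, N * NY0, N * NZ0
        dx, dy, dz = L_X / nx, L_Y / ny, L_Z / nz
        assert abs(dx - dy) < 1e-12 and abs(dx - dz) < 1e-12   # square zones maintained
        print(f"R_{N}: {nx} x {ny} x {nz} cells, dx = {dx:.4f} cm")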
The second set of simulations, termed the ‘ensemble’ dataset, arises by noting that the experimental statistics from Suchandra & Ranjan (2023) and Jacobs & Dalziel (2005) were captured over independent realizations of the flow with inherent randomness in their initial state. It is thus worthwhile to assess the influence of the randomness of the initial condition by conducting multiple runs with varied initial conditions and comparing the results. This dataset consists of the ‘single realization’ dataset results from the $R_4$ mesh, plus eight additional simulations also run on the $R_4$ mesh. The random number generator seed values used to generate the upper and lower interfaces are set to different values for each run, with these values chosen such that a given seed value is not repeated across runs on either interface. This ensures that an independent initial state is generated for each of the nine simulations. A mean value of a parameter of interest, together with a 95 % confidence bound of that mean, is calculated from all runs to provide a measure of the uncertainty in the calculated mean values owing to the influence of initial conditions.
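The ensemble statistics can be formed as in the sketch below, which computes the mean of a quantity of interest over the nine realizations and a 95 % confidence interval of that mean. The use of a Student-t interval is an assumption, since the exact procedure is not specified in this extract, and the sample values shown are purely illustrative.

    import numpy as np
    from scipy import stats

    def ensemble_mean_ci(samples, confidence=0.95):
        """Mean and confidence interval of the mean over independent realizations.

        samples: array of shape (n_runs, ...), e.g. the mixing width h(t) per run."""
        samples = np.asarray(samples)
        n = samples.shape[0]
        mean = samples.mean(axis=0)
        sem = samples.std(axis=0, ddof=1) / np.sqrt(n)       # standard error of the mean
        t_crit = stats.t.ppf(0.5 * (1.0 + confidence), df=n - 1)
        return mean, mean - t_crit * sem, mean + t_crit * sem

    # Example: nine runs of a scalar diagnostic (illustrative values).
    gamma_runs = np.array([0.45, 0.47, 0.46, 0.44, 0.48, 0.46, 0.45, 0.47, 0.46])
    mean, lo, hi = ensemble_mean_ci(gamma_runs)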
2.3. Initial conditions
The interfaces in the experiments of Jacobs & Dalziel (2005) as well as Suchandra & Ranjan (2023) are unforced, and so the initial perturbations to the interface are not well quantified. Experiments in a similar configuration, albeit with only two fluid layers, were conducted by Davies Wykes & Dalziel (2014), which also utilized a splitter plate to initially separate the mixing fluids, similar to the aforementioned experiments. A photograph of the development of the perturbations in space is provided in figure 5 of that work, providing a useful visual depiction of the general shape of the interface perturbations. Observation of this photograph reveals that the initial perturbations appear to be composed of two components. The first is a two-dimensional low-mode component parallel to the axis of the plate withdrawal, most likely induced by vortex shedding off of the back of the splitter plate. The second is a three-dimensional, high-mode component that is likely induced by random imperfections in the splitter plate and other factors that break the symmetry of the otherwise two-dimensional streamwise component of the perturbation. In the present work, it is assumed that the initial perturbations for the experiments of Jacobs & Dalziel and Suchandra & Ranjan are likely similar to those of Davies Wykes & Dalziel owing to the similarity in how the initial interface is formed. This suggests that an appropriate initial perturbation for this configuration consists of a two-dimensional low-mode component in the streamwise ($x$) direction and a three-dimensional, high-mode component imposed uniformly across the interface.
Bender et al. (2021) present a method to specify an initial condition that is the combination of a specified initial perturbation and unknown defects present in the design and construction of an ICF capsule. This results in an initial perturbation profile consisting of a two-dimensional principal component and a three-dimensional noise component. Accordingly, the functional form of the initial perturbations utilized in the present work is based on the specification of Bender et al. Initial conditions of a similar form were also discussed by Thornber et al. (2010), Schilling & Latini (2010) and Thornber et al. (2017). The general calculation procedure used to define the initial interface is outlined in the following paragraphs, and the reader is referred to appendix D of Bender et al. (2021) for in-depth information.
First, it is useful to define the wavenumber components in $x$ and $y$ as
such that the magnitude of the wavevector is $k_{i,j} = \sqrt { k_{x,i}^2 + k_{y,j}^2 }$. Here $i$ and $j$ are integers in the range $[N_{min}, N_{max}]$, with this range set independently for the principal ($p$) and noise ($n$) components of the perturbation spectrum. The amplitude of the principal and noise components of the initial perturbation spectrum are defined as $A_p$ and $A_n$, respectively, and the total perturbation height is $A = A_p + A_n$.
The streamwise, or principal, perturbation component is the component of the perturbation that arises from the influence of the splitter plate initially separating the fluid layers. This is calculated as a one-dimensional Fourier series in $x$ that is extruded along $y$ to generate a two-dimensional interface,
where $A_p$ is the principal perturbation amplitude, $a_{p,i}$ is the mode amplitude of mode number $i$, $\phi _i$ is a random phase offset and the summation is over the integers, $i$, in the range $[N_{p,min} = 10,~N_{p,max} = 20]$. The standard deviation of the perturbation height across the resulting two-dimensional interface is calculated, and the amplitude of the profile is scaled to have a specified standard deviation. The standard deviation of the principal component was chosen to be $0.09\, {\rm cm}$ for this work. The range of mode numbers used, as well as the standard deviation of the principal component of the interface profile, was chosen to result in mixing layer growth that best matched available experimental data.
The noise component of the initial perturbation varies in both horizontal directions and is imposed uniformly across the initial interface. Functionally, this perturbation component has the form
where $\eta _{i,j}^{(\cdots )}$ are amplitude coefficients drawn from a normal distribution with zero mean and a variance defined as
The factor $P$ is chosen to be 1 if $k_{min} \leq k \leq k_{max}$ and 0 otherwise, where $k_{min}$ and $k_{max}$ are the wavenumbers associated with $N_{n,min}$ and $N_{n,max}$, respectively. The summation takes place over the integers $i,j$ in the range $[N_{n,min}=30,~N_{n,max}=35]$. The full two-dimensional random noise perturbation profile, $A_n$, is then scaled similarly to that of the principal perturbation profile, with the profile amplitude scaled to achieve a specified standard deviation in interface perturbation height of $0.09\, {\rm cm}$. As with the principal component of the perturbation profile, the range of mode numbers and the specified standard deviation were chosen to result in mixing layer growth that best matched available experimental data.
The principal and noise components of the initial perturbation spectrum are calculated independently and then added together to produce a single perturbation amplitude. This process is repeated separately for the upper and lower interfaces, with the random number generator seed values used to generate the random phase offsets chosen to be different for each interface. Figure 2 depicts the initial state of the middle layer of fluid for the single realization simulation, showing the initial perturbations on the upper and lower interfaces as well as the difference between the perturbations in the streamwise ($x$) and cross-stream ($y$) axes.
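The construction described in this subsection can be summarized by the following sketch, which builds a principal (streamwise, extruded) component from modes 10–20 and a noise component band-passed around modes 30–35, each rescaled to a 0.09 cm standard deviation. The exact spectral weighting of Bender et al. (2021) (the mode amplitudes $a_{p,i}$, the four-term sine/cosine combination of the $\eta _{i,j}$ coefficients and the precise band-pass factor $P$) is simplified here, so the details of the mode amplitudes are illustrative rather than a reproduction of that specification.

    import numpy as np

    L_X = L_Y = 40.0                        # horizontal domain size [cm]
    STD_P = STD_N = 0.09                    # target standard deviations [cm]

    def scale_to_std(field, std):
        return field * (std / field.std())

    def principal_component(x, y, rng, n_min=10, n_max=20):
        """2-D low-mode component: 1-D Fourier series in x, extruded along y."""
        h = np.zeros_like(x)
        for i in range(n_min, n_max + 1):
            k_x = 2.0 * np.pi * i / L_X
            phi = rng.uniform(0.0, 2.0 * np.pi)       # random phase offset
            h += np.cos(k_x * x + phi)                # unit mode amplitudes (illustrative)
        return scale_to_std(h, STD_P)

    def noise_component(x, y, rng, n_min=30, n_max=35):
        """High-mode component varying in both x and y, band-passed on |k|."""
        h = np.zeros_like(x)
        k_min = 2.0 * np.pi * n_min / L_X
        k_max = 2.0 * np.pi * n_max / L_X
        for i in range(0, n_max + 1):
            for j in range(0, n_max + 1):
                k_x = 2.0 * np.pi * i / L_X
                k_y = 2.0 * np.pi * j / L_Y
                if not (k_min <= np.hypot(k_x, k_y) <= k_max):
                    continue                           # band-pass factor P
                a, b = rng.standard_normal(2)          # zero-mean normal coefficients
                h += a * np.cos(k_x * x) * np.cos(k_y * y) + b * np.sin(k_x * x) * np.sin(k_y * y)
        return scale_to_std(h, STD_N)

    # Build one interface; the upper and lower interfaces use different seeds.
    nx = ny = 240
    x, y = np.meshgrid(np.linspace(-20.0, 20.0, nx),
                       np.linspace(-20.0, 20.0, ny), indexing="ij")
    rng = np.random.default_rng(seed=1)
    eta = principal_component(x, y, rng) + noise_component(x, y, rng)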
Finally, the present simulations assume an initially quiescent state, $\boldsymbol {U}(\boldsymbol {x})=0$. This choice, together with the form of the initial perturbations described in this section, results in the initial state of the present simulations most closely resembling the state immediately after plate withdrawal in the experiments of Jacobs & Dalziel (2005), or the case where the mean flow velocities of all three streams of gas are exactly matched in the experiments of Suchandra & Ranjan (2023). It should be noted, however, that it is practically difficult for the latter experiment to exactly match the mean velocities of the three gas streams, and a mismatch may cause the resulting statistics to contain an influence from shear. The present simulations do not attempt to reproduce this effect, and some differences from the experimental results may be expected depending on the strength of this influence.
2.4. Non-dimensionalization
For the purposes of comparison with experimental data, the non-dimensional scalings utilized by Suchandra & Ranjan (2023) as well as Jacobs & Dalziel (2005) are adopted here. Jacobs & Dalziel find that the mixing width of a self-similar three-layer mixing layer grows linearly as a function of time according to
where $A_{12}$ is the Atwood number between the upper and middle layers, $h_{2,0}$ is the initial middle layer thickness, $g$ is gravity, $t$ is time and $\gamma$ is an unknown coefficient. Equation (2.13) can then be non-dimensionalized by the initial layer thickness to find
where the right-hand side of (2.14) is found by considering the conservation of a passive scalar, $C$, with an initial value of 1 in the middle layer and 0 elsewhere such that
Therefore, the non-dimensional mixing layer width grows linearly with a non-dimensional time defined as
such that $h / h_{2,0} = \gamma \tau$.
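Although the displayed equations are not reproduced in this extract, the non-dimensional time consistent with (2.13) and with the relation $h / h_{2,0} = \gamma \tau$ is (restated here as a sketch of the standard scaling)

$\tau = t \sqrt{A_{12}\, g / h_{2,0}}.$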
Miranda is a compressible code with an explicit fourth-order Runge–Kutta time integration scheme, and so has a time-step limit for numerical stability that is related to the acoustic Courant–Friedrichs–Lewy condition (Cook 2007). At the same time, the experiment took place over several seconds. The consequence of these two factors is that performing the present simulation over the same physical time range as the experiment is intractable. Therefore, it is desired to compress the physical time range to be simulated in order to make this problem more computationally feasible. Equation (2.16) indicates that this can be accomplished by increasing the Atwood number, decreasing the middle layer thickness or increasing the magnitude of gravity. The Atwood number is fixed to facilitate comparison with experiment, and grid resolution restricts the amount by which the middle layer thickness can be decreased. This leaves gravity as the only parameter that can be varied. The gravity is therefore scaled by a factor of $N$ such that $g_{sim} = N g_{exp}$.
Reducing the physical time over which the simulation takes place requires consideration of how other fluid parameters, in particular viscosity and diffusivity, must be scaled to maintain dynamic similarity with the experiment. The Reynolds number is a natural choice to establish a scaling factor between the two cases in order to maintain dynamic similarity. Suchandra & Ranjan (2023) define a characteristic Reynolds number in their experiments as
where $h$ is the mixing layer width, $u'_{z,{rms}}$ is the maximum of the vertical velocity fluctuations and $\nu$ is an average mixture viscosity. However, the effect of compressing the simulation time on $u'_{z,{rms}}$ is not clear a priori, which makes establishing a scaling relationship using (2.17) difficult. Instead, a generic Reynolds number may be defined as
where $L$, $U$ and $T$ are characteristic length, velocity and time scales. The right-hand side of (2.18) is arrived at by noting $U = L / T$, allowing the dependence on the characteristic velocity scale to be removed in favour of the characteristic length and time scales that are better known a priori. Requiring that this generic Reynolds number is similar between the experiment and simulation and assuming that the characteristic length scale is the same in the two cases yields
A relevant time scale to compare the two cases is now required, for which (2.16) may be used. Substituting this into the above expression and noting that $g_{sim} = N g_{exp}$ yields
Therefore, the kinematic viscosity for each fluid is increased by a factor of $\sqrt {N}$ in the simulation, and the time over which the instability develops is similarly compressed by a factor of $\sqrt {N}$. A value of $N=800$ is chosen for the present work to reduce the physical time simulated to a computationally feasible range. A posteriori analysis finds the maximum Mach number to be $M \lessapprox 0.05$, indicating that this time compression does not introduce significant compressibility effects. This time compression also implicitly assumes that the mean flow velocity used for Taylor's frozen turbulence hypothesis discussed in § 2.2 is similarly increased by a factor of $\sqrt {N}$. Additional a posteriori analysis finds that the maximum value of $u_{x,{rms}}$ is less than $10\,\%$ of this scaled mean flow velocity. Thus, this time compression also does not meaningfully impact the frozen flow assumption.
Scaling the kinematic viscosity in this way will also scale the dynamic viscosity, diffusivity and thermal conductivity of the fluids as well. A priori application of the scaling relationship in (2.20) to the present simulations resulted in a Reynolds number calculated using (2.17) that was approximately twice that of the experiment. Accordingly, the ratio $\nu _{sim} / \nu _{exp}$ was set to $2 \sqrt {N}$ to maintain a similar Reynolds number to the experiment.
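The resulting rescaling of the problem can be summarized numerically as below; the experimental kinematic viscosity shown is a representative ambient-air value and is only illustrative.

    import math

    N = 800                        # gravity scaling factor, g_sim = N * g_exp
    G_EXP = 981.0                  # experimental gravity [cm/s^2]
    NU_EXP = 0.15                  # representative kinematic viscosity of air [cm^2/s] (illustrative)

    g_sim = N * G_EXP                          # scaled gravity
    time_compression = math.sqrt(N)            # simulated times are shorter by this factor (~28.3)
    nu_sim = 2.0 * math.sqrt(N) * NU_EXP       # viscosity scaled by 2*sqrt(N), as described above

    # A 10 s experimental interval then corresponds to ~0.35 s of simulated time:
    t_exp = 10.0
    t_sim = t_exp / time_compression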
2.5. Averaging
Variables in turbulent flows may be decomposed into a form that consists of an average value plus fluctuations of the variable about that mean. The Reynolds and Favre decompositions will be considered in the present work. An arbitrary scalar, $f$, may be decomposed according to
where ${\bar {f}}$ is the Reynolds-averaged value of $f$, $f'$ is the fluctuations of $f$ about ${\bar {f}}$, ${\tilde {f}}$ is the Favre average and $f''$ is the fluctuations of $f$ about ${\tilde {f}}$. The Favre and Reynolds averages are related through the density, $\rho$, according to
Averages in both cases are computed over all $x,y$ for a fixed value of $z$ owing to the doubly periodic nature of this problem. This results in the averaged variables being a function of $z$ only.
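In practice these averages are plane averages; a sketch of how the Reynolds and Favre statistics are formed from fields stored as (nx, ny, nz) arrays is given below, with the density weighting for the Favre average following the standard definition.

    import numpy as np

    def reynolds_average(f):
        """Plane (x-y) average at each z; returns a 1-D profile in z."""
        return f.mean(axis=(0, 1))

    def favre_average(f, rho):
        """Density-weighted (Favre) plane average: tilde(f) = bar(rho f) / bar(rho)."""
        return (rho * f).mean(axis=(0, 1)) / rho.mean(axis=(0, 1))

    def fluctuations(f, rho):
        """Reynolds (f') and Favre (f'') fluctuation fields about the plane averages."""
        f_bar = reynolds_average(f)            # broadcasts over the z index
        f_tilde = favre_average(f, rho)
        return f - f_bar, f - f_tilde

    # Example with synthetic fields on a small grid:
    rng = np.random.default_rng(0)
    rho = 1.0 + 0.1 * rng.random((16, 16, 32))
    u_z = rng.standard_normal((16, 16, 32))
    uz_prime, uz_dprime = fluctuations(u_z, rho)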
3. Results: validation
3.1. Direct numerical simulation
This study seeks to perform DNS of a three-layer Rayleigh–Taylor unstable flow. Therefore, a useful first step in the validation of these results is to establish whether these simulations have sufficient resolution to achieve this goal. The DNS regime can be broadly described as the regime where all scales of turbulence are resolved, the flow physics are governed by the physical properties of the fluid, and purely numerical contributions are negligible. This section will demonstrate the latter two points, and additional analysis demonstrating that all relevant scales of this flow are fully resolved by the computational mesh will be presented in § 4.1.
Olson & Greenough (2014a) present a set of parameters that can be used to identify the transition from well-resolved large-eddy simulation (LES) to DNS. In particular, the transition from LES to DNS in these simulations may be identified by the total contributions to the viscosity and diffusivity arising from the numerics becoming smaller than the physical contributions to these quantities. This is defined in Olson & Greenough as $\langle ({\cdot })_a / ({\cdot })_f \rangle < 1$, where $({\cdot })_a$ are the artificial contributions arising from purely numerical sources and $({\cdot })_f$ is the physical contribution arising from the properties of the fluids.
The present work considers three contributions to the total fluid properties. The first is the physical contribution, $({\cdot })_f$, that arises from the material properties of the fluid. Second is the explicit contribution arising from Miranda's artificial fluid approach, $({\cdot })_a$, with these contributions calculated according to the method described in Appendix A.1. The sum of the physical and AFLES terms at each grid point is provided as an output from Miranda, and this output is used for these first two terms. Last are implicit contributions to viscosity and diffusivity arising from the use of a numerical method to solve the governing equations. These contributions are related to how well the computational grid resolves the flow field and must be calculated from the simulation data a posteriori.
A method for calculating an effective viscosity and diffusivity arising from the use of a numerical method is described by Olson & Greenough (2014a), and this method is utilized here to provide an estimate for this contribution. The grid viscosity is approximated as
where $\mu _G$ is the grid-dependent viscosity, $\rho$ is the fluid density, $S$ is the magnitude of the strain rate tensor and $\Delta x$ is the grid spacing. Here $C_\mu$ is a code-dependent coefficient, with a value of $8.11$ determined by Olson & Greenough for the Miranda code, corresponding to the transition from LES to DNS when $\mu _G / \mu _f = 1$. A similar definition is given for grid-dependent diffusivity as
where $c_s$ is the local speed of sound and $Y$ is a mass fraction, chosen to be the middle layer mass fraction, $Y_2$, in this work. Here $C_D$ is again a code-dependent coefficient with a value of $0.039$ determined by Olson & Greenough (2014a) for the Miranda code, corresponding to the transition from LES to DNS when $D_G / D_f = 1$. These contributions lead to a natural definition of the total fluid viscosity and diffusivity as
and
which then allow for a similar expression of the DNS transition criteria of Olson & Greenough (2014a) as
where the right-hand side of the inequality is now $2$ instead of $1$ owing to the inclusion of the molecular fluid property in the definition of the total fluid property in the present work. The maximum value of ${\overline { ( {\cdot } )_t / ( {\cdot } )_f }}$ is selected at each time instant in order to calculate a single value of this ratio for each fluid property.
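The selection of a single value of the ratio at each time instant, as described above, amounts to the following; the field names are placeholders for the corresponding Miranda outputs and the a posteriori grid contributions.

    import numpy as np

    def dns_criterion(prop_total, prop_fluid):
        """Max over z of the plane-averaged ratio of total to fluid property.

        prop_total = fluid + artificial + grid contribution (shape (nx, ny, nz));
        the criterion of (3.5) is satisfied when the returned value is less than 2."""
        ratio_profile = (prop_total / prop_fluid).mean(axis=(0, 1))   # x-y average at each z
        return ratio_profile.max()

    # Example usage with placeholder fields:
    rng = np.random.default_rng(0)
    mu_f = 1.8e-4 * np.ones((16, 16, 32))
    mu_t = mu_f * (1.0 + 0.5 * rng.random((16, 16, 32)))
    satisfied = dns_criterion(mu_t, mu_f) < 2.0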
The ratios of the total to fluid properties for viscosity and diffusivity as a function of time, as calculated through (3.5) on successively refined simulation grids, are plotted in figure 3. It is clear that these simulations satisfy this criterion in the mechanical fields between the $R_4$ and $R_8$ meshes, and subsequently refined meshes demonstrate convergence of the effective viscosity towards the physical viscosity. This criterion is similarly satisfied for the scalar fields starting around the $R_4$ mesh, with successive refinements resulting in the effective diffusivity approaching the physical diffusivity.
The convergence of these simulations may also be established by noting the factor of $\Delta x^4$ in (3.1) and (3.2). The values of these two equations should decrease proportionally to $\Delta x^4$ if the flow gradients are fully resolved. The maximum values of ${\overline {\mu _G / \mu _f}}$ and ${\overline {D_G / D_f}}$ at a non-dimensional time of $\tau =8.2$ are presented in figure 4. It can be observed in these figures that there is a clear change in the slope of decay between coarse and fine resolutions. This occurs as the gradients of the flow begin to become fully resolved, rather than being influenced by the numerical method. Furthermore, a rate of decay of grid viscosity and diffusivity that is very close to $\Delta x^4$ is observed at the highest resolution meshes. This is further evidence that these simulations have reached a state where all flow gradients are fully resolved and are not influenced by the numerical method.
3.2. Qualitative comparison
It is useful to qualitatively compare the experimental results to those obtained from these simulations, as this provides a measure of the similarity between the numerical and physical results, particularly with regard to whether the simulation is in a similar flow regime to the experiment. The present simulations do not attempt to exactly reproduce the experimental state of either Jacobs & Dalziel or Suchandra & Ranjan, and so it is unlikely that a directly comparable simulation image can be found. Instead, multiple planes from the simulation are presented to provide a general sense of the state of the mixing layer, including large-scale behaviour as well as small-scale structures. This also provides a sense of how the randomness of the initial perturbation influences the qualitative appearance of the mixing layer. Figure 5 depicts five $x$–$z$ planes of middle mass fraction from the simulation at (a,d) $y=\pm 9\, {\rm cm}$, (b,e) $y=\pm 3.7\, {\rm cm}$ and (c) $y=0\, {\rm cm}$, as well as (f) the Mie scattering visualization image from figure 5(a) of Suchandra & Ranjan (2023). The simulation images depict a subset of the simulation domain in order to produce images with a similar physical scale to the experimental image. The horizontal extents for the simulation images are $x=[-15, 15]\, {\rm cm}$ and the vertical extents are $z=[-10,20]\, {\rm cm}$. These images are presented at a non-dimensional time of $\tau =8.2$ in both simulation and experiment.
One immediately notable difference between the experimental and simulation images is that the experimental images demonstrate a high degree of streakiness, suggesting a greater degree of small-scale entrainment and less diffusive mixing of the unseeded gas than is observed in the simulation. This is likely due to the Mie scattering diagnostic used to generate the experimental image, and arises because the tracer particles used in Mie scattering behave as Lagrangian tracers rather than exactly following and diffusing with the gas they are seeded into (Anderson et al. 2015).
Good qualitative agreement is observed between experiment and simulation in terms of overall mixing layer extents and large-scale flow structure size. Generally good agreement in the medium-to-small scales is also observed, though the size and quantity of small-scale structures in the simulation appear qualitatively slightly different from the experiment depending on the image considered. A significant amount of small-scale mixing can nonetheless be observed by noting the fluctuations in the mass fraction fields, particularly within the large structures. Reynolds number analysis in § 3.3.2 and length scale analysis in § 4.1 also indicate good overall agreement between experiment and simulation at this time, however, and so qualitative differences in scale size are likely simply an artifact of the specific planes considered. Nevertheless, the range of scales observed indicates that the present simulations are in a similar flow regime to the experiment.
3.3. Experimental comparison
This section focuses on comparison with available experimental data from Jacobs & Dalziel (2005) and Suchandra & Ranjan (2023) to validate the present simulations. It should be emphasized that the present simulations consider a similar configuration to those experiments but do not attempt to exactly reproduce either experiment. Of particular note is the potential influence from shear in the experiments of Suchandra & Ranjan discussed in § 2.3. Thus, some of the experimental results may include an influence from shear that is not reproduced in the present simulations, and so some disagreement in these cases may be expected.
3.3.1. Mixing layer width
The first metric to be considered is the mixing layer width as a function of time. Equation (2.14) shows that the mixing layer width can be defined in terms of the maximum value of a conserved passive scalar, $C$, with a value of 0 outside of the middle layer and an initial value of 1 inside the middle layer. The definition $C = \rho Y_2 / \rho _2$, where $\rho$ is the density, $Y_2$ is the mass fraction of the middle fluid and $\rho _2$ is the density of the pure middle fluid, is used for the present work. This definition satisfies the assumptions of (2.14) with the exception that $\rho Y_2$ is not strictly a passive scalar.
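Under this definition, the mixing width follows from the plane-averaged scalar profile. The sketch below uses the reading of (2.14) in which conservation of $C$ gives $h = h_{2,0} / \max_z \bar{C}$; this is an interpretation of the text rather than a quoted formula, and the field names are placeholders.

    import numpy as np

    H_20 = 3.2          # initial middle layer thickness [cm]

    def mixing_width(rho, Y2, rho2_pure, h_20=H_20):
        """Mixing width from the conserved scalar C = rho * Y2 / rho2_pure.

        rho, Y2: 3-D fields of shape (nx, ny, nz); rho2_pure: density of the
        pure middle fluid.  Conservation of C implies h / h_20 = 1 / max_z(C_bar)."""
        C = rho * Y2 / rho2_pure
        C_bar = C.mean(axis=(0, 1))          # plane average at each z
        return h_20 / C_bar.max()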
Figure 6 depicts the mixing layer width as a function of time for (a) the single realization case as a function of mesh resolution and (b) the ensemble averaged case with a mean mixing layer width and associated 95 % confidence interval of the mean. Also shown are the experimental data points from Suchandra & Ranjan (2023) and Jacobs & Dalziel (2005). The data points from the experiments of Suchandra & Ranjan arise from four separate experimental runs. These consist of two runs utilizing a Mie scattering diagnostic at $\tau \approx 8.2$ and $\tau \approx 15.8$, and two additional runs utilizing a particle image velocimetry (PIV)/planar laser induced fluorescence (PLIF) diagnostic at $\tau =11.1$ and $\tau =17.3$. The PIV/PLIF data are indicated by a black dot surrounded by a red circle in figure 6 to distinguish them from the Mie scattering data. Finally, a line corresponding to a value of $\gamma =0.5$ is shown for reference.
The ensemble averaged mixing layer width and associated 95 % confidence interval encompass almost all of the Mie scattering experimental data of Suchandra & Ranjan (2023) even at late time, indicating good agreement between the present simulations and those experiments. Agreement with the PIV/PLIF data is not observed, with the mixing layer width in both cases lying outside the 95 % confidence band of the simulation ensemble. It should be noted, however, that the PIV/PLIF data points also lie fairly far from the linear growth trendline that well describes the other experimental data. It is possible that some aspect of the PIV/PLIF experiments resulted in differences in the initial condition that in turn resulted in a different mixing layer growth over time. Another potential source of this disagreement could stem from differences in the behaviour of (2.14) when using a PLIF diagnostic versus a Mie scattering diagnostic. Given that these data points are not a time series, the temporal trend of these two cases and how they compare to the present simulations is unknown.
The present simulations generally overpredict the mixing layer width values from the $At = 0.002$ experiments of Jacobs & Dalziel (2005) at early times. This early time overprediction is likely due to the fact that the perturbations in the experiments of Jacobs & Dalziel may differ in amplitude and mode content due to the differing Atwood numbers as well as the use of liquids versus gases, even though the functional form of the perturbations is likely similar. As a result, differences from the data of Jacobs & Dalziel, particularly at early times, are expected. This may also explain why differences in mixing layer width between the experiments of Jacobs & Dalziel and Suchandra & Ranjan are observed at early times. The simulation data agree well with the experimental data of Jacobs & Dalziel starting around $\tau \approx 8$, with the experimental data and 95 % confidence bound of the ensemble averaged simulations overlapping.
The single realization case from the present simulations also demonstrates a level of agreement with the experimental data similar to that of the ensemble averaged case. The mixing layer width as a function of time shows very little change with increasing mesh resolution, indicating good convergence of this metric with increasing grid resolution. A slight underprediction of the mixing layer width is observed at the latest times, though the fact that the ensemble average data do not demonstrate a similar underprediction suggests that this behaviour is likely an artifact of the specific initial condition used for the single realization simulation.
Finally, it can be observed that the ensemble average mixing layer width grows linearly for $\tau \gtrapprox 11$. Fitting the data after this time finds a slope of $\gamma =0.46 \pm 0.006$. This agrees well with the value of $\gamma =0.49 \pm 0.03$ found by Jacobs & Dalziel (2005), and is slightly greater than the value of $\gamma =0.41\pm 0.01$ found by Suchandra & Ranjan (2023). This agreement with the slope of linear growth, as well as the good agreement with the majority of experimental data described previously, gives confidence that the present simulations are similar to the experiments and are accurately describing the physics of a three-layer RTI mixing layer.
3.3.2. Reynolds number
It is useful to estimate the value of the Reynolds number of this flow, as this metric provides a useful way to establish how turbulent the flow is. The value of the Reynolds number from the experiments of Suchandra & Ranjan (2023) is also available, making this a useful metric to compare against the experiment. For this three-layer Rayleigh–Taylor configuration, Suchandra & Ranjan utilize the definition
where $u'_{z,rms}$ is the maximum of the root mean square (r.m.s.) of the vertical velocity fluctuations at each time instant, $h_c$ is the mixing layer width as defined in (2.14) and $\nu _{avg}$ is an average mixture viscosity of the three fluids in the mixing layer.
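Written out, this definition is (restated here for reference, since the displayed equation is not reproduced in this extract)

${Re} = u'_{z,rms}\, h_c / \nu _{avg}.$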
A plot of Reynolds number versus time using this definition is presented in figure 7 for both the single realization and ensemble average cases. The Reynolds numbers reported from the experiments of Suchandra & Ranjan (2023) at $\tau =8.2$ and $\tau =17.3$ are indicated by the black circles. A 95 % confidence interval of the experimental Reynolds numbers was estimated based on the reported 95 % confidence intervals of $u'_{z,rms}$ at each time instant, and is indicated by the error bars on the experimental data points. Excellent agreement between the experiment and the present simulations is observed at $\tau =8.2$, with similar mean values and overlapping 95 % confidence bands. There is a larger difference between the mean values of the Reynolds number at $\tau =17.3$, though the 95 % confidence bounds of the two cases overlap, suggesting the two values are within the uncertainty of the measurement. Nevertheless, there is a notable difference between the experiment and simulation values at late time.
There are several potential reasons for the underprediction of the late time Reynolds number observed in the present simulations. First, a decrease in the growth rate of the mixing layer at late times can be observed in the ‘single realization’ simulations, and this likely also translates into a similar behaviour for the Reynolds number. Thus, this behaviour is likely an artefact of the specific initial condition used for the ‘single realization’ data, and explains why a similar behaviour is not observed in the ensemble averaged Reynolds number data. Second, the choice of the factor of two to scale the viscosity likely also contributes to this disagreement. It is possible that a different scaling factor could improve late time agreement with the experiment, but this would simultaneously degrade the observed agreement at $\tau =8.2$. Model validation discussed in § 5 will be conducted at this earlier time, meaning it is preferable to maintain the level of agreement observed at this time. Finally, it is possible that differing boundary effects exist at late time due to differences in the experimental dimensions and the simulation domain, and this could result in statistical uncertainty becoming large at late times. Thornber (Reference Thornber2016) showed that statistical uncertainty becomes important once the integral scale of the flow reaches ${\approx }10\,\%$ of the domain width. As will be shown in § 4.1, the integral scale in the present simulations is approximately $5\,\%$ of the domain width at $\tau =8.2$, and so saturation of the simulation domain and greater statistical uncertainty are likely at later times. Thus, the underprediction of the late-time Reynolds number is also potentially a consequence of the saturation of the computational domain by large modes as well as confinement effects in the experiment.
3.3.3. Density profiles
Next, it is useful to assess agreement with experimental data through comparison with spatial profiles of density. The profiles of density from the experiments of Suchandra & Ranjan (Reference Suchandra and Ranjan2023) are presented at non-dimensional times of $\tau =11.1$ and $\tau =17.3$. The mixing layer widths associated with these runs are indicated in figure 6 as data points surrounded by a circle. Notably, these data points do not lie on the same $\gamma \sim 0.5\tau$ line that the other experimental runs, as well as the simulation data, do. Since the profiles of density are expected to be related to the mixing layer width, comparisons between experimental and simulation data will be made at a time when the mixing layer width is matched between simulation and experiment. For the present simulations, this corresponds to comparing the experimental data at $\tau =11.1$ with simulation data at $\tau =13.75$. Profiles of density at these times are presented in figure 8.
Good agreement between experiment and simulation, in particular the maximum and minimum values of density, as well as the extents of the mixing layer, is observed. A shift in the location of the centre of the mixing layer is observed between simulation and experiment, however, with the location of minimum average density in the simulation appearing to be slightly lower than in the experiment. Jacobs & Dalziel (Reference Jacobs and Dalziel2005) note that a rise in the centreline of the mixing layer over time due to buoyancy effects is expected. While a slight rise in the mixing layer centreline in the simulation can be observed, it is possible that the mechanisms driving this behaviour did not scale exactly with the time compression utilized in the present simulations, resulting in slightly less vertical shift in the mixing layer centreline.
A slight difference in the width of the upper and lower interfaces is also observed, though the overall width is well matched. This may be due to differences in the wavelength and amplitude of the initial perturbations between the experiment and simulation, resulting in slightly different amounts of growth of the upper interface and erosion of the lower interface. Also of note is that figure 7 of Jacobs & Dalziel (Reference Jacobs and Dalziel2005) demonstrates skewed concentration profiles similar to what is observed in the present simulations, though those experiments do not extend to the same non-dimensional time, precluding a direct comparison. As discussed in § 2.3, the experiments of Reference Suchandra and RanjanSuchandra & Ranjan may include an influence from shear that is not reproduced in the present simulations. Approximation of a Richardson number using figures 4.10 and 4.15 in Suchandra (Reference Suchandra2022) to determine ${\rm d}\rho /{\rm d}z$ and ${\rm d}u_x/{\rm d}z$ finds ${Ri} = ( g [{\rm d}\rho /{\rm d}z ] ) / ( \rho [ {\rm d}u_x/{\rm d}z ]^2 ) \approx 0.5$ at a non-dimensional time of $\tau =11.1$. This indicates that while buoyancy effects are dominant at this time, shear effects may also have an influence. It is therefore possible that this influence results in the more Gaussian density profile observed in the experimental data owing to shear-driven mixing on the lower interface. Thus, a difference may be expected given that this influence is not reproduced in the present simulations. Finally, excellent agreement between the profiles of average density is observed with increasing grid resolution. This indicates that this metric is well converged with respect to grid resolution for the highest resolution runs.
It is also informative to look at spatially averaged profiles of r.m.s. density fluctuations as a function of vertical position. These profiles are presented in figure 9 at $\tau =11.1$ and $\tau =13.75$ in the experiment and the simulation, respectively. Good agreement between simulation and experiment is observed for the profiles of density fluctuations, with some minor differences. In general, the width of the region with elevated density fluctuations appears to be larger than in the experiment, although the peak magnitude of the fluctuations is similar between the two cases. Additionally, a ‘shoulder’ near $z\approx -5$, corresponding to the approximate location of the lower interface, is observed in both the experiment and ensemble averaged simulation data, though this shoulder is more notable in the simulation than the experiment. This shoulder is associated with entrained fluid between the lower and middle layers as opposed to fully mixed fluid owing to the stable $(Y_2,Y_3)$ interface. The overall extent of the region with elevated r.m.s. density fluctuations is slightly larger in the simulation than the experiment, particularly in the region above the interface. This is in agreement with the observation that the upper part of the mixing layer is slightly wider in the simulation than the experiment based on the mean density profiles in figure 8. Finally, as with the profiles of density, good agreement between the profiles is observed for increasing mesh resolution, indicating that these simulations are well converged at the highest resolutions.
3.3.4. Velocity profiles
The profiles of horizontal ($x$) and vertical ($z$) r.m.s. velocity fluctuations are presented in figure 10. Comparison plots are again presented at non-dimensional times of $\tau =11.1$ in the experiment and $\tau =13.75$ in the simulation in order to compare at a time when the mixing layer widths are matched. The data are non-dimensionalized by $A_{12} \sqrt {h_{0,2} g}$ to facilitate comparison with the experimental data, with this scaling chosen as it worked well to collapse the velocity profiles presented by Suchandra & Ranjan (Reference Suchandra and Ranjan2023). Good agreement between the profiles of horizontal velocity fluctuations in the simulation and experiment is observed. The experiment demonstrates a slightly flatter profile of $u'_{x,rms}$ than the simulation data, though the magnitudes of the data are well matched between experiment and simulation.
A greater disagreement between experiment and simulation is observed for the vertical component of the velocity fluctuations, with the peak values of $u'_{z,rms}$ approximately 30 % greater in the simulation than the experiment at the time instant considered here. One relevant factor in explaining this difference is that the vertical component of the Reynolds stress anisotropy tensor, defined as $b_{ij} = {\overline {u'_i u'_j}} / {\overline {u'_k u'_k}} - \delta _{ij}/3$, has a value of $b_{zz} \approx 0.28$ for the present simulations and a value of $b_{zz} \approx 0.18$ in the experiment. Previous studies have shown that a typical value for a Rayleigh–Taylor mixing layer is $b_{zz} \approx 0.3$, albeit with these results found for the two-layer configuration (Livescu et al. Reference Livescu, Ristorcelli, Gore, Dean, Cabot and Cook2009, Reference Livescu, Ristorcelli, Petersen and Gore2010). On the other hand, a lesser degree of anisotropy of $b_{zz} \approx 0.12$ may be expected for a Kelvin–Helmholtz mixing layer (Bell & Mehta Reference Bell and Mehta1990; Morgan, Schilling & Hartland Reference Morgan, Schilling and Hartland2018b; Morgan et al. Reference Morgan, Ferguson and Olson2023). An anisotropy between these two values may be expected in a combined Rayleigh–Taylor/Kelvin–Helmholtz mixing layer depending on the relative strength of the two mechanisms (Akula et al. Reference Akula, Suchandra, Mikhaeil and Ranjan2017; Morgan et al. Reference Morgan, Schilling and Hartland2018b). As discussed in § 3.3.3, while buoyancy effects are dominant in the experiment, shear effects may also have an influence at this time. As a result, the experimental anisotropy may include an influence from shear effects that are not reproduced in the present simulations, and so a disagreement in the level of anisotropy may be expected.
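As an illustration of how the anisotropy metric quoted above can be evaluated, a minimal sketch is given below; the velocity-fluctuation arrays and the averaging region are placeholders rather than the implementation used for the present simulations.

```python
import numpy as np

def b_zz(up_x, up_y, up_z):
    """Vertical diagonal component of the Reynolds stress anisotropy tensor,
    b_zz = <u_z' u_z'> / <u_k' u_k'> - 1/3, computed from arrays of velocity
    fluctuations sampled over the averaging region (e.g. the mixing layer)."""
    twice_tke = np.mean(up_x**2 + up_y**2 + up_z**2)
    return np.mean(up_z**2) / twice_tke - 1.0 / 3.0
```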
4. Results: turbulence parameters
Now that reasonable confidence in the grid convergence and physical accuracy of the present simulations has been established, the following sections focus on analysis of additional turbulent quantities unavailable in the experimental data reported by Suchandra & Ranjan (Reference Suchandra and Ranjan2023) and Jacobs & Dalziel (Reference Jacobs and Dalziel2005). Specifically, § 4.1 examines the size and directional anisotropy of various turbulent length scales in this flow. Section 4.2 then evaluates potential statistical descriptions of three-component mixing in an RTI mixing layer through analysis of the probability density functions (p.d.f.s) of species concentration. While both of these analyses have been performed for a two-component mixing layer, similar analysis has not been performed for the three-component case. It is thus worthwhile to examine how the better-understood results for two-component systems may change in this three-component case.
This analysis is presented at a time instant corresponding to $\tau =8.2$, with this time chosen as there is significant, although not complete, three-component mixing occurring, with a peak value of the averaged middle layer mass fraction of ${\tilde {Y}}_{2,max} \approx 0.2$. An image depicting several $x$–$z$ planes of data at this time instant was presented previously in figure 5. Profiles of ${\tilde {Y}}_1$, ${\tilde {Y}}_2$ and ${\tilde {Y}}_3$, as well as corresponding second moments at this time, are shown in figure 11. It is interesting to note that the ${\widetilde {Y''_1 Y''_2}}$ mass fraction covariance undergoes a sign change around $z=0$. This behaviour is not observed in two-component mixing layers, where a decrease in one component must necessarily correspond to an increase in the other and the covariance must therefore be less than or equal to zero. The covariances, including the impact of the observed sign change, will also be discussed in the context of concentration probability in § 4.2.
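A minimal numerical illustration of why this sign change is possible with three components is given below; the parcel values are contrived, and equal densities are assumed for simplicity so that Favre and Reynolds averages coincide.

```python
import numpy as np

# Half the parcels are pure fluid 3, half are an equal mixture of fluids 1
# and 2. Y1 and Y2 then rise and fall together, so their covariance is
# positive, which cannot occur in a two-component layer where Y2 = 1 - Y1.
Y1 = np.array([0.0, 0.0, 0.5, 0.5])
Y2 = np.array([0.0, 0.0, 0.5, 0.5])
cov_12 = np.mean(Y1 * Y2) - np.mean(Y1) * np.mean(Y2)
print(cov_12)   # 0.0625 > 0
```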
4.1. Turbulent length scales
One metric of interest in this flow is the determination of turbulent length scales in both the gravity-aligned and perpendicular axes. This has been examined in two-component Rayleigh–Taylor flows (Cook et al. Reference Cook, Cabot and Miller2004; Ristorcelli & Clark Reference Ristorcelli and Clark2004; Boffetta et al. Reference Boffetta, Mazzino, Musacchio and Vozella2010; Cabot & Zhou Reference Cabot and Zhou2013; Morgan et al. Reference Morgan, Olson, White and McFarland2017; Zhou & Cabot Reference Zhou and Cabot2019; Zhao, Betti & Aluie Reference Zhao, Betti and Aluie2022), but less analysis on length scales in three-component RTI flows has been conducted. The size of the integral length scale, $\lambda _L$, the Taylor microscale, $\lambda _T$, the Batchelor scale, $\lambda _B$, and the Kolmogorov scale, $\lambda _\eta$, are considered in the present work. Suchandra & Ranjan (Reference Suchandra and Ranjan2023) also present several length scale estimates, which are useful for comparison with the present simulations. Length scales in the present simulations are calculated utilizing the Reynolds number, turbulent spectra and two-point correlations. The following paragraphs outline the methodologies used to determine the turbulent spectra and two-point correlations.
Spectra in the present work are found by following the procedure described by Tritschler et al. (Reference Tritschler, Olson, Lele, Hickel, Hu and Adams2014). The Fourier transform of the parameter of interest is taken over $x$–$y$ planes in the region where ${\tilde {Y}}_2 \geq 0.1$, with this region chosen to represent the central part of the mixing layer. These two-dimensional spectra are averaged over all planes in the central mixing layer. The magnitudes of the Fourier coefficients for the species mass fraction spectra are then calculated as
and, for the turbulent kinetic energy (TKE) spectrum, as
where $\langle {\cdot } \rangle$ denotes an average over the region where ${\tilde {Y}}_2 \geq 0.1$, $\widehat {({\cdot })}$ denotes the Fourier transform and $( {\cdot } )^*$ indicates a complex conjugate. The magnitude of each Fourier coefficient is then radially binned to produce a single one-dimensional spectrum for each variable at the chosen time instant. The average over the region where ${\tilde {Y}}_2 \geq 0.1$ is chosen to consider only the central mixing layer, and to be similar to the region selected for the two-point correlations, below.
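A minimal sketch of this planar-spectrum procedure is given below; it is not the code used for the present simulations, and details such as the normalization of the transform and the radial bin spacing are assumptions.

```python
import numpy as np

def radial_spectrum(planes, dx):
    """Radially binned spectrum of a fluctuating field, averaged over the
    supplied x-y planes (e.g. those where the averaged Y2 >= 0.1).
    'planes' has shape (n_planes, nx, ny) and dx is the grid spacing."""
    n_planes, nx, ny = planes.shape
    spec2d = np.zeros((nx, ny))
    for plane in planes:
        fhat = np.fft.fft2(plane) / (nx * ny)
        spec2d += np.abs(fhat)**2            # squared magnitude of coefficients
    spec2d /= n_planes                        # average over planes in the layer
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    kmag = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    kbins = np.arange(0.0, kmag.max() + kx[1], kx[1])   # uniform radial bins
    idx = np.digitize(kmag.ravel(), kbins)
    spec1d = np.array([spec2d.ravel()[idx == i].sum()
                       for i in range(1, len(kbins))])
    return 0.5 * (kbins[:-1] + kbins[1:]), spec1d
```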
The spectra of the TKE, $Y''_1$, $Y''_2$ and $Y''_3$ at $\tau =8.2$ for multiple grid resolutions are presented in figure 12. These spectra do not exhibit a notable inertial range, which is unsurprising given the relatively low Reynolds number of this flow. One notable aspect in these plots is that the species spectra do not appear to collapse with increasing resolution as may be expected, even though the spectra of the TKE do. There are a number of possible explanations for this behaviour, though it is most likely due to the persistence of very low amplitude Gibbs’ oscillations of the order of the grid scale. Very little additional energy is added at each subsequent refinement of the computational mesh, however, indicating that the solution is nevertheless well converged.
Another useful method by which a turbulent flow may be assessed is the two-point spatial correlation. Pope (Reference Pope2000) defines the normalized two-point correlation, $f$, of a fluctuating quantity, $g$, in one dimension as
where $r$ is the correlation distance and $\langle {\cdot } \rangle$ indicates an average over all $x$. Similar definitions exist for data with greater than one dimension, in which case the correlation is instead a function of the radial distance from the point $\boldsymbol {x}$.
The two-point spatial correlation is calculated in the present work in the horizontal ($x$–$y$) axis as well as the vertical ($z$) axis to examine the difference in length scales in the axis perpendicular to and aligned with gravity. In the horizontal case, the two-point correlations are calculated using $x$–$y$ planes at each $z$ position. The correlations are averaged across all $z$ locations where ${\tilde {Y}}_2 \geq 0.1$ to represent the state of the middle of the mixing layer, and to exclude the influence of intermittency of the large structures at the edge of the layer (Watanabe & Gotoh Reference Watanabe and Gotoh2004; Boffetta et al. Reference Boffetta, Mazzino, Musacchio and Vozella2010). In the vertical case, the two-point correlations are computed along $z$ at each $x,y$ location, and the resulting correlations are averaged together across all $x$ and $y$.
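For concreteness, a minimal sketch of the two-point correlation calculation along a periodic (horizontal) direction is given below; this is illustrative only, and the vertical correlations described above, which are not periodic, would instead require one-sided shifts along $z$ before averaging over $x$ and $y$.

```python
import numpy as np

def two_point_correlation(g, axis, n_shifts):
    """Normalized two-point autocorrelation f(r) = <g(x) g(x + r)> / <g g>
    of a fluctuating field g along one axis, assuming periodicity in that
    direction; the average <.> is taken over all remaining indices."""
    denom = np.mean(g * g)
    return np.array([np.mean(g * np.roll(g, -r, axis=axis)) / denom
                     for r in range(n_shifts)])
```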
Two-point correlations are calculated utilizing the Favre fluctuations of the three species fractions, $Y''_1$, $Y''_2$ and $Y''_3$, as well as the fluctuating part of the vertical velocity, $u''_z$, as an approximation to the TKE. The results of these correlations are shown in figure 13. It can be observed that the correlations of the three species fractions are generally similar to each other in both axes. The vertical velocity correlation, on the other hand, demonstrates slightly different behaviour than the species mass fraction correlations. This is particularly notable in the vertical axis, where the velocity remains correlated for a longer distance than the species mass fractions do. The velocity correlation in the horizontal axis, however, appears to demonstrate a similar correlation distance to the species mass fractions. The change in the correlation distance of $u''_z$ between the horizontal and vertical axes is consistent with the anisotropy observed in figure 10. Additionally, the difference in the vertical correlation distances of the velocity and mass fractions is unsurprising as the fluctuations in mass fraction will vanish rapidly outside the mixing layer, whereas the velocity fluctuations are not constrained to vanish outside the mixing layer and will decay more slowly. This also explains why the vertical correlation distance of the species mass fractions is approximately one half of the mixing layer width, $h_c = 11.26\, {\rm cm}$, at this time.
The Reynolds number calculated in § 3.3.2, as well as the spectra and two-point correlations described in the previous paragraphs, can now be used to estimate the size of various turbulent length scales utilizing the methods described in Appendix B. These results are summarized in table 3. In general, the integral length scale values are slightly less than the value estimated through the Reynolds number; only the estimate of the vertical integral length scale based on the two-point velocity correlation is greater than this estimate, and it is also the greatest estimate of the integral length scale overall. The size of the integral length scale determined through the spectrum generally agrees well with the sizes found using the vertical two-point correlation. The integral length scales based on mass fraction are all smaller than the integral length scale estimates based on velocity. Also of note is that the integral length scale based on two-point correlation of the vertical velocity increases from the horizontal to the vertical axis, but the opposite behaviour is observed in the integral length scale based on the two-point correlation of species mass fraction.
(Table 3, footnote $a$: value based on vertical velocity, $u''_z$, rather than TKE.)
The Taylor microscales from the present simulations tend to be smaller than the estimates based on the Reynolds number or the experiment, though it should be noted that the vertical Taylor microscale based on velocity is greater than the values estimated from the Reynolds number or from the experiment. This is similar to the behaviour of the vertical integral length scale based on velocity noted in the previous paragraph. Also of note is that all of the estimates of the Taylor microscale increase in size from the horizontal to the vertical axis. The opposite trend was observed in the integral length scale based on species mass fraction, which decreases in size from the horizontal to the vertical axis. Thus, the integral scales associated with species mass fractions are greater in the horizontal axis than in the vertical axis, while the Taylor microscale is smaller in the horizontal axis than in the vertical axis.
Finally, good agreement is found between the size of the smallest scales of the flow found based on dissipation, the estimates of those scales based on the Reynolds number and the reported values from experiment. The size of the Kolmogorov scale is between $3.2$ and $3.5$ grid cells in length on the $R_{24}$ mesh, indicating that these simulations fully resolve the flow down to the Kolmogorov scales of motion. Similarly, good agreement is found for the estimate of the Batchelor length scale. This scale is between $6.5$ and $6.9$ grid cells in size on the $R_{24}$ mesh depending on the method used to find the scale size, similarly indicating that these simulations fully resolve the scalar field down to the smallest motions of the flow.
4.2. Statistical description of three-component mixing
Finally, it is worthwhile to examine statistical descriptions of the mixing in this three-layer configuration as this can be useful for formulating models to describe this class of flows. Probability density functions have been used to describe mixing in turbulent flows (Pope Reference Pope1985; Juneja & Pope Reference Juneja and Pope1996; Wilson & Andrews Reference Wilson and Andrews2002; Sawford & de Bruyn Kops Reference Sawford and de Bruyn Kops2008; Cai et al. Reference Cai, Dinger, Li, Carter, Ryan and Tong2011). Approaches utilizing p.d.f.s have also proven to be useful in the modelling of turbulent mixing. Two-component mixing can reasonably be modelled using a beta distribution to describe the mixture fraction p.d.f. (Girimaji Reference Girimaji1991; Cook & Riley Reference Cook and Riley1994; Ihme & See Reference Ihme and See2011; Ristorcelli Reference Ristorcelli2017). The treatment of more than two components is more complicated, however, and less analysis on potentially suitable p.d.f.s has been conducted.
Perry & Mueller (Reference Perry and Mueller2018) consider four unique p.d.f.s to describe three-component mixing, including the three-parameter Dirichlet distribution, the four-parameter Connor–Mosimann (CM) distribution (Connor & Mosimann Reference Connor and Mosimann1969), a five-parameter bivariate beta (BVB5) distribution (Doran Reference Doran2011) and a six-parameter bivariate beta (BVB6) distribution (Perry & Mueller Reference Perry and Mueller2018). They also consider the influence of the neutrality of these p.d.f.s, that is, the sensitivity of these distributions to whether there is favoured mixing between different pairs of components, and present variations of each p.d.f. that allow for neutrality of different mixing components. They also present a set of recommendations to guide the most appropriate choice of p.d.f. for a given configuration. More details on the p.d.f.s considered in the present work are presented in Appendix C. Notably, these analyses considered isotropic turbulence, while the present configuration is not isotropic. It is therefore useful to consider the p.d.f.s suggested by Reference Perry and MuellerPerry & Mueller to examine how well they may describe this flow. Thus, this section will focus on determining the p.d.f. of species mass fraction from the DNS data, and comparing against the model p.d.f.s proposed by Reference DoranDoran and Reference Perry and MuellerPerry & Mueller.
An approximate p.d.f. representing the DNS data is found by first calculating the histogram of the DNS data by binning according to the amount of $Y_1$ and $Y_2$ in each zone on the computational mesh, noting that $Y_3 = 1 - Y_1 - Y_2$. The histogram is then normalized by $\iint C(Y_1, Y_2) \,{\rm d}Y_1 \,{\rm d}Y_2$, where $C$ is the number of samples in each bin, to obtain an approximate p.d.f. of the data.
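A minimal sketch of this binning and normalization step is shown below; it is not the analysis code used in the present work, and the bin count is an illustrative choice.

```python
import numpy as np

def approximate_pdf(Y1, Y2, nbins=64):
    """Approximate joint p.d.f. P(Y1, Y2) of the two independent mass
    fractions (Y3 = 1 - Y1 - Y2): bin every zone, then normalize so the
    p.d.f. integrates to unity over the (Y1, Y2) plane."""
    C, e1, e2 = np.histogram2d(Y1.ravel(), Y2.ravel(),
                               bins=nbins, range=[[0.0, 1.0], [0.0, 1.0]])
    dY1, dY2 = e1[1] - e1[0], e2[1] - e2[0]
    P = C / (C.sum() * dY1 * dY2)   # divide by the integral of the counts
    return P, e1, e2
```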
The parameters for each model distribution are calculated by first finding the values of the moments to be enforced from the DNS data. A nonlinear least squares minimization routine is used to minimize the $L_2$ norm of the difference between the values of the means and covariances from the data and p.d.f. descriptions. The error associated with a given moment is calculated as $M_{pdf}/M_{DNS} - 1$, where $M$ is the enforced moment and the subscripts indicate the source of the moment value. This method is chosen to ensure that all enforced moments are equally weighted regardless of their relative magnitudes. In all cases, the first moments ${\tilde {Y}}_1$ and ${\tilde {Y}}_2$ are enforced. In the cases where a p.d.f. cannot specify all of the unique first and second moments simultaneously, such as with the Dirichlet and CM distributions, the moments to be enforced must be selected. For the Dirichlet distribution, the ${\widetilde {Y''_2 Y''_2}}$ variance is additionally enforced. For the CM distributions, the ${\widetilde {Y''_1 Y''_2}}$ and ${\widetilde {Y''_2 Y''_3}}$ covariances are the second moments that are enforced, with these chosen due to their relationship to the $Y_2$ variance used for the Dirichlet distribution while retaining information on the covariances (Morgan Reference Morgan2022b). Finally, the BVB6 distribution requires enforcement of a third moment and ${\widetilde {Y''_1 Y''_2 Y''_3}}$ is chosen for this purpose.
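As an illustration of this moment-matching procedure, the sketch below fits the three Dirichlet parameters to the enforced moments using relative-error residuals; the initial guess and bounds are assumptions, and the same pattern extends to the other distributions with their respective enforced moments.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_dirichlet(Y1_mean, Y2_mean, Y2_var):
    """Fit Dirichlet parameters (a1, a2, a3) by enforcing the means of Y1 and
    Y2 and the variance of Y2, with each residual written as the relative
    error M_pdf / M_DNS - 1 so that all moments are weighted equally."""
    def residuals(a):
        a0 = a.sum()
        mean = a / a0
        var2 = a[1] * (a0 - a[1]) / (a0**2 * (a0 + 1.0))
        return [mean[0] / Y1_mean - 1.0,
                mean[1] / Y2_mean - 1.0,
                var2 / Y2_var - 1.0]
    sol = least_squares(residuals, x0=np.ones(3), bounds=(1e-8, np.inf))
    return sol.x
```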
Each model p.d.f. is evaluated on five subsets of data within the simulation at a non-dimensional time of $\tau =8.2$. The results of these evaluations are shown in figure 14. Each row in this figure corresponds to one subset of data, with the text at the left of the row indicating which subset is shown in that row.
The first subset corresponds to a single $x$–$y$ plane of data at $z=1\, {\rm cm}$, with this location chosen to be at the approximate location of the maximum value of ${\tilde {Y}}_2$ at this time. The approximate p.d.f. arising from the DNS data, as well as each of the candidate distributions, is calculated according to the method described above. The result of this process is shown in figure 14(a). Generally, it appears that the BVB5, particularly $-$12 and $-$23, and BVB6 distributions perform best for this case. The Dirichlet and CM-1 distributions appear to approximately capture the $Y_3 \approx 0$ behaviour in the data, though they do not qualitatively capture the shape of the distribution. Finally, the CM-2 and CM-3 distributions do not appear to describe this case well, with the generated p.d.f.s having little similarity to the DNS data. It should be noted that the CM-2 and CM-3 cases also have a relatively high $L_2$ error, suggesting that they were also unable to accurately capture all of the enforced moments.
Second, each of these p.d.f.s is evaluated across the entire mixing layer where ${\tilde {Y}}_2 \geq 0.01$. Each p.d.f., as well as the approximate p.d.f. from the DNS data, is generated at each $z$ position within the layer using the procedure outlined for the single plane data. Each of the p.d.f.s is then summed over the mixing layer and the result renormalized such that $\iint P(Y_1, Y_2) \,{\rm d}Y_1 \,{\rm d}Y_2 = 1$. In this case, the Dirichlet distribution appears to describe the data reasonably well, as do the BVB5 and BVB6 distributions. None of the CM distributions appear to describe this data well. Only the CM-1 distribution roughly captures the shape of the data, though with notable differences, while the CM-2 and CM-3 cases bear little resemblance to the data.
The last three subsets of data to be considered here arise by noting that the heavy-light-heavy configuration considered in the present work suggests that the mixing, particularly in terms of the distribution of mixing components, may be different depending on the location within the mixing layer. Figure 15 depicts $x$–$y$ planes of the mixing layer at $z=-2\, {\rm cm}$, $z=1\, {\rm cm}$ and $z=8\, {\rm cm}$, with the red, green and blue values in the image set by the normalized mass fraction of each of the three species present in a given zone. These planes of data demonstrate that different relative proportions of $Y_1$, $Y_2$ and $Y_3$ are present depending on the position within the mixing layer, which suggests the p.d.f. that best describes the data may also be a function of position within the mixing layer. To examine this possibility, the mixing layer is broken into three regions representing the upper, middle and lower portions of the layer, and the model p.d.f.s are evaluated in each of these regions. The upper part of the mixing layer is defined as the region where $0.01 \leq {\tilde {Y}}_2 < 0.1$ and $z \geq z_{max}$, where $z_{max}$ is the location of the maximum value of ${\tilde {Y}}_2$. The middle part of the mixing layer is defined as the region where ${\tilde {Y}}_2 \geq 0.1$. Finally, the lower part of the mixing layer is defined as $0.01 \leq {\tilde {Y}}_2 < 0.1$ and $z \leq z_{max}$. These comparisons are depicted in figure 14(c–e).
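The decomposition of the layer into these three regions can be sketched as follows; the profile arrays are placeholders, and the thresholds are those quoted above.

```python
import numpy as np

def mixing_layer_regions(Y2_tilde, z):
    """Masks for the upper, middle and lower parts of the mixing layer based
    on the averaged middle-layer mass fraction profile Y2_tilde(z) and the
    location of its maximum, using the 0.01 and 0.1 thresholds above."""
    z_max = z[np.argmax(Y2_tilde)]
    band = (Y2_tilde >= 0.01) & (Y2_tilde < 0.1)
    upper = band & (z >= z_max)
    middle = Y2_tilde >= 0.1
    lower = band & (z <= z_max)
    return upper, middle, lower
```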
All of the candidate p.d.f.s struggle to describe the upper layer data, with qualitative disagreements observed between the DNS data and each of the p.d.f.s. The BVB5-12 and BVB5-13 distributions appear to best capture the $Y_3\approx 0$ behaviour from the data, though these p.d.f.s also predict a greater amount of $Y_2$ than is observed and underpredict $Y_1$. The BVB6 distribution also appears to capture the general trends of the data, though it appears to overpredict the amount of $Y_3$ present in the data. The CM-1 distribution also qualitatively captures the data trends, though again appears to overestimate the amount of $Y_3$ and underestimate the amount of $Y_2$ present in the data, as well as overpredicting the variance in $Y_1$.
The middle layer data appears to be reasonably described by the Dirichlet distribution, as well as all of the BVB5 and BVB6 distributions. The Dirichlet distribution appears to overestimate the amount of $Y_1$ present, while the BVB5 and BVB6 distributions appear to better match this behaviour. The CM-1 distribution does not appear to predict the data well, generally overpredicting $Y_1$ and $Y_3$, though it appears to capture the range of $Y_2$ relatively well. The CM-2 and CM-3 distributions better capture the range of $Y_1$ and $Y_2$, though they appear to predict more variance in $Y_3$, with more probability at $Y_3=0$ and $Y_3=1$ than is present in the data.
Finally, the lower layer data appears to be best described by the BVB5 and BVB6 distributions, with the BVB5-13 and BVB5-23 distributions matching particularly well. The CM-3 distribution also appears to do well in describing the DNS data. The Dirichlet and CM-1 distributions appear to capture the location of maximum probability relatively well, though they do not appear to capture the shape of the distribution. Lastly, the CM-2 distribution does not appear to accurately describe the data.
In summary, these results suggest that there is considerable complexity in describing three-component mixing in a non-isotropic mixing layer using p.d.f.s. First, the Dirichlet distribution does not appear to accurately describe this case in general, suggesting that the marginal distributions of this case do not conform to a beta distribution. This represents a notable change from the two-component case where a beta distribution has been previously shown to work well. Second, the BVB5 and BVB6 p.d.f.s generally appear to perform the best overall of all the model p.d.f.s considered. Within the BVB5 case, the location within the mixing layer appears to influence which neutral permutation ($-$12, $-$13 or $-$23) best matches the data. This is likely related to the sign change of ${\widetilde {Y''_1 Y''_2}}$ observed in figure 11. Some of the model distributions require that some or all of the covariances are negative (Perry & Mueller Reference Perry and Mueller2018). Furthermore, the neutral variations of each model p.d.f. alter which covariances are required to be negative, and so different neutral variations of the same p.d.f. may also better match the data depending on the location within the mixing layer. These results also do not consider the influence of the choice of enforced means and covariances, and the optimal choice of means and covariances may also be a function of location within the mixing layer. Finally, it is also noteworthy that no single p.d.f. appears to be able to accurately describe the mixing through the entire mixing layer. This suggests that a p.d.f.-based model to describe three-component RT mixing must consider not only which p.d.f. best describes the data, but also the neutrality of that p.d.f. and how these choices should change through the mixing layer.
5. Results: an improved model for the impact of turbulence on TN reaction rate
In previous work by Morgan (Reference Morgan2022b), a model was developed to predict the impact of turbulence on average reaction rate in a reacting N-component mixture. This 2022 model was developed to be used in conjunction with RANS models in which reactant mass fraction covariances $\mathcal {C}_{\alpha \beta } \equiv \widetilde {Y''_\alpha Y''_\beta }$ are transported as a model turbulence variable. A shortcoming of the 2022 model, however, is that it only considers second-moment closures in the expansion of the average reaction rate, and higher-moment contributions are neglected. As a result of this simplification, it can be shown that the 2022 model does not reproduce the correct physical behaviour in the so-called ‘no-mix’ limit of perfectly segregated materials. Consider the instantaneous rate of reaction for the production of species $\gamma$ from the binary reaction of species $\alpha$ and $\beta$,
In (5.1) the instantaneous reaction rate $\dot {r}_{\gamma,\alpha \beta }$ is given in terms of the reaction cross-section, $\langle \sigma v\rangle _{\alpha \beta }$, and the particle number densities, $n_{\alpha }$ and $n_{\beta }$. The expressions $m_\alpha n_\alpha = \rho Y_\alpha$ and $m_\beta n_\beta = \rho Y_\beta$ are then utilized to relate number densities to mass fractions through the species masses $m_\alpha$ and $m_\beta$. Substituting these relations into (5.1) and performing a Reynolds decomposition then expands (5.1) as
Note that in deriving (5.2), a homogeneous $\langle \sigma v\rangle _{\alpha \beta }$ has been assumed. Since TN reaction rates are typically strong functions of temperature, this approximation will only be valid for reactions in a homogeneous temperature field, such as within a mixing layer with tight coupling to the radiation field. Such coupling might be expected in igniting ICF capsules and in other situations with high gas opacity such as the argon–tritium filled capsules of the MARBLE campaign (Albright et al. Reference Albright2022). Recall that it is assumed that a closure model for the fluctuating moments in (5.2) is to be used in conjunction with a RANS model that solves a transport equation for the mass fraction second moments. Thus, to derive an improved reaction rate model, the remaining task is to close the moments in (5.2) in terms of the mass-fraction covariances and lower-order moments. In this regard, it is relatively straightforward to show for a variable-density, N-component mixture
and
To close the remaining moments, the approach of Ristorcelli (Reference Ristorcelli2017) is adopted in which higher-order moments are approximated using products of lower-order moments and then scaled to conform to the no-mix limit. For instance,
and
In (5.5) and (5.6), the notation $({\cdot })_{nm}$ is used to indicate the no-mix value of a given statistic, which can be derived exactly, even for higher-order moments such as those in (5.5) and (5.6). Appendix D discusses how these moments are derived and provides expressions for the no-mix statistics appearing in this section.
Through extensive algebra, it is possible to write the density variance term in (5.2) as
where $v \equiv \rho ^{-1}$ is the specific volume and $b$ is the density-specific-volume covariance,
One approach to closing the density variance that would follow the beta p.d.f. approach of Ristorcelli (Reference Ristorcelli2017) would be to assume the mixture conforms to a Dirichlet distribution, which then would allow the third moments $\widetilde {Y_iY_jY_k}$ to be expressed in terms of lower-order moments through the Dirichlet distribution moment generating function. However, as shown in § 4.2, this p.d.f. may not accurately describe the mixing in this case, and in fact the correct choice of p.d.f. may be quite complicated. Additionally, the fourth moment in (5.7) would then still require further closure in a manner similar to (5.6). An alternative approach that does not require an assumption about the form of the mixing distribution would be to simply approximate
Equation (5.9) is motivated by the observation that for an incompressible, variable-density mixing layer, the non-dimensional density variance is simply approximated by $b$. Indeed, this simple approximation is utilized to close density variance in the 2022 model by Morgan (Reference Morgan2022b). Additional scaling in (5.9) is included to ensure the model conforms to the no-mix limit. For the results presented in the remainder of this work, (5.9) will be the closure utilized. Then, the model can be completed by writing the final third-moment closure:
Equations (5.2), (5.3), (5.4), (5.5), (5.6), (5.9) and (5.10), along with the no-mix relations contained in Appendix D thus constitute a complete and improved model for the impact of turbulence on TN reaction rate that is expected to reproduce the correct physical behaviour in the no-mix limit. In the no-mix limit, mixing components are considered completely separated, no molecular mixing has occurred and mass fraction covariances achieve maximum magnitude, $(\mathcal {C}_{\alpha \beta })_{nm} = -\tilde {Y}_\alpha \tilde {Y}_\beta$. To illustrate expected behaviour and how this represents an improvement over previously proposed models, consider that a general reaction rate model may be written as
In (5.11), $R_{model}$ indicates the modification to the TN reaction rate provided by the model, while $R_{atomic}$ is the reaction rate that would be computed for fully atomically mixed materials in which the magnitude of turbulent statistics in (5.2) is identically 0. Furthermore, for binary premixed reactants in a light material, the no-mix limit of the reaction rate can be expressed in terms of the heavy and light volume fractions $\bar {V}_\alpha$,
Figure 16 compares behaviour of the new model against older models for a hypothetical $At=0.4$ mixture in the no-mix limit. In this figure, $R_{max}$ is the average reaction rate predicted using each model with the mass fraction covariance set to its no-mix value, $\mathcal {C}_{\alpha \beta } = (\mathcal {C}_{\alpha \beta })_{nm}$. As observed in figure 16(a), the new model appropriately goes to $R_{nm}$ for premixed reactants at all magnitudes of light mass fraction, while the 2022 and 2018 models are shown to diverge, particularly as $\tilde {Y}_L$ approaches 0. In figure 16(b) the new model is additionally shown to reduce the reaction rate to zero for completely separated non-premixed reactants, while the older models show non-physical divergence away from zero.
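A minimal numerical check of the no-mix covariance limit quoted above is given below; the density and mass fraction values are contrived for illustration only.

```python
import numpy as np

def favre_covariance(rho, Ya, Yb):
    """Density-weighted (Favre) covariance of two mass fraction fields."""
    rbar = rho.mean()
    Ya_t = (rho * Ya).mean() / rbar
    Yb_t = (rho * Yb).mean() / rbar
    return (rho * Ya * Yb).mean() / rbar - Ya_t * Yb_t

# Perfectly segregated (no-mix) ensemble: each zone holds a single material,
# so Ya * Yb = 0 everywhere and the covariance reaches -Ytilde_a * Ytilde_b.
rho = np.array([1.0, 1.0, 3.0, 3.0, 2.0])
Y1 = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
Y2 = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
rbar = rho.mean()
Y1_t = (rho * Y1).mean() / rbar
Y2_t = (rho * Y2).mean() / rbar
assert np.isclose(favre_covariance(rho, Y1, Y2), -Y1_t * Y2_t)
```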
5.1. Evaluation of closure relations
Figure 17 presents a comparison of each of the higher-order terms in (5.2) between data extracted from the present DNS calculations and the model proposed in the previous section. The circles indicate the value of each term as found from the DNS data, and the solid lines refer to the value as calculated from the proposed closures as shown in (5.5), (5.6), (5.9) and (5.10). Each term is plotted with a constant colour, with red, green, blue, cyan, magenta, orange and black referring to terms 3, 4, 5, 6, 7, 8 and 9 in (5.2), respectively. For plotting, each term is pre-multiplied by a factor of ${\bar {\rho }}^2 {\tilde {Y}}_\alpha {\tilde {Y}}_\beta$, representing the leading factor in (5.2), and additionally pre-multiplied by another factor of ${\tilde {Y}}_\alpha {\tilde {Y}}_\beta$ to restrict the model comparison to the regions where the two fluids are present. These comparisons are presented for all combinations of $\alpha$ and $\beta$ that result in a unique comparison.
The comparison presented in figure 17 is generally favourable, with good agreement for the largest terms in each case observed between the DNS and model closures, particularly in the $\alpha =\beta =1$ and $\alpha =\beta =3$ comparisons that represent the greatest contributions to the total reaction rate. The most notable disagreement between the DNS data and the proposed model closures is observed in the orange line, representing the ${\overline {\rho ' Y''_\alpha Y''_\beta }}$ term, with this disagreement particularly visible in the $\alpha =1,\beta =3$ and $\alpha =\beta =2$ comparisons. However, it should be noted that this term is generally small compared with the largest terms in each comparison, and so the influence of this disagreement is generally small. One exception is the $\alpha =1$, $\beta =3$ case, where the model significantly overpredicts ${\overline {\rho ' Y''_\alpha Y''_\beta }}$. This suggests that the greatest model error may be for the non-premixed case of separated reactants in materials one and three. However, as will be shown in § 5.2, good agreement in the reaction rate between the model and the DNS data is found for this case, suggesting that the influence of this disagreement is small.
5.2. Model/DNS comparison
The simulation presented in §§ 3 and 4 is now used to assess the model proposed in § 5. This exercise is performed by exporting the full simulation state of the run on the $R_{24}$ mesh from Miranda at a time corresponding to $\tau =8.2$. This time is chosen because earlier times in the simulation do not have enough three-component mixing to assess the multicomponent aspects of the model, while at later times the middle layer is significantly mixed through, leaving the layer too diffuse to adequately test the no-mix limit behaviour of the model and making it a poor test case for verifying that the model functions as expected for multiple components. The chosen time instant therefore corresponds to significant, though not complete, mixing of the three layers, such that the model's behaviour in the presence of multiple components may be suitably tested.
The simulation state is imported into Ares, where several modifications are performed to transform it into a test case for reacting turbulence. First, the species mass fractions, densities and mesh resolution in the second stage are kept identical to their values in the first stage. Next, the material compositions and equations of state are changed from ideal gas properties to more closely represent materials found in ICF capsules. Specifically, two different configurations are considered. The first, called the ‘premixed’ configuration, involves the upper and lower layers of fluid being replaced with non-reactive CH plastic, and the middle layer being replaced with a DT mixture. This is equivalent to the $\alpha =2$, $\beta =2$ case in figure 17. The second, called the ‘non-premixed’ configuration, entails the upper layer being replaced with CD, the middle layer replaced with non-reactive CH plastic and the bottom layer replaced with tritium gas. This is equivalent to the $\alpha =1$, $\beta =3$ case in figure 17. Finally, hydrodynamic evolution is disabled, the temperature in the problem is set to a uniform $5\,{\rm keV}$ and TN burn physics are enabled.
As noted in § 5, the model considered in this work assumes a homogeneous temperature field. To enforce this constraint, the burn phase of these calculations is run twice. The first run is conducted without any constraints in order to obtain a realistic temperature history for the problem. These temperature profiles are shown in figure 18. Once the mass-weighted average temperature as a function of time is obtained, the problem is then run a second time with this temperature profile enforced uniformly over the whole domain as a function of time. Both the DNS calculation as well as the RANS model calculation are run using an identical temperature profile to compare results.
The number of moles of TN neutrons produced as a function of time from the DNS data as well as the RANS model is plotted in figure 19 for the (a) premixed and (b) non-premixed configurations. To examine the influence of the improved RANS model, the RANS portion of the calculation is run with no model active, representing the expected result for a fully atomically mixed layer, with the model introduced by Morgan (Reference Morgan2022b) and with the model proposed in § 5. As can be observed in these figures, the improved model significantly reduces the RANS model error with respect to the fully mixed (i.e. ‘no model’) case. Relatively little difference between the 2022 model and newly proposed model is observed in the non-premixed case, with both models agreeing quite closely with the DNS data. The newly proposed model demonstrates a clear improvement over the 2022 model in the premixed case, however, with a clear convergence towards the DNS data observed between the no model, 2022 model and newly proposed model cases. The mixing layer is fairly well mixed at this time, and so the source of the improved agreement is most likely the retention of the previously neglected higher-order terms rather than the preservation of the no-mix limit. Greater differences between the 2022 model and the newly proposed model would be expected in a problem with less diffusive mixing (i.e. closer to the no-mix limit), as shown in figure 16. The newly proposed model would also most likely demonstrate better agreement with the data than the 2022 model in this case due to the new model's ability to return to the correct behaviour in this limit.
6. Conclusions
The present work has performed a DNS of a three-layer Rayleigh–Taylor mixing problem. The direct numerical simulation was validated through a rigorous convergence study, examination of artificial contributions to fluid parameters and length scale analysis. While simulations of the three-layer RTI have been performed previously (Youngs Reference Youngs2009, Reference Youngs2017), to the best of the authors’ knowledge, the present work represents the first examination of the physics of the three-layer RTI through DNS.
The mixing layer width was found to grow linearly in time beginning at a non-dimensional time of $\tau \approx 11$. Linear growth of an RTI mixing layer in the configuration considered in the present work was predicted by the theory of Jacobs & Dalziel (Reference Jacobs and Dalziel2005) and measured experimentally by Jacobs & Dalziel (Reference Jacobs and Dalziel2005) and Suchandra & Ranjan (Reference Suchandra and Ranjan2023). To the best of the authors’ knowledge, the present simulations represent the first time this has been measured in simulation. The present simulations find a slope of linear growth of $\gamma =0.46 \pm 0.006$, which is in agreement with the value of $\gamma =0.49\pm 0.03$ found by Jacobs & Dalziel (Reference Jacobs and Dalziel2005), and slightly greater than the value of $0.41 \pm 0.01$ found by Suchandra & Ranjan (Reference Suchandra and Ranjan2023).
An interesting change in the behaviour of the mass fraction covariances was observed for this three-layer case. Specifically, the ${\widetilde {Y''_1 Y''_2}}$ covariance was observed to undergo a sign change through the mixing layer. This is a behaviour that is not observed in a two-component mixing layer and represents an important way that the three-layer case differs from the two-layer one. This occurs due to the fact that, while an increase in one species must mean a corresponding decrease in the other for a two-species problem, such a constraint does not exist for more than two species. This may have several implications, perhaps most notably for models describing three-layer mixing such as the case considered here. Models to describe this case must account for the potential sign change of the covariances in order to accurately model the mixing process.
Two-point autocorrelation-based length scales were estimated in the vertical and horizontal directions separately to examine the differences in length scales in the axes aligned with, and perpendicular to, the direction of gravity. Notably, while the velocity-based integral length scales demonstrate a clear anisotropy, with the vertical length scale larger than the horizontal one as may be expected for a Rayleigh–Taylor driven mixing layer, such an anisotropy was not observed in the species fraction fields. In fact, the species fraction-based integral length scales were generally smaller in the vertical axis than the horizontal, which is a notable difference from the velocity-based length scale. The size of the velocity-based Taylor microscale demonstrates a similar anisotropy as the integral length scale. The species fraction-based Taylor microscales, however, are generally either similar in size or slightly larger in the vertical axis than the horizontal one, representing a notable change from the integral scale behaviour. This suggests that models for configurations such as the one considered in this work should account for these differences, and for how the field and scale size considered change the horizontal-to-vertical anisotropy.
Potential statistical descriptions of three-component, non-isotropic turbulent flow at several regions within the mixing layer were also considered based on the model p.d.f.s presented by Perry & Mueller (Reference Perry and Mueller2018). Interestingly, the Dirichlet distribution did not appear to accurately describe the data, suggesting that the three-layer case does not have marginal beta distributions. This represents a notable change from the two-layer case where a beta distribution has been shown to work well (Girimaji Reference Girimaji1991; Cook & Riley Reference Cook and Riley1994; Ihme & See Reference Ihme and See2011; Ristorcelli Reference Ristorcelli2017). Generally, a five- or six-parameter bivariate beta distribution was found to be necessary to reasonably describe the flow over multiple regions of the mixing layer, in agreement with the decision tree proposed by Reference Perry and MuellerPerry & Mueller. The BVB6 distribution may describe this flow best of all p.d.f.s considered in the present work, but the requirement of enforcing a third moment limits its utility for modelling purposes and adds additional complexity in terms of the correct third moment to enforce.
In general, the model p.d.f. that most accurately describes the probability distribution of mixing components appears to be influenced by not only the problem configuration, but also other factors such as the location within the mixing layer and the ability for different components to preferentially mix. This suggests that the most appropriate choice of p.d.f. to model the distribution of species concentration in a non-isotropic turbulent mixing problem, such as an RTI mixing layer, is complicated. The decision tree for selecting a suitable model p.d.f. a priori as proposed by Reference Perry and MuellerPerry & Mueller appears effective in general; however, they do not similarly suggest a method to select a neutral permutation of that p.d.f. This is particularly relevant to the present results, as the neutrality of the species concentration distribution appears to change throughout the mixing layer. Qualitative arguments based on physical intuition are useful to select a neutral variation of a p.d.f. in general, but a quantitative method to select a distribution and neutral permutation either a priori or as a function of the first and second moments of the simulation, as would be required for modelling efforts, is not apparent from the present results.
Finally, an improved model for the influence of turbulent mixing on average reaction rates, based on the models of Ristorcelli (Reference Ristorcelli2017) and Morgan (Reference Morgan2022b), was presented. Notably, this model allows for an arbitrary number of mixing components and also preserves the no-mix limit, representing improvements over these previous models. A single time instant from the present simulations corresponding to a non-dimensional time of $\tau =8.2$ was selected to assess this model. Comparison between the model closures and DNS data demonstrates good agreement, indicating that the model closures reasonably approximate the DNS data. The configuration studied in the first part of the present work was transformed into an ICF-relevant configuration in order to directly compare the reaction rates from the DNS against the model predictions. Two configurations were considered, corresponding to a ‘premixed’ and a ‘non-premixed’ configuration. The improved model demonstrated greatly improved agreement with the DNS results in both configurations compared with RANS simulations that neglect the impact of mixing heterogeneity on the average reaction rate. Improvement over the model of Morgan (Reference Morgan2022b) was also observed, particularly in the premixed configuration. The present work considers a relatively well-mixed case with significant diffusive mixing, and so the observed improvements over the 2022 model are attributed to previously neglected higher-order terms being retained in the new model, rather than to the preservation of the no-mix limit. The influence of preserving the no-mix limit is expected to be more significant in cases with less diffusive mixing.
Acknowledgements
The authors would like to acknowledge J. Grandy, J. Greenough, B. Olson, B. Pudliner, P. Rambo, O. Schilling and other colleagues at LLNL for their useful assistance, input and discussions in the preparation of this work. They would also like to acknowledge P. Suchandra and Q. Dzurny at Georgia Institute of Technology for their assistance in providing the experimental data used for validation, as well as M. Mueller at Princeton University for their assistance with the p.d.f.s examined in this work.
Funding
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Declaration of interests
The authors report no conflict of interest.
Appendix A. Numerical methods
A.1. Miranda
Miranda is an artificial fluid LES code used to simulate hydrodynamic evolution in the present work (Cook Reference Cook2007, Reference Cook2009; Cabot & Cook Reference Cabot and Cook2006; Morgan et al. Reference Morgan, Olson, White and McFarland2017). The governing equations solved by Miranda, as well as the spatial and temporal integration schemes utilized, were outlined in § 2.1. This appendix provides additional detail regarding Miranda's numerical methods. As noted in § 2.1, Miranda solves the compressible Navier–Stokes equations for a non-reacting, multicomponent mixture,
where $\rho$ is the density, $t$ is the time, $u_i$ is the velocity along axis $i$, $x_i$ is the spatial coordinate in axis $i$, $Y_\alpha$ is the mass fraction of species $\alpha$, $J_{\alpha,i}$ is the diffusive mass flux of species $\alpha$, $p$ is the pressure, $\sigma _{ij}$ is the viscous stress tensor, $g_j$ is the gravitational body force in axis $j$, $E$ is the total energy and $q_i$ is the heat flux in axis $i$. The diffusive mass flux is given by
for $k = 1, 2, \ldots, N$ total species, where $D_k$ is the Fickian diffusivity. The viscous stress tensor is
where $\mu$ is the shear viscosity, $\beta$ is the bulk viscosity and $\delta _{ij}$ is the Kronecker delta. Here $S_{ij}$ is the strain rate tensor, expressed as
The heat flux vector, $q_i$, is given as
where $\kappa$ is the thermal conductivity and $h_k$ is the enthalpy of species $k$, with $k$ in the range $1, 2, \ldots, N$ for $N$ fluids. The pressure, temperature and enthalpy of each fluid component are obtained using an ideal gas equation of state,
where $c_{v,k}$ is the specific heat at constant volume and $\gamma _k$ is the ratio of specific heats for component $k$. An assumption of pressure and temperature equilibrium between the species allows an iterative process to be used to solve for component volume fractions, $V_k$. This, in turn, allows the determination of partial densities and energies according to
Total pressure is then calculated according to the mixture relationship
The subgrid transfer of energy is modelled using an AFLES approach in which artificial transport terms are added to the fluid viscosity, bulk viscosity, thermal conductivity and molecular diffusivity (Cook Reference Cook2007, Reference Cook2009). These are added according to
where the subscript $f$ denotes the molecular contribution to the property from the fluid and the subscript $a$ denotes the artificial contribution. The present study is focused on a relatively low-Reynolds-number regime and so the fluid contributions to these parameters cannot be neglected. These are found from the properties of the fluids used in this problem together with Miranda's mixture equation of state (Cook Reference Cook2009). The specific values used in the present work are discussed in § 2. The artificial contributions to these terms are computed according to the method described by Campos & Morgan (Reference Campos and Morgan2019) as well as Morgan et al. (Reference Morgan, Olson, Black and McFarland2018a). Each of these terms has a functional form according to
where $\psi$ is the artificial fluid property, $C_\psi$ is a tuning coefficient, $\Delta = (\Delta x \Delta y \Delta z)^{1/3}$ is the local grid spacing, $G$ is an eighth-order derivative such that, for a scalar,
and, for a vector,
The overbar indicates the application of a truncated-Gaussian filter. The values of each of the tuning parameters, as well as $F$ and $\phi$, for each artificial component are outlined in table 4. In this table, $c_v$ is the specific heat at constant volume of the fluid and $\Delta t$ is the time step. This form is chosen to ensure that the artificial terms are biased towards the high-wavenumber components of the flow and have very little influence at resolved scales (Cook et al. Reference Cook, Cabot and Miller2004; Cook Reference Cook2007; Campos & Morgan Reference Campos and Morgan2019).
A.2. Ares
The second stage of the simulations in the present work is performed using Ares. Ares utilizes an arbitrary Lagrangian–Eulerian method with a second-order remap and a second-order predictor–corrector scheme for time integration (Sharp Reference Sharp1978; Darlington et al. Reference Darlington, McAbee and Rodrigue2001). In the present work, Ares is used to simulate the TN burn of the mixing layer in an ICF-relevant configuration. Coupling between hydrodynamics and radiation is performed using a Planckian non-equilibrium radiation diffusion model. In this approach, a single opacity, $\omega$, is used to characterize the energy absorbed from the radiation field, as well as the energy contributed from the material to the radiation field via emission. The radiation energy, $E_r$, is evolved according to
where $c$ is the speed of light in a vacuum, $T_e$ is the electron temperature and $a_r$ is the radiation constant. This is given in terms of the Stefan–Boltzmann constant, $\sigma _{SB}$, by
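$$a_r = \frac{4 \sigma_{SB}}{c}.$$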
Electron and ion energies are allowed to evolve separately. The ion energy, $E_i$, is given by
The electron energy, $E_e$, is given by
where $q_{e,i}$ is the electron heat flux vector and is given in terms of the electron conductivity, $\kappa _e$, by
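$$q_{e,i} = -\kappa_e \frac{\partial T_e}{\partial x_i}.$$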
The ion and electron fields are coupled to the radiation field through the source terms in (A24) and (A25). These source terms are given by
where $K_{ie}$ is the ion–electron coupling coefficient and $T_i$ is the ion temperature; $\dot {Q}_{TN,i}$ and $\dot {Q}_{TN,e}$ are source terms due to the local deposition of energy from TN reactions. The specific heat, electron and ion temperatures are determined from the equation of state. The radiation temperature is related to the radiation energy by
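$$E_r = a_r T_r^{4},$$
where $T_r$ is the radiation temperature.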
Only a single TN reaction is considered in the present work,
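$${\rm D} + {\rm T} \rightarrow {}^{4}{\rm He} + {\rm n}.$$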
The rate of reaction with products $\gamma$ and reactants $\alpha$ and $\beta$ is described by
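$$\frac{\partial n_\gamma}{\partial t} = n_\alpha n_\beta \langle \sigma v \rangle_{\alpha \beta},$$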
where $\langle \sigma v \rangle _{\alpha \beta }$ is the reaction cross-section, and $n_\alpha$ and $n_\beta$ are the particle number densities. The reaction cross-section is interpolated using the TDFv2.3 library (Warshaw Reference Warshaw2001). Additionally, each reaction has an average thermal energy, $Q_{TN}$, which is $Q_{TN} = 17.59\,{\rm MeV}$ for the ${\rm D} + {\rm T}$ reaction considered here. Local deposition of this energy is assumed such that the average thermal energy is removed from the ion energy field. Charged particle energy is deposited in the same volume with a split between the ion and electron energies, with this split determined according to the Corman–Spitzer model (Corman et al. Reference Corman, Loewe, Cooper and Winslow1975). Neutrons are assumed to immediately escape the problem and energy carried by neutron products is removed from the system. Thermal effects and the apportionment of average thermal energy between reactants are determined following the method of Warshaw (Reference Warshaw2001), and the ion–electron coupling coefficient, $K_{ie}$, is determined via the method of Brysk (Reference Brysk1974).
Appendix B. Length scales
Section 4.1 presents the size of various turbulent length scales as determined from the Reynolds number, turbulent spectra and two-point correlations. This appendix outlines the relationships utilized to find these length scales from the simulation data.
The integral length scale is approximated through the Reynolds number found in § 3.3.2 using the relationship provided by Dimotakis (Reference Dimotakis2000) as
The integral length scale may also be found from the spectra of the flow. Pope (Reference Pope2000) shows that the size of the integral length scale for a quantity $f$ may be found from the spectrum of $f$ as
Notably, this definition allows for determination of integral length scales based on the spectra of TKE, $Y''_1$, $Y''_2$ and $Y''_3$. Finally, Pope also notes that the integral length scale is related to the two-point correlation through
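$$\varLambda = \int_0^\infty f(r) \, {\rm d}r,$$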
where this length scale can be determined for $u_z''$, $Y''_1$, $Y''_2$ and $Y''_3$ in directions both aligned with and perpendicular to gravity.
The size of the Taylor microscale may similarly be estimated from the results presented in the previous sections in several ways. Pope (Reference Pope2000) provides a relationship between the Reynolds number and the Taylor microscale as
Pope also notes that the two-point spatial correlations may be related to the size of the Taylor microscale through
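$$\lambda = \left[-\frac{1}{2}\,\frac{{\rm d}^2 f}{{\rm d}r^2}(0)\right]^{-1/2},$$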
where $f$ is the correlation function found using (4.3); this expression corresponds to the zero crossing of a parabola osculating $f(r)$ at $r=0$, with a value of $f(0)=1$ and a slope $({{\rm d}f}/{{\rm d}r}) (0) = 0$.
Finally, the smallest scales of a turbulent flow may also be estimated from the present simulations. For velocity fields, this scale is the Kolmogorov scale and, for scalar fields, this scale is the Batchelor scale (Batchelor Reference Batchelor1959; Kolmogorov Reference Kolmogorov1991). Dimotakis (Reference Dimotakis2000) provides an estimate of the Kolmogorov scale from the Reynolds number as
The work of Dimotakis does not provide a similar estimate for the Batchelor length scale based on the Reynolds number. However, the Batchelor scale may be estimated from the Kolmogorov scale through the relationship
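$$\lambda_B = \lambda_\eta \, Sc^{-1/2},$$
where $Sc = \nu / D$ is the Schmidt number.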
The Kolmogorov length scale may also be found from the dissipation rate, and this is defined by Pope (Reference Pope2000) as
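$$\lambda_\eta = \left(\frac{\nu^3}{\epsilon}\right)^{1/4},$$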
where $\nu$ is the fluid viscosity and $\epsilon$ is the dissipation rate. The Batchelor scale can then be defined in terms of the Kolmogorov scale as (Batchelor Reference Batchelor1959)
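$$\lambda_B = \left(\frac{\nu D^2}{\epsilon}\right)^{1/4},$$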
where $D$ is the fluid mass diffusivity. It should be noted that in both (B8) and (B9) the assigned fluid values of $\nu$ and $D$ were used, as opposed to the total values discussed in § 3.1. This was done to ensure that the Kolmogorov and Batchelor length scales were calculated based on real fluid properties and did not include the influence of the artificial fluid terms. However, as shown in figure 3, the artificial contributions to the fluid properties are small for the highest resolution meshes, and so the influence of this choice is likely negligible. The minimum values of $\lambda _\eta$ and $\lambda _B$ over $z$ are used to obtain a single value for each of the length scales found from (B8) and (B9).
Appendix C. Summary of p.d.f.s
A number of p.d.f.s that may be suitable to describe three-component mixing were presented in § 4.2. This appendix presents the concept of neutrality as it applies to these density functions, as well as the functional forms of the distributions themselves and the assumptions made in their derivations.
At the outset, it is worthwhile to discuss the concept of neutrality with regard to the p.d.f.s that will be discussed in the following paragraphs. This concept was introduced by Connor & Mosimann (Reference Connor and Mosimann1969), and is also discussed by Perry & Mueller (Reference Perry and Mueller2018). First, it is useful to define the remainder fraction, $F_{ij}$, as the proportion of the $j$th component of the mixture after removing all of the $i$th component. For a three-component mixture, this can be written as
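$$F_{ij} = \frac{Y_j}{1 - Y_i} = \frac{Y_j}{Y_j + Y_k},$$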
where $Y_k$ is the third component. With this in mind, a vector of mixture fractions $(Y_i, Y_j, Y_k)$ is said to be neutral if $Y_i$ is independent of $F_{ij}$. In a physical sense, this can be viewed as saying that the amount of $Y_i$ in a mixture is independent of whether the remainder of the mixture is composed entirely of $Y_j$, $Y_k$ or some mixture of the two. Non-neutrality can, in turn, be viewed as stating that there is an asymmetry in the mixing of certain components, with the $i$th component mixing more or less readily with the others. This results in three unique (potentially) neutral vectors for three components (Perry & Mueller Reference Perry and Mueller2018).
Perry & Mueller (Reference Perry and Mueller2018) examined several joint p.d.f.s to describe their multicomponent mixing flow. The first is the Dirichlet distribution, which is a multivariate generalization of the beta distribution, and may be written for three components as
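$$P(Y_1, Y_2) = K_N \, Y_1^{\alpha_1 - 1} \, Y_2^{\alpha_2 - 1} \left(1 - Y_1 - Y_2\right)^{\alpha_3 - 1},$$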
where $Y_1$ and $Y_2$ are the mass fractions of the first and second species. The p.d.f. has the constraint that $\sum _{{k}=1}^3 Y_{k} = 1$, and so $Y_3 = 1 - Y_1 - Y_2$. Here $K_N$ is a normalization factor, defined as
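$$K_N = \frac{\varGamma(\alpha_1 + \alpha_2 + \alpha_3)}{\varGamma(\alpha_1)\,\varGamma(\alpha_2)\,\varGamma(\alpha_3)},$$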
where $\varGamma$ is the gamma function. Finally, $\alpha _i > 0$ are the shape parameters that describe the distribution. These shape parameters may be related to the means and covariances of the p.d.f. as
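$$\widetilde{Y}_i = \frac{\alpha_i}{\alpha_0}, \qquad {\widetilde{Y''_i Y''_j}} = \frac{\widetilde{Y}_i \left(\delta_{ij} - \widetilde{Y}_j\right)}{\alpha_0 + 1},$$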
where $\alpha _0 = \sum _i \alpha _i$, ${\widetilde {Y''_i Y''_j}}$ is the (co)variance of the mass fractions of species $i$ and $j$, $\delta _{ij}$ is the Kronecker delta and summation notation does not apply. There are a few notable aspects of the Dirichlet distribution worth mentioning here. First, this distribution is a three-parameter distribution, and two of the three parameters must be used to specify the mean values, meaning that only a single variance or covariance may be enforced. Second, it can be shown through (C4a,b) that all components must be negatively correlated. Third, the Dirichlet distribution is neutral for all permutations of $(Y_i, Y_j, Y_k)$ (Perry & Mueller Reference Perry and Mueller2018). This suggests that the Dirichlet distribution is not likely to be descriptive in cases where there is preferential mixing between certain components. These are significant limitations on the usefulness of the Dirichlet distribution, and have motivated the consideration of other p.d.f.s that may describe multicomponent mixing.
A second model considered by Perry & Mueller is the generalized Dirichlet p.d.f. introduced by Connor & Mosimann (Reference Connor and Mosimann1969). The CM p.d.f. was derived based on the assumption that $Y_1$ has a marginal beta distribution and that $(Y_1, Y_2, Y_3)$ is neutral, though they did not assume anything about the other two permutations. This p.d.f. may be written for three components as
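$$P(Y_1, Y_2) = K_N \, Y_1^{\alpha_1 - 1} \, Y_2^{\alpha_2 - 1} \, Y_3^{\beta_2 - 1} \left(1 - Y_1\right)^{\beta_1 - \alpha_2 - \beta_2},$$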
where $K_N$ is again the normalization factor, given by
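$$K_N = \frac{\varGamma(\alpha_1 + \beta_1)\,\varGamma(\alpha_2 + \beta_2)}{\varGamma(\alpha_1)\,\varGamma(\beta_1)\,\varGamma(\alpha_2)\,\varGamma(\beta_2)},$$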
where the parameters $\alpha _i$ and $\beta _i$ are the shape factors. In the three-component CM distribution, there are three parameters for each $\alpha$ and $\beta$, where $\alpha _i, \beta _i \geq 0$ and $\alpha _3 = 1$, $\beta _3 = 0$ (Connor & Mosimann Reference Connor and Mosimann1969). Thus, there are four free parameters in total. This represents an improvement over the Dirichlet distribution discussed previously as now two variances or covariances may be enforced instead of just one. Connor & Mosimann (Reference Connor and Mosimann1969) provides relationships that relate the parameters of the p.d.f. to the means of the species mass fractions as
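$$\widetilde{Y}_1 = \frac{\alpha_1}{\alpha_1 + \beta_1}, \qquad \widetilde{Y}_2 = \frac{\beta_1}{\alpha_1 + \beta_1}\,\frac{\alpha_2}{\alpha_2 + \beta_2}, \qquad \widetilde{Y}_3 = \frac{\beta_1}{\alpha_1 + \beta_1}\,\frac{\beta_2}{\alpha_2 + \beta_2},$$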
to the variances as
and, finally, to the covariances as
where $\alpha _3=1$ and $\beta _3=0$ (Connor & Mosimann Reference Connor and Mosimann1969). Notably, this formulation requires ${\widetilde {Y''_1 Y''_2}} \leq 0$ and ${\widetilde {Y''_1 Y''_3}} \leq 0$ since $\alpha _i,\beta _i \geq 0$, though ${\widetilde {Y''_2 Y''_3}}$ can be positive or negative. Perry & Mueller (Reference Perry and Mueller2018) notes that analogous distributions where $(Y_2, Y_1, Y_3)$ or $(Y_3, Y_1, Y_2)$ are neutral can also be constructed by replacing the term $(1 - Y_1)^{\beta _1 - \alpha _2 - \beta _2}$ with similar expressions involving $Y_2$ or $Y_3$. These three p.d.f.s are denoted CM-1, CM-2 and CM-3, respectively, where the integer indicates the first component in the neutral permutation.
Perry & Mueller also considers the BVB5 distribution introduced by Doran (Reference Doran2011). The p.d.f. for this distribution is given by
where $K_N$ is again a normalization factor and $\alpha _i$ are the shape parameters of the distribution. The fact that there are five parameters in this distribution means that all of the unique first and second moments may be enforced, representing a further improvement over the previously discussed distributions. However, there are no closed-form relationships for $K_N$ and $\alpha _i$ in terms of the means and variances. Doran (Reference Doran2011) notes that, while it may be possible to express these moment relationships in terms of hypergeometric functions, it is most efficient to solve for the shape parameters in terms of a desired set of moments in an iterative fashion. Thus, the BVB5 distribution represents an improvement in that all of the unique first and second moments of a three-component distribution may be enforced, but with the caveat that the solution process is more computationally intensive.
Finally, Perry & Mueller introduces a BVB6 distribution, allowing for maximum generality in terms of neutrality of the distribution. The p.d.f. for this distribution is defined as
where $K_N$ is again a normalization factor. Notably, all of the BVB5, CM and Dirichlet distributions discussed previously are special cases of this distribution (Perry & Mueller Reference Perry and Mueller2018). The addition of the sixth parameter in this distribution requires the enforcement of a third-order moment. This is undesirable from a modelling perspective as higher-order moments are not frequently known, but this distribution is still worth considering to establish its descriptive capability for this flow. As with the BVB5 distribution, no closed-form relationships between the shape parameters and the moments of the distribution can be found, and the shape parameters must therefore be found by iteratively solving for a desired set of moments.
The p.d.f.s described in the previous paragraphs demonstrate a clear hierarchy of increasing generality and complexity as additional parameters are added. Increasing the number of parameters in the distribution has the advantage of allowing additional first and second moments to be enforced, and allows for more freedom in terms of the neutrality of the p.d.f.s. However, simple expressions relating the moments of a p.d.f. to its parameters may not exist in the more complicated cases, and numerical integration of the p.d.f. must instead be used to find these moments (Doran Reference Doran2011; Perry & Mueller Reference Perry and Mueller2018). Perry & Mueller note that the optimal choice of distribution is the one that adequately captures asymmetries in the mixing with the fewest parameters.
Appendix D. No-mix statistics
In order to derive the no-mix statistical relationships necessary to complete the model described in § 5, we treat our mixture as a stochastic material in which the composition of species mass fractions $\boldsymbol {Y_\alpha } = \{Y_1, Y_2, Y_3,\ldots, Y_N \}$ is described by the multivariate probability density function $P(\boldsymbol {Y_\alpha })$. To describe the mixture p.d.f. in the no-mix limit, the multivariate delta function is introduced:
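$$\delta\left(\boldsymbol{Y_\alpha} - \boldsymbol{Y}^{(m)}_{\alpha}\right) = \prod_{\alpha = 1}^{N} \delta\left(Y_\alpha - Y^{(m)}_{\alpha}\right),$$
where $\boldsymbol{Y}^{(m)}_{\alpha}$ denotes a particular composition state of the mixture.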
Then, the p.d.f. of the composition is given by the sum of deltas:
Thus, in the no-mix limit,
and
Combining (D3) and (D4) yields the useful relationship for a mixture in the no-mix limit:
The no-mix specific volume and density-specific-volume covariance can then be solved for:
Following a similar procedure of integrating the p.d.f., and after considerable algebra, the remaining no-mix statistics can be found as well: