Adaptive stochastic trajectory modelling in the chaotic advection regime
Published online by Cambridge University Press: 13 March 2015
Abstract
Motivated by the goal of improving and augmenting stochastic Lagrangian models of particle dispersion in turbulent flows, techniques from the theory of stochastic processes are applied to a model transport problem. The aim is to find an efficient and accurate method to calculate the total tracer transport between a source and a receptor when the flow between the two locations is weak, rendering direct stochastic Lagrangian simulation prohibitively expensive. Importance sampling methods that combine information from stochastic forward and back trajectory calculations are proposed. The unifying feature of the new methods is that they are developed using the observation that a perfect strategy should distribute trajectories in proportion to the product of the forward and adjoint solutions of the transport problem, a quantity here termed the ‘density of trajectories’ $D(\boldsymbol{x},t)$. Two such methods are applied to a ‘hard’ model problem, in which the prescribed kinematic flow is in the large-Péclet-number chaotic advection regime, and the transport problem requires simulation of a complex distribution of well-separated trajectories. The first, Milstein’s measure transformation method, involves adding an artificial velocity to the trajectory equation and simultaneously correcting for the weighting given to each particle under the new flow. It is found that, although a ‘perfect’ artificial velocity $\boldsymbol{v}^{\ast }$ exists, which is shown to distribute the trajectories according to $D$, small errors in numerical estimates of $\boldsymbol{v}^{\ast }$ cumulatively lead to difficulties with the method. A second method is Grassberger’s ‘go-with-the-winners’ branching process, where trajectories found unlikely to contribute to the net transport (losers) are periodically removed, while those expected to contribute significantly (winners) are split. The main challenge of implementation, which is finding an algorithm to select the winners and losers, is solved by a choice that explicitly forces the distribution towards a numerical estimate of $D$ generated from a previous back trajectory calculation. The result is a robust and easily implemented algorithm with typical variance up to three orders of magnitude lower than that of the direct approach.
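The branching step described in the abstract can be pictured as a standard splitting/Russian-roulette pass in which each trajectory is cloned or discarded so that the ensemble is pushed towards a target density. The Python sketch below is illustrative only and is not the paper's implementation: the function name `branch_step`, the histogram-based estimate of the current particle density and the callable `D_hat` (standing for a numerical estimate of $D(\boldsymbol{x},t)$ obtained from a prior back-trajectory calculation) are assumptions introduced here; the reweighting by the branching ratio is what keeps the weighted transport estimate unbiased in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)


def branch_step(positions, weights, D_hat, bins):
    """One splitting/pruning pass of a 'go-with-the-winners' scheme (sketch only).

    positions : (N, 2) array of trajectory positions at the branching time
    weights   : (N,) statistical weights carried by the trajectories
    D_hat     : callable giving a numerical estimate of the target density D(x),
                e.g. interpolated from a previous back-trajectory calculation
    bins      : bin edges used to estimate the current particle density
    """
    # Weighted empirical density of the current trajectory ensemble.
    hist, xe, ye = np.histogram2d(positions[:, 0], positions[:, 1],
                                  bins=bins, weights=weights, density=True)
    ix = np.clip(np.digitize(positions[:, 0], xe) - 1, 0, hist.shape[0] - 1)
    iy = np.clip(np.digitize(positions[:, 1], ye) - 1, 0, hist.shape[1] - 1)
    current = hist[ix, iy] + 1e-12

    # Branching ratio: r > 1 marks 'winners' (regions under-represented relative
    # to the target density D), r < 1 marks likely 'losers'.
    r = D_hat(positions) / current

    new_pos, new_w = [], []
    for x, w, ri in zip(positions, weights, r):
        # Stochastic rounding so that E[number of copies] = ri.
        n_copies = int(ri) + (rng.random() < ri - int(ri))
        if n_copies == 0:
            continue                      # loser: trajectory removed
        for _ in range(n_copies):         # winner: trajectory split into copies
            new_pos.append(x)
            new_w.append(w / ri)          # reweighting keeps the estimator unbiased
    return np.array(new_pos), np.array(new_w)
```

Repeating such a pass periodically during the integration concentrates computational effort where the trajectory density is meant to be high, which is the qualitative effect the abstract attributes to the winner/loser selection rule based on $D$.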
- Type: Papers
- Copyright: © 2015 Cambridge University Press