We propose a novel and unified sampling scheme, called the accelerated group sequential sampling scheme, which incorporates four different types of sampling scheme: (i) the classic Anscombe–Chow–Robbins purely sequential sampling scheme; (ii) the accelerated sequential sampling scheme; (iii) the relatively new k-at-a-time group sequential sampling scheme; and (iv) the new k-at-a-time accelerated group sequential sampling scheme. The first-order and second-order properties of this unified sequential sampling scheme are fully investigated with two illustrations on minimum risk point estimation for the mean of a normal distribution and on bounded variance point estimation for the location parameter of a negative exponential distribution. We also provide extensive Monte Carlo simulation studies and real data analyses for each illustration.
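As a concrete illustration of case (i), the Anscombe–Chow–Robbins purely sequential rule for minimum risk point estimation of a normal mean can be sketched in a few lines. The risk A·E(X̄−μ)² + c·n is minimised by the fixed-sample size n* = √(A/c)·σ, and the sequential rule replaces the unknown σ with the running sample standard deviation. This is a minimal sketch; the parameter values, pilot size, and function names are illustrative assumptions, not inputs from the paper.

```python
import math
import random
import statistics

def purely_sequential_n(mu, sigma, A, c, pilot=10, rng=None):
    """Anscombe-Chow-Robbins purely sequential stopping rule for minimum
    risk point estimation of a normal mean: sample one observation at a
    time and stop at the first n >= pilot with n >= sqrt(A/c) * s_n,
    where s_n is the running sample standard deviation."""
    rng = rng or random.Random()
    xs = [rng.gauss(mu, sigma) for _ in range(pilot)]
    while len(xs) < math.sqrt(A / c) * statistics.stdev(xs):
        xs.append(rng.gauss(mu, sigma))
    return len(xs), statistics.fmean(xs)

# The risk A*E(mean error^2) + c*n is minimised by the fixed-sample size
# n* = sqrt(A/c) * sigma; here n* = 50 * 2 = 100 (illustrative values).
rng = random.Random(42)
stops = [purely_sequential_n(0.0, 2.0, 100.0, 0.04, rng=rng)[0]
         for _ in range(200)]
print(round(statistics.fmean(stops)))
```

First-order efficiency means the average stopping time should land near n* = 100, with the slight undershoot that motivates the second-order analysis in the paper.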
As TeV gamma-ray astronomy progresses into the era of the Cherenkov Telescope Array (CTA), instantaneously following up on gamma-ray transients is becoming more important than ever. To this end, a worldwide network of Imaging Atmospheric Cherenkov Telescopes has been proposed. Australia is ideally suited to provide coverage of part of the Southern Hemisphere sky inaccessible to H.E.S.S. in Namibia and the upcoming CTA-South in Chile. This study assesses the sources detectable by a small, transient-focused array in Australia based on CTA telescope designs. The TeV emission of extragalactic sources (including the majority of gamma-ray transients) can suffer significant absorption by the extragalactic background light. As such, we explored the improvements possible by implementing stereoscopic and topological triggers, as well as lowered image cleaning thresholds, to access lower energies. We modelled flaring gamma-ray sources based on past measurements from the satellite-based gamma-ray telescope Fermi-LAT. We estimate that an array of four Medium-Sized Telescopes (MSTs) would detect $\sim$24 active galactic nucleus flares >5$\sigma$ per year, up to a redshift of $z\approx1.5$. Two MSTs achieved $\sim$80–90% of the detections of four MSTs. The modelled Galactic transients were detectable within the observation time of one night, 11 of the 21 modelled gamma-ray bursts were detectable, as were $\sim$10% of unidentified transients. An array of MST-class telescopes would thus be a valuable complementary telescope array for transient TeV gamma-ray astronomy.
The order-disorder behavior of isomorphous cation substitution in the octahedral sheet of phyllosilicates was investigated by Monte Carlo simulations based on atomistic models of three-species Al/Fe/Mg systems, covering a wide range of octahedral compositions relevant to clays found in nature, especially smectites and illites. In many cases, phase transitions do not occur, in that long-range order is not attained, but most systems exhibit short-range order at low temperature. The ordering of the octahedral cations is highly dependent on the cation composition. Variations in the tetrahedral charge (smectite vs. illite) produce slight differences in the cation distribution, and the short-range and long-range order of octahedral cations do not change drastically. Contrary to previous conclusions, the average size of Fe clusters and the long-range order of Fe are not larger in illites than in smectites; however, the proportion of non-clustered Fe3+ cations is higher in smectites than in illites. This behavior is consistent with the experimentally observed effect of Fe on the Al-NMR signal, which is weaker in illites than in smectites.
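The Metropolis Monte Carlo machinery behind such ordering studies can be illustrated with a deliberately minimal toy model. The 1-D ring, two-species composition, and single like-pair penalty below are illustrative stand-ins for the paper's atomistic three-species octahedral sheet; they only demonstrate how short-range order emerges at low temperature.

```python
import math
import random

def metropolis_ordering(beta, n_sites=60, steps=40_000, seed=0):
    """Toy Metropolis simulation of binary cation ordering on a ring of
    sites holding 'Al' or 'Fe' in equal proportions. An interaction J > 0
    penalises like nearest-neighbour pairs, so low temperature (large
    beta = J/kT) favours short-range Al/Fe alternation. Returns the
    fraction of unlike nearest-neighbour pairs (1.0 = perfect order)."""
    rng = random.Random(seed)
    sites = ['Fe', 'Al'] * (n_sites // 2)
    rng.shuffle(sites)

    def local_energy(i):
        # +1 per like neighbour, -1 per unlike neighbour (units of J)
        return sum(1 if sites[i] == sites[j] else -1
                   for j in (i - 1, (i + 1) % n_sites))

    for _ in range(steps):
        i, j = rng.randrange(n_sites), rng.randrange(n_sites)
        if sites[i] == sites[j]:
            continue
        e_old = local_energy(i) + local_energy(j)
        sites[i], sites[j] = sites[j], sites[i]
        delta = local_energy(i) + local_energy(j) - e_old
        if delta > 0 and rng.random() >= math.exp(-beta * delta):
            sites[i], sites[j] = sites[j], sites[i]  # reject the swap
    return sum(sites[i] != sites[(i + 1) % n_sites]
               for i in range(n_sites)) / n_sites

ordered = metropolis_ordering(beta=3.0)     # low temperature
disordered = metropolis_ordering(beta=0.0)  # infinite temperature
print(ordered > disordered)
```

At infinite temperature the unlike-pair fraction stays near the random value of about 0.5, while at low temperature the composition-conserving swaps drive the ring toward alternation, the 1-D analogue of the short-range order reported in the abstract.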
Political scientists commonly use Grambsch and Therneau’s (1994, Biometrika 81, 515–526) ubiquitous Schoenfeld-based test to diagnose proportional hazard violations in Cox duration models. However, some statistical packages have changed how they implement the test’s calculation. The traditional implementation makes a simplifying assumption about the test’s variance–covariance matrix, while the newer implementation does not. Recent work suggests the test’s performance differs, depending on its implementation. I use Monte Carlo simulations to more thoroughly investigate whether the test’s implementation affects its performance. Surprisingly, I find the newer implementation performs very poorly with correlated covariates, with a false positive rate far above 5%. By contrast, the traditional implementation has no such issues in the same situations. This shocking finding raises new, complex questions for researchers moving forward. It appears to suggest, for now, researchers should favor the traditional implementation in situations where its simplifying assumption is likely met, but researchers must also be mindful that this implementation’s false positive rate can be high in misspecified models.
Brachytherapy is an effective local treatment for early-stage head and neck cancers. Mold irradiation is a method in which the source is placed in the oral cavity in sites where the soft tissue is thin and an irradiation source cannot be implanted. However, dose calculations based on TG-43 may be subject to uncertainty due to the heterogeneity of tissues and materials used for the irradiation of head and neck cancers.
Materials and Methods:
In this study, we investigated the basic physical properties of different materials and densities in the molds, retrospectively analysed patient plans, and verified the doses of intraoral mold irradiation using an independently constructed dose verification system with MC simulations specifically designed for brachytherapy.
Results and Discussion:
Dose–volume histograms were obtained with a treatment planning system (TG-43) and MC simulation and revealed a non-negligible difference in coverage of the high-risk clinical target volume (HR-CTV) and organs at risk (OAR) between calculations using computed tomography values and those with density changes. The underdose was 10.6%, 3.7% and 5.6% for HR-CTV, gross tumour volume and OAR, respectively, relative to the treatment plan. Accounting for differences in elemental composition and density relative to TG-43, a water-based calculation algorithm, resulted in clinically significant dose differences. The validation method was applied only to complex cases of small-source therapy.
Conclusion:
The findings of this study can be applied to more complex cases with steeper density gradients, such as mold irradiation.
As TeV gamma-ray astronomy progresses into the era of the Cherenkov Telescope Array (CTA), there is a desire for the capacity to instantaneously follow up on transient phenomena and continuously monitor gamma-ray flux at energies above $10^{12}\,\mathrm{eV}$. To this end, a worldwide network of Imaging Air Cherenkov Telescopes (IACTs) is required to provide triggers for CTA observations and complementary continuous monitoring. An IACT array sited in Australia would contribute significant coverage of the Southern Hemisphere sky. Here, we investigate the suitability of a small IACT array and how different design factors influence its performance. Monte Carlo simulations were produced based on the Small-Sized Telescope (SST) and Medium-Sized Telescope (MST) designs from CTA. Angular resolution improved with larger baseline distances up to 277 m between telescopes, and energy thresholds were lower at 1 000 m altitude than at 0 m. The ${\sim}300\,\mathrm{GeV}$ energy threshold of MSTs proved more suitable for observing transients than the ${\sim}1.2\,\mathrm{TeV}$ threshold of SSTs. An array of four MSTs at 1 000 m was estimated to give a 5.7$\sigma$ detection of an RS Ophiuchi-like nova eruption from a 4-h observation. We conclude that an array of four MST-class IACTs at an Australian site would ideally complement the capabilities of CTA.
The Pannonian basin in Central Europe is well known for its rich geothermal resources. Although geothermal energy has been utilised, mainly for direct use purposes, for a long time, there are still a lot of untapped resources. This paper presents novel methods for outlining and assessing the theoretical and technical potential of partly still unknown geothermal reservoirs, based on a case study from the Dráva basin, one of the sub-basins of the Pannonian basin along the Hungarian–Croatian border. The presented methods include reservoir delineation based on combining geological bounding surfaces of the Upper Pannonian basin-fill units with a set of isotherms derived from a conductive geothermal model. The geothermal potential of each identified reservoir, taken to be the heat content of the fluids stored in the effective pore space (‘moveable fluid’), was calculated by a Monte Carlo method. The results underline the great untapped geothermal potential of the Dráva basin, especially that of the reservoir storing thermal water of 50–75°C, which has the largest volume and the greatest stored heat content.
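A Monte Carlo heat-in-place calculation of this kind can be sketched with the standard volumetric formula H = V·φ·ρ_w·c_w·(T_res − T_ref) for the heat stored in the pore fluid. Every parameter range below is an illustrative assumption, not a Dráva-basin input; the sketch only shows how sampling the uncertain inputs yields a distribution of stored heat.

```python
import random
import statistics

RHO_W, C_W = 1000.0, 4186.0  # water density [kg/m^3], specific heat [J/(kg K)]
T_REF = 15.0                 # reference (surface/reinjection) temperature [degC]

def heat_in_place(rng):
    """One Monte Carlo draw of the heat stored in the 'moveable fluid':
    H = V * phi * rho_w * c_w * (T_res - T_ref)  [J].
    All parameter ranges are illustrative assumptions."""
    v = rng.uniform(5e9, 2e10)       # reservoir rock volume [m^3]
    phi = rng.uniform(0.10, 0.25)    # effective porosity
    t_res = rng.uniform(50.0, 75.0)  # reservoir temperature [degC]
    return v * phi * RHO_W * C_W * (t_res - T_REF)

rng = random.Random(1)
samples = sorted(heat_in_place(rng) for _ in range(20_000))
p10 = samples[len(samples) // 10]
p50 = statistics.median(samples)
p90 = samples[9 * len(samples) // 10]
print(f"P10={p10:.2e} J  P50={p50:.2e} J  P90={p90:.2e} J")
```

Reporting percentiles (P10/P50/P90) rather than a single number is the usual way such resource assessments summarise the propagated input uncertainty.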
The systematics of cation ordering in binary spinel solid solutions have been investigated using an interatomic potential model combined with Monte Carlo simulations. The formalism to describe a system containing three cation species ordering over two non-equivalent sub-lattices is developed and the method applied to the MgAl2O4-FeAl2O4 binary solid solution. Our results compare favourably with experimental measurements of site-occupancy data, although the experiments display a slightly larger degree of non-ideality than the simulations. A possible kinetic origin of the non-ideal behaviour was examined by performing simulations in which only exchange of Mg and Fe2+ between tetrahedral and octahedral sites was permitted below the Al-blocking temperature of 1160 K. This approach improves the agreement with the experimental site occupancies, and suggests that the blocking temperature for moving Mg and Fe2+ between tetrahedral and octahedral sites is significantly lower than for moving Al.
Sensitivity analysis plays an important role in finding an optimal design of a structure under uncertainty. Quantifying relative importance of random parameters, which leads to a rank ordering, helps in developing a systematic and efficient way to reach the optimal design. In this work, lift prediction and sensitivity analysis of a potential flow around a submerged body is considered. Such flow is often used in the initial design stage of structures. The flow computation is carried out using a vortex-panel method. A few parameters of the submerged body and flow are considered as random variables. To improve the accuracy in lift prediction in a computationally efficient way, a new semi-intrusive stochastic perturbation method is proposed. Accordingly, a perturbation is applied at the linear system solving level involving the influence coefficient matrix, as opposed to using perturbation in the lift quantity itself. This proposed method, which is partially analogous to the intrusive or Galerkin projection methods in spectral stochastic finite element methods, is found to be more accurate than using perturbation directly on the lift and faster than a direct simulation. The proposed semi-intrusive stochastic perturbation method is found to yield faster estimates of the Sobol’ indices, which are used for global sensitivity analysis. From global sensitivity analysis, the flow parameters are found to be more important than the parameters of the submerged body.
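The idea of perturbing at the linear-solve level can be sketched on a toy 2x2 "influence coefficient" system. The matrices, right-hand side, and scalar output below are illustrative assumptions, not the paper's vortex-panel formulation: the sketch compares a first-order perturbation A0 dx = -dA x0 against re-solving the perturbed system exactly.

```python
import random
import statistics

def solve2(a, b):
    """Solve a 2x2 system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - b[1] * a[0][1]) / det,
            (a[0][0] * b[1] - a[1][0] * b[0]) / det]

A0 = [[2.0, 0.5], [0.3, 1.5]]  # nominal 'influence coefficient' matrix (toy)
B = [1.0, 0.8]

def output_direct(theta):
    """Re-solve the perturbed system exactly (stand-in for direct simulation)."""
    a = [[2.0 + theta, 0.5], [0.3, 1.5 - 0.5 * theta]]
    x = solve2(a, B)
    return x[0] + 2.0 * x[1]

def output_semi_intrusive(theta):
    """First-order perturbation applied at the linear-solve level:
    A(theta) = A0 + dA, so x ~ x0 + dx with A0 dx = -dA x0."""
    x0 = solve2(A0, B)
    da_x0 = [theta * x0[0], -0.5 * theta * x0[1]]  # dA applied to x0
    dx = solve2(A0, [-da_x0[0], -da_x0[1]])
    return (x0[0] + dx[0]) + 2.0 * (x0[1] + dx[1])

rng = random.Random(3)
thetas = [rng.gauss(0.0, 0.05) for _ in range(5_000)]
mean_direct = statistics.fmean(output_direct(t) for t in thetas)
mean_semi = statistics.fmean(output_semi_intrusive(t) for t in thetas)
print(abs(mean_direct - mean_semi) < 1e-2)
```

Because the first-order error is O(theta^2), the perturbed-solve estimate tracks the exact mean closely for small parameter scatter while requiring only one factorisation of the nominal matrix, which is the computational saving the abstract describes.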
Introduction. Family health history (FHx) is an important factor in breast and ovarian cancer risk assessment. As such, multiple risk prediction models rely strongly on FHx data when identifying a patient’s risk. These models were developed using verified information and, when translated into a clinical setting, assume that a patient’s FHx is accurate and complete. However, FHx information collected in a typical clinical setting is known to be imprecise, and it is not well understood how this uncertainty may affect predictions in clinical settings. Methods. Using Monte Carlo simulations and existing measurements of the uncertainty of self-reported FHx, we show how uncertainty in FHx information can alter risk classification when used in typical clinical settings. Results. We found that correct tier-level classification of pedigrees ranged from 52% to 64% across models under a set of contrived uncertain conditions, and that significant misclassifications were not negligible. Conclusions. Our work implies that (i) uncertainty quantification needs to be considered when transferring tools from a controlled research environment to a more uncertain environment (i.e., a health clinic) and (ii) better FHx collection methods are needed to reduce uncertainty in breast cancer risk prediction in clinical settings.
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
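The sampling setup can be sketched as follows: the parameter before the first success is estimated from the (negative binomial, here geometric) number of trials needed to reach it, and the parameter after from a fixed number of further binomial trials. The sketch only generates data and the two natural point estimates; it does not reproduce the paper's tests. It does highlight that the estimate 1/N is noticeably biased upward, one reason dedicated inferential tools are needed.

```python
import random
import statistics

def split_sample(p_before, p_after, n_after, rng):
    """One Bernoulli sequence split at the first success: the number of
    trials N up to and including the first success is a negative binomial
    (geometric) observation, giving the point estimate 1/N for the
    pre-success parameter; n_after further trials give the usual binomial
    estimate k/n for the post-success parameter."""
    n = 1
    while rng.random() >= p_before:
        n += 1
    k = sum(rng.random() < p_after for _ in range(n_after))
    return 1.0 / n, k / n_after

rng = random.Random(7)
pairs = [split_sample(0.3, 0.3, 50, rng) for _ in range(20_000)]
mean_before = statistics.fmean(p for p, _ in pairs)
mean_after = statistics.fmean(q for _, q in pairs)
# 1/N is biased upward: E[1/N] = -p*ln(p)/(1-p), about 0.516 for p = 0.3,
# while the binomial estimate k/n is unbiased.
print(round(mean_before, 2), round(mean_after, 2))
```

The simulation makes the asymmetry concrete: both halves of the sequence share the same true parameter, yet the two sampling mechanisms give point estimates with very different sampling behaviour.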
The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error when a second-order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step.
This study deals with the simulation of the experimental study of Roth
et al. (2000) on the interaction of
energetic Zn projectiles in partially ionized laser produced carbon
targets, and with similar type experiments. Particular attention is paid
to the specific contributions of the K and L shell target electrons to
electron recombination in the energetic Zn ionic projectile. The classical
Bohr–Lindhard model was used for describing recombination, while
quantum mechanical models were also introduced for scaling the L to K
cross-section ratios. It was found that even for a hydrogen-like carbon
target, the effect of the missing five bound electrons brings about an
increase of only 0.6 charge units in the equilibrium charge state as
compared to the cold target value of 23. A collisional radiative
calculation was employed for analyzing the type of plasma produced in the
experimental study. It was found that for the plasma conditions
characteristic of this experiment, some fully ionized target plasma atoms
should be present. However, in order to explain the experimentally observed
large increase in the projectile charge state, the target plasma must be
dominated by a fully ionized component. A procedure for
calculating the dynamic evolution of the projectile charge state within
partially ionized plasma is also presented and applied to the type of
plasma encountered in the experiment of Roth et al. (2000). The low temperature and density tail on the
back of the target brings about a decrease in the exiting charge state,
while the value of the average charge state within the target is dependent
on the absolute value of the cross-sections.
Searching for and studying the general principles that govern
the kinetics and thermodynamics of protein folding generates
new insight into the factors that control this process.
Here, we demonstrate based on the known experimental data
and using theoretical modeling of protein folding that
side-chain entropy is one of the general determinants of
protein folding. We show for proteins belonging to the
same structural family that there exists an optimal relationship
between the average side-chain entropy and the average
number of contacts per residue for fast folding kinetics.
Analysis of side-chain entropy for proteins that fold without
additional agents demonstrates that there exists an optimal
region of average side-chain entropy for fast folding.
Deviation of the average side-chain entropy from the optimal
region results in an anomalous protein folding process
(prions, α-lytic protease, subtilisin, some DNA-binding
proteins). Proteins with high or low side-chain entropy
would have extended unfolded regions and would require
some additional agents for complete folding. Such proteins
are common in nature, and their structural properties have
biological importance.
Suppose the n × n matrix A gives the payoffs for some evolutionary game, and its entries are the values of independent, identically distributed, continuous random variables. The distribution of the pattern of evolutionarily stable strategies for A will depend, if n ≥ 3, on this underlying distribution. A fairly complete picture for n = 3 is found, and some results are obtained for n ≥ 4.
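The pure-strategy part of the ESS pattern is easy to simulate: with continuous i.i.d. entries, strategy i is almost surely a pure ESS exactly when the diagonal entry a_ii is the strict maximum of its column, so the number of pure ESSs is Binomial(n, 1/n) with mean 1 for every n. The sketch below checks this; mixed-strategy ESSs, which the full pattern also counts, are not examined here.

```python
import random
import statistics

def pure_ess_count(a):
    """Number of pure-strategy ESSs of payoff matrix a. With continuous
    i.i.d. entries, strategy i is almost surely a pure ESS exactly when
    a[i][i] > a[j][i] for every j != i, i.e. the diagonal entry is the
    strict maximum of its column."""
    n = len(a)
    return sum(all(a[i][i] > a[j][i] for j in range(n) if j != i)
               for i in range(n))

rng = random.Random(0)
n, trials = 3, 20_000
counts = [pure_ess_count([[rng.random() for _ in range(n)] for _ in range(n)])
          for _ in range(trials)]
# Each column's diagonal entry is its maximum with probability 1/n, and the
# columns are independent, so the count is Binomial(n, 1/n) with mean 1.
print(round(statistics.fmean(counts), 2))
```

Note the distribution of the pure-ESS count here does not depend on the entry distribution; the dependence the abstract refers to enters through the mixed-strategy part of the pattern for n ≥ 3.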