How can groups best coordinate to solve problems? The answer touches on cultural innovation, including the trajectory of science, technology, and art. If everyone acts independently, different people will explore different solutions, but there is no way to leverage good solutions across the community. If everyone acts in concert, early successes can lead the group down dead ends and stifle exploration. The challenge is to maintain innovation while also communicating effective solutions once they are found. When solution spaces are smooth – that is, easy – communication is good. But when solution spaces are rugged – that is, hard – the balance should tilt toward exploration. How can we best achieve this? One answer is to place people in social structures that reduce communication but maintain connectivity. But there are other solutions that might work better. Algorithms like simulated annealing are designed to deal with such problems by adjusting collective focus over time, allowing systems to “cool off” slowly as they home in on solutions. Network science allows us to explore the performance of such solutions on smooth and rugged landscapes, and provides numerous avenues for innovation of its own.
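The “cooling” idea is easiest to see in code. Below is a minimal, generic simulated annealing sketch (the objective, neighborhood, and geometric schedule are illustrative assumptions, not the networked model the paragraph describes): the temperature starts high enough that the search explores freely, then decays so the system gradually commits to good solutions.

```python
import math
import random

def simulated_annealing(objective, x0, neighbor, t_start=1.0, t_end=1e-3,
                        alpha=0.995, seed=0):
    """Minimize `objective` by slowly 'cooling' the acceptance temperature."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    t = t_start
    while t > t_end:
        x_new = neighbor(x, rng)
        f_new = objective(x_new)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature drops.
        if f_new <= fx or rng.random() < math.exp((fx - f_new) / t):
            x, fx = x_new, f_new
        t *= alpha  # geometric cooling schedule
    return x, fx

# Toy rugged landscape with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x) + 10
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
print(simulated_annealing(f, x0=5.0, neighbor=step))
```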
Chapter 5 is dedicated to the most important part of predictive modeling for biomarker discovery based on high-dimensional data – multivariate feature selection. When dealing with sparse biomedical data whose dimensionality is much higher than the number of training observations, the crucial issue is to overcome the curse of dimensionality by using methods capable of elevating the signal (predictive information) above the overwhelming noise. One way of doing this is to perform many (hundreds or thousands of) parallel feature selection experiments based on different random subsamples of the original training data and then aggregate their results (for example, by analyzing the distribution of variables among the results of those parallel experiments). Two designs of such parallel feature selection experiments are discussed in detail: one based on recursive feature elimination, and the other on stepwise hybrid selection with T2. The chapter also includes descriptions of three evolutionary feature selection algorithms: simulated annealing, genetic algorithms, and particle swarm optimization.
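As a sketch of the aggregation idea (hedged: a simple univariate correlation score stands in for the per-round selector, and this is not the chapter's recursive-feature-elimination or stepwise designs), the snippet below repeats feature selection over random subsamples and ranks features by how often they land in the top k.

```python
import numpy as np

def aggregated_selection(X, y, n_rounds=500, subsample=0.5, top_k=10, seed=0):
    """Rank features by selection frequency across random subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    m = int(subsample * n)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=m, replace=False)
        Xs, ys = X[idx], y[idx]
        # Univariate stand-in for the per-round selector:
        # absolute correlation of each feature with the response.
        Xc = Xs - Xs.mean(axis=0)
        yc = ys - ys.mean()
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        score = np.abs(Xc.T @ yc) / denom
        counts[np.argsort(score)[-top_k:]] += 1
    return counts / n_rounds  # selection frequency per feature

# Sparse high-dimensional toy data: 60 samples, 2000 features, 5 informative.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2000))
y = X[:, :5] @ np.ones(5) + 0.5 * rng.standard_normal(60)
freq = aggregated_selection(X, y)
print(np.argsort(freq)[-10:])  # features most frequently selected
```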
The increase in Electrical and Electronic Equipment (EEE) usage across various sectors has given rise to repair and maintenance units. Because manual disassembly is constrained by time and labor, disassembly of parts requires proper planning, which is handled by the Disassembly Sequence Planning (DSP) process. Effective disassembly planning methods can encourage the reuse and recycling sector, reducing the need for raw-material mining, and an efficient DSP can lower time and cost consumption. To address the challenges in DSP, this research introduces an innovative framework based on Q-Learning (QL) within the domain of Reinforcement Learning (RL). Furthermore, an Enhanced Simulated Annealing (ESA) algorithm is introduced to improve the exploration-exploitation balance in the proposed RL framework. The proposed framework is extensively evaluated against state-of-the-art frameworks and benchmark algorithms using a diverse set of eight products as test cases. The findings reveal that the proposed framework outperforms benchmark algorithms and state-of-the-art frameworks in terms of time consumption, memory consumption, and solution optimality. Specifically, for complex large products, the proposed technique achieves a minimum reduction of 60% in time consumption and 30% in memory usage compared to other state-of-the-art techniques. Additionally, qualitative analysis demonstrates that the proposed approach generates sequences with high fitness values, indicating more stable and less time-consuming disassemblies. This framework enables various real-world disassembly applications, making a significant contribution to sustainable practices in EEE industries.
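The ESA algorithm itself is not specified in the abstract, but a common way to couple an annealing schedule to Q-learning's exploration-exploitation balance is Boltzmann (softmax) action selection with a decaying temperature; the sketch below shows that generic idea only, with a hypothetical environment call.

```python
import numpy as np

def boltzmann_action(q_row, temperature, rng):
    """Sample an action with probability proportional to exp(Q / T)."""
    z = q_row / max(temperature, 1e-8)
    z -= z.max()                       # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(q_row), p=probs)

def anneal(t0=1.0, t_min=0.05, alpha=0.999):
    """Geometric cooling: high T early (exploration), low T late."""
    t = t0
    while True:
        yield t
        t = max(t_min, t * alpha)

# Usage inside a Q-learning loop (env.step() is a hypothetical call):
# rng = np.random.default_rng(0)
# schedule = anneal()
# for episode in range(n_episodes):
#     T = next(schedule)
#     a = boltzmann_action(Q[s], T, rng)
#     s2, r = env.step(s, a)
#     Q[s, a] += lr * (r + gamma * Q[s2].max() - Q[s, a])
```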
In this paper, we propose new Metropolis–Hastings and simulated annealing algorithms on a finite state space via modifying the energy landscape. The core idea of landscape modification rests on introducing a parameter c, such that the landscape is modified once the algorithm is above this threshold parameter to encourage exploration, while the original landscape is utilized when the algorithm is below the threshold for exploitation purposes. We illustrate the power and benefits of landscape modification by investigating its effect on the classical Curie–Weiss model with Glauber dynamics and external magnetic field in the subcritical regime. This leads to a landscape-modified mean-field equation, and with appropriate choice of c the free energy landscape can be transformed from a double-well into a single-well landscape, while the location of the global minimum is preserved on the modified landscape. Consequently, running algorithms on the modified landscape can improve the convergence to the ground state in the Curie–Weiss model. In the setting of simulated annealing, we demonstrate that landscape modification can yield improved or even subexponential mean tunnelling time between global minima in the low-temperature regime by appropriate choice of c, and we give a convergence guarantee using an improved logarithmic cooling schedule with reduced critical height. We also discuss connections between landscape modification and other acceleration techniques, such as Catoni’s energy transformation algorithm, preconditioning, importance sampling, and quantum annealing. The technique developed in this paper is not limited to simulated annealing, but is broadly applicable to any difference-based discrete optimization algorithm by a change of landscape.
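To make the threshold mechanism concrete, here is a minimal sketch of a Metropolis step on a modified landscape. The logarithmic damping above c is an illustrative choice of transform, not the paper's exact construction, and the double-well energy is a toy stand-in for the Curie–Weiss free energy.

```python
import math
import random

def modified_energy(H, x, c):
    """Leave the landscape intact below c; damp it logarithmically above.
    One illustrative transform only, not the paper's construction."""
    e = H(x)
    return e if e <= c else c + math.log1p(e - c)

def metropolis_step(H, x, c, beta, propose, rng):
    """Metropolis acceptance computed on the modified landscape."""
    y = propose(x, rng)
    dE = modified_energy(H, y, c) - modified_energy(H, x, c)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return y
    return x

# Toy double-well energy: minima at x = ±1 (energy 0), barrier of height 5
# at x = 0. With c = 1 the barrier is flattened but the minima are untouched.
H = lambda x: 5 * (x * x - 1) ** 2
rng = random.Random(0)
x = 1.0
for _ in range(10000):
    x = metropolis_step(H, x, c=1.0, beta=5.0,
                        propose=lambda z, r: z + r.uniform(-0.3, 0.3), rng=rng)
print(round(x, 2))
```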
This article examines the large-time behaviour of finite-state mean-field interacting particle systems. Our first main result is a sharp estimate (on the exponential scale) of the time required for the empirical measure process of the N-particle system to converge to its invariant measure; we show that when time is of the order $\exp\{N\Lambda\}$ for a suitable constant $\Lambda > 0$, the process has mixed well and is close to its invariant measure. We then obtain large-N asymptotics of the second-largest eigenvalue of the generator associated with the empirical measure process when it is reversible with respect to its invariant measure; we show that its absolute value scales as $\exp\{{-}N\Lambda\}$. The main tools used in establishing our results are the large deviation properties of the empirical measure process from its large-N limit. As an application of the study of large-time behaviour, we also show convergence of the empirical measure of the system of particles to a global minimum of a certain ‘entropy’ function when particles are added over time in a controlled fashion. The controlled addition of particles is analogous to the cooling schedule associated with the search for a global minimum of a function using the simulated annealing algorithm.
Aircraft sequencing and scheduling within terminal airspaces have become more complicated due to increased air traffic demand and airspace complexity. A stochastic mixed-integer linear programming model, solved using a simulated annealing algorithm, is proposed to handle aircraft sequencing and scheduling problems. The proposed model allows for proper aircraft sequencing considering wind-direction uncertainties, which are critical in the decision-making process, and aims to minimise total aircraft delay for a runway serving mixed operations. To test the stochastic model, an appropriate number of scenarios were generated for different air traffic demand rates. The results indicate that the stochastic model reduces total aircraft delay considerably when compared with the deterministic approach.
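As a toy illustration of annealing over landing sequences (with invented target times and a single uniform separation, not the paper's stochastic MILP with wind-direction scenarios), the sketch below swaps adjacent aircraft and scores the total delay.

```python
import math
import random

# Hypothetical data: target landing times (minutes) and a uniform
# required separation between consecutive runway operations.
TARGET = [10, 12, 13, 20, 22, 30, 31, 33]
SEP = 3

def total_delay(order):
    """Schedule each aircraft at max(target, previous + SEP); sum delays."""
    t, delay = -SEP, 0
    for i in order:
        t = max(TARGET[i], t + SEP)
        delay += t - TARGET[i]
    return delay

def anneal_sequence(n, iters=20000, t0=10.0, alpha=0.9995, seed=0):
    rng = random.Random(seed)
    order = list(range(n))
    best, best_cost = order[:], total_delay(order)
    cost, T = best_cost, t0
    for _ in range(iters):
        i = rng.randrange(n - 1)
        order[i], order[i + 1] = order[i + 1], order[i]  # swap neighbours
        new_cost = total_delay(order)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / T):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[i + 1] = order[i + 1], order[i]  # undo swap
        T *= alpha
    return best, best_cost

print(anneal_sequence(len(TARGET)))
```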
Phenological models for predicting grapevine flowering were tested using phenological data of 15 grape varieties collected between 1990 and 2014 in the Vinhos Verdes and Lisbon wine regions of Portugal. Three models were tested: Spring Warming (the Growing Degree Days, or GDD, model), Spring Warming modified with a triangular function (GDD triangular), and the UniFORC model, which considers an exponential response curve to temperature. Model estimation was performed using data on two grape varieties (Loureiro and Fernão Pires) present in both regions. Three dates were tested for the beginning of heat-unit accumulation (the t0 date): budburst, 1 January, and 1 September. The best overall date was budburst. Furthermore, for each model parameter, an intermediate range of values common to the studied regions was estimated and further optimized to obtain one model that could be used for a diverse range of grape varieties in both wine regions. External validation was performed using an independent data set from 13 grape varieties (seven red and six white), different from the two used in the estimation step. The results showed a high coefficient of determination (R2: 0.59–0.89), low Root Mean Square Error (RMSE: 3–7 days), and low Mean Absolute Deviation (MAD: 2–6 days) between predicted and observed values. Overall, the UniFORC model performed slightly better than the two GDD models, presenting a higher R2 (0.75) and lower RMSE (4.55 days) and MAD (3.60 days). The developed phenological models presented good accuracy when applied to several varieties in different regions and can be used as a tool for predicting flowering date in Portugal.
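For readers unfamiliar with the Spring Warming approach, the core GDD computation is a simple running sum; the sketch below uses an illustrative base temperature and heat requirement (not the fitted parameters of the paper) and predicts flowering on the first day the accumulated heat units reach the threshold.

```python
def predict_flowering_day(tmin, tmax, t_base=10.0, f_star=350.0):
    """Classic GDD (Spring Warming) model: accumulate daily heat units
    max(0, mean_temp - t_base) from day 0 (the chosen t0 date) and return
    the first day the cumulative sum reaches f_star.
    t_base and f_star are illustrative values, not fitted ones."""
    gdd = 0.0
    for day, (lo, hi) in enumerate(zip(tmin, tmax)):
        gdd += max(0.0, (lo + hi) / 2.0 - t_base)
        if gdd >= f_star:
            return day
    return None  # threshold not reached within the series

# Example with a synthetic warming temperature series:
tmin = [6 + 0.1 * d for d in range(200)]
tmax = [14 + 0.1 * d for d in range(200)]
print(predict_flowering_day(tmin, tmax))
```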
Since the introduction of spatial grammars 45 years ago, numerous grammars have been developed in a variety of fields, from architecture to engineering design. Their benefits for solution-space exploration, when computationally implemented and combined with optimization, have been demonstrated. However, there has been limited adoption of spatial grammars in engineering applications for various reasons, one main reason being the missing automated, generalized link between the designs generated by the spatial grammar and their evaluation through finite-element analysis (FEA). The combination of spatial grammars with optimization and simulation has an advantage over continuous structural topology optimization in that explicit constraints, for example for modeling style and fabrication processes, can be included in the spatial grammar. This paper discusses the challenges in providing a generalized approach by demonstrating the implementation of a framework that combines a three-dimensional spatial grammar interpreter with automated FEA and stochastic optimization using simulated annealing (SA). Guidelines are provided for users to design spatial grammars in conjunction with FEA and to integrate the automatic application of boundary conditions. A simulated annealing method for use with spatial grammars is also presented, including a new method for selecting rules through a neighborhood definition. To demonstrate the benefits of the framework, it is applied to the automated design and optimization of spokes for inline skate wheels. This example highlights the advantage of spatial grammars for modeling style and additive manufacturing (AM) constraints within the generative system, combined with FEA and optimization, to carry out topology and shape optimization. The results verify that the framework can generate structurally optimized designs within the style and AM constraints defined in the spatial grammar and produce a set of topologically diverse, yet valid, design solutions.
Hub location problems involve locating hub facilities and designing hub networks so as to minimize the total cost of transportation between hubs (as a function of distance), facility establishment, and demand management. In this paper, we consider the capacitated cluster hub location problem because of its wide range of applications in real-world settings, especially in transportation and telecommunication networks. A mathematical model is presented to address this problem under capacity constraints imposed on hubs and transportation lines. Then, a new hybrid algorithm based on simulated annealing and ant colony optimization is proposed to solve the problem. Finally, computational experiments demonstrate that the proposed heuristic algorithm is both effective and efficient.
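A minimal sketch of the annealing half of such a hybrid (the ant colony component is omitted for brevity, and the instance below is invented) assigns nodes to capacitated hubs and anneals over single-node reassignments with a penalty for capacity violations.

```python
import math
import random

# Toy single-assignment instance: cost[i][h] is the cost of routing
# node i through hub h; each hub can serve at most CAP nodes.
rng = random.Random(0)
N_NODES, N_HUBS, CAP = 12, 3, 5
cost = [[rng.uniform(1, 10) for _ in range(N_HUBS)] for _ in range(N_NODES)]

def total_cost(assign):
    c = sum(cost[i][assign[i]] for i in range(N_NODES))
    for h in range(N_HUBS):            # penalize hub-capacity violations
        over = sum(1 for a in assign if a == h) - CAP
        c += 100 * max(0, over)
    return c

def anneal_assignment(iters=20000, t0=5.0, alpha=0.9995):
    assign = [rng.randrange(N_HUBS) for _ in range(N_NODES)]
    cur, T = total_cost(assign), t0
    for _ in range(iters):
        i = rng.randrange(N_NODES)     # reassign one node to a random hub
        h_old, assign[i] = assign[i], rng.randrange(N_HUBS)
        new = total_cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            cur = new
        else:
            assign[i] = h_old          # undo the rejected move
        T *= alpha
    return assign, cur

print(anneal_assignment())
```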
Detailed tephrochronologies are built to underpin probabilistic volcanic hazard forecasting and to understand the dynamics and history of diverse geomorphic, climatic, soil-forming, and environmental processes. Complicating factors include highly variable tephra distribution over time; difficulty in correlating tephras from site to site based on physical and chemical properties; and uncertain age determinations. Multiple sites permit construction of more accurate composite tephra records, but correctly merging individual site records by recognizing common events and site-specific gaps is complex. We present an automated procedure for matching tephra sequences between multiple deposition sites using stochastic local optimization techniques. If individual tephra age determinations are not significantly different between sites, they are matched and a more precise age is assigned. Known stratigraphy and mineralogical or geochemical compositions are used to constrain tephra matches. We apply this method to match tephra records from five long sediment cores (≤ 75 cal ka BP) in Auckland, New Zealand. Sediments at these sites preserve basaltic tephras from local eruptions of the Auckland Volcanic Field as well as distal rhyolitic and andesitic tephras from the Okataina, Taupo, Egmont, Tongariro, and Tuhua (Mayor Island) volcanic centers. The newly compiled correlated record is statistically more likely than previously published arrangements from this area.
A method for designing efficient sampling schemes for reconnaissance surveys of contaminated bed sediments in water courses is presented. The method can be used in networks of water courses, for instance to estimate the total volume of bed sediment of a defined quality class. The water courses must be digitised as arcs in a Geographical Information System.
The method comprises six steps: (1) stratifying the water courses; (2) choosing a variogram; (3) calculating the parameters of the variance model; (4) choosing a compositing scheme; (5) choosing the values for the cost-model parameters; and (6) optimising the sampling scheme. The method is demonstrated with a survey of the main water courses in the reclaimed areas of Oostelijk Flevoland and Zuidelijk Flevoland.
Let M be a complete Riemannian manifold, $N \in \mathbb{N}$ and $p \ge 1$. We prove that, for almost every $x = (x_1,\dots,x_N) \in M^N$ with respect to the Lebesgue measure on $M^N$, the measure $\mu(x)=\frac{1}{N}\sum_{k=1}^N\delta_{x_k}$ has a unique p-mean $e_p(x)$. As a consequence, if $X = (X_1,\dots,X_N)$ is an $M^N$-valued random variable with absolutely continuous law, then almost surely $\mu(X(\omega))$ has a unique p-mean. In particular, if $(X_n)_{n \ge 1}$ is an independent sample of an absolutely continuous law on M, then the process $e_{p,n}(\omega) = e_p(X_1(\omega),\dots,X_n(\omega))$ is well defined. Assume M is compact and consider a probability measure $\nu$ on M. Using partial simulated annealing, we define a continuous semimartingale which converges in probability to the set of minimizers of the integral of distance to the power p with respect to $\nu$. When this set is a singleton, the semimartingale converges to the p-mean.
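As a concrete and deliberately simplified illustration, the sketch below computes a p-mean in the Euclidean case, where distances are straight lines; the Riemannian setting of the paper would require geodesic distances and exponential maps, and the annealing component is omitted here.

```python
import numpy as np

def p_mean(points, p=1.5, steps=2000, lr=0.05):
    """Approximate argmin_x (1/N) * sum_k |x - x_k|^p in R^d by gradient
    descent: a Euclidean stand-in for the manifold p-mean e_p(x).
    p=2 gives the ordinary mean; p near 1 approaches the geometric median."""
    x = points.mean(axis=0)  # start from the arithmetic mean
    for _ in range(steps):
        diff = x - points                            # shape (N, d)
        dist = np.linalg.norm(diff, axis=1) + 1e-12  # avoid divide-by-zero
        grad = (p * dist ** (p - 2))[:, None] * diff
        x -= lr * grad.mean(axis=0)
    return x

pts = np.random.default_rng(0).standard_normal((100, 2))
print(p_mean(pts, p=2), pts.mean(axis=0))  # p=2 should match the mean
```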
Discrete-continuous project scheduling problems with positive discounted cash flows and the maximization of the NPV are considered. We deal with a class of these problems with an arbitrary number of discrete resources and one continuous, renewable resource. Activities are nonpreemptable, and the processing rate of an activity is a continuous, increasing function of the amount of the continuous resource allotted to the activity at a time. Three common payment models – Lump Sum Payment, Payments at Activity Completion times, and Payments in Equal Time Intervals – are analyzed. Formulations of mathematical programming problems for an optimal continuous resource allocation for each payment model are presented. Applications of two local search metaheuristics – Tabu Search and Simulated Annealing – are proposed. The algorithms are compared on the basis of computational experiments. Some conclusions and directions for future research are pointed out.
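To make the three payment models concrete, the sketch below compares the NPV of the same total payment under Lump Sum Payment, Payments at Activity Completion times, and Payments in Equal Time Intervals, using continuous discounting; the amounts, times, and rate are invented for illustration and do not follow the paper's formulations.

```python
import math

def npv(payments, rate=0.01):
    """Continuous discounting: (time, amount) is worth amount * e^(-rate*t)."""
    return sum(a * math.exp(-rate * t) for t, a in payments)

total, makespan = 100.0, 20.0
completion_times = [5.0, 12.0, 20.0]  # hypothetical activity completions

lump_sum = [(makespan, total)]
at_completion = [(t, total / len(completion_times)) for t in completion_times]
equal_intervals = [(t, total / 4) for t in (5.0, 10.0, 15.0, 20.0)]

for name, pay in [("lump sum", lump_sum),
                  ("at completion", at_completion),
                  ("equal intervals", equal_intervals)]:
    print(f"{name:15s} NPV = {npv(pay):.2f}")
```

Earlier payments are discounted less, so models that pay before the makespan yield a higher NPV at the same total, which is what couples the payment model to the scheduling decisions.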
Previous work studying interstellar exploration by one or several probes has focused primarily either on engineering models for a spacecraft targeting a single star system, or on large-scale simulations to ascertain the time required for a civilization to completely explore the Milky Way Galaxy. In this paper, a simulated annealing algorithm is used to numerically model the exploration of the local interstellar neighbourhood (within roughly ten parsecs of the Solar System) by a fixed number of probes launched from the Solar System; these simulations use the observed masses, positions, and spectral classes of the targeted stars. Each probe visits a pre-determined list of target systems, maintains a constant cruise speed, and changes direction only by gravitational deflection at each target. From these simulations, we examine how varying design choices – the maximum cruise speed, the number of probes launched, the number of target stars to be explored, and the probability of avoiding catastrophic system failure per parsec – changes the completion time of the exploration programme and the expected number of stars successfully visited. In addition, it is shown that improving the success probability per parsec has diminishing returns beyond a certain point. Future improvements to the model and possible implications are discussed.
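A stripped-down version of the search problem can be sketched as follows: partition a set of target stars among a fixed number of probes and use simulated annealing to minimize the programme completion time. The positions, cruise speed, and move set below are invented for illustration; the paper's model additionally uses real stellar catalogues, gravitational deflection at each target, and a per-parsec failure probability.

```python
import math
import random

# Hypothetical star positions (parsecs, heliocentric).
rng = random.Random(0)
STARS = [(rng.uniform(-10, 10), rng.uniform(-10, 10), rng.uniform(-10, 10))
         for _ in range(30)]
N_PROBES, SPEED = 4, 0.1  # cruise speed in pc/year, an invented figure

def completion_time(routes):
    """The programme ends when the slowest probe finishes its route."""
    worst = 0.0
    for route in routes:
        pos, t = (0.0, 0.0, 0.0), 0.0  # every probe launches from the Sun
        for i in route:
            t += math.dist(pos, STARS[i]) / SPEED
            pos = STARS[i]
        worst = max(worst, t)
    return worst

def anneal_routes(iters=30000, t0=500.0, alpha=0.9997):
    routes = [list(range(len(STARS)))[k::N_PROBES] for k in range(N_PROBES)]
    cur, T = completion_time(routes), t0
    for _ in range(iters):
        # Move one random target to a random position on a random probe.
        src = rng.choice([r for r in routes if r])
        i = rng.randrange(len(src))
        star = src.pop(i)
        dst = rng.choice(routes)
        j = rng.randrange(len(dst) + 1)
        dst.insert(j, star)
        new = completion_time(routes)
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            cur = new
        else:
            dst.pop(j)             # exact undo, even when src is dst
            src.insert(i, star)
        T *= alpha
    return cur

print(f"estimated completion time: {anneal_routes():.0f} years")
```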
This paper presents a new order-selection technique for the matrix memory polynomial model, which captures the nonlinearities of single-branch and multi-branch transmitters. The new criteria take into account the complexity of the model in addition to its mean-square error. The quasi-convexity of the proposed criteria is proven in this work. Using the proposed Akaike information criterion (AIC) and Bayesian information criterion (BIC) criteria, model order selection is cast as a cost-minimization problem. To minimize the criteria, modified gradient descent and simulated annealing algorithms were utilized, resulting in a considerable reduction in the number of search iterations. The performance of the criteria is shown by comparing the normalized mean square error (NMSE) of a higher-order model and the optimum model: the NMSE difference is <0.5 dB, while the complexity is much smaller.
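The generic shape of criterion-based order selection is easy to sketch. The example below fits a toy polynomial model (not the matrix memory polynomial of the paper) at increasing orders, computes the standard Gaussian-likelihood forms of AIC and BIC from the residual error and parameter count, and picks the minimizer; the paper replaces this kind of exhaustive scan with modified gradient descent and simulated annealing to cut the number of search iterations.

```python
import numpy as np

def aic_bic(n, mse, k):
    """Standard Gaussian-likelihood forms of the criteria."""
    aic = n * np.log(mse) + 2 * k
    bic = n * np.log(mse) + k * np.log(n)
    return aic, bic

# Toy data generated by a cubic model plus noise (true order = 3).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1 + 2 * x - 3 * x**3 + 0.05 * rng.standard_normal(200)

best = None
for order in range(1, 10):
    coeffs = np.polyfit(x, y, order)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    aic, _ = aic_bic(len(x), mse, order + 1)  # k = order + 1 parameters
    if best is None or aic < best[0]:
        best = (aic, order)
print("selected order:", best[1])
```

The complexity terms (2k for AIC, k·log n for BIC) are what penalize over-fitting: past the true order, the drop in MSE no longer pays for the extra parameters, so the criterion turns upward.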
This paper provides a general framework, based on statistical design and Simulated Annealing (SA) optimization techniques, for the development, analysis, and performance evaluation of forthcoming snake robot designs. A planar wheeled snake robot is considered, and the effect of its key design parameters on its performance while moving in serpentine locomotion is investigated. The goal is to minimize energy consumption and maximize distance traveled. Key kinematic and dynamic parameters, as well as their corresponding ranges of values, are identified. Derived dynamic and kinematic equations of an n-link snake robot are used to perform the simulation. Experimental design methodology is used for design characterization, with data collected as per a full factorial design. For both energy consumption and distance traveled, logarithmic, linear, and curvilinear regression models are generated and the best models are selected. Using analysis of variance (ANOVA), the effects of the parameters on robot performance are determined. Next, using SA, the optimum parameter levels of robots with different numbers of links are determined so as to minimize energy consumption and maximize distance traveled; both single- and multi-criteria objectives are considered. Webots and Matlab SimMechanics software are used to validate the theoretical results. For the mathematical model and the selected range of values considered, the results indicate that the proposed approach is quite effective and efficient in optimizing robot performance. This research extends present knowledge in this field by identifying additional parameters that have a significant effect on snake robot performance.
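As a hedged illustration of the statistical-design step, the sketch below builds a full factorial design over two invented gait factors, evaluates a stand-in "simulation", and fits a linear regression model with an interaction term; the paper's actual factors, simulation, and regression forms differ.

```python
import itertools
import numpy as np

# Two invented design factors for the serpentine gait, each at 3 levels;
# the paper's actual parameter set and ranges differ.
levels = {"amplitude": [0.2, 0.4, 0.6], "frequency": [0.5, 1.0, 1.5]}

def simulate(amplitude, frequency):
    """Stand-in for the dynamic simulation: returns (energy, distance)."""
    energy = 2.0 * amplitude**2 * frequency + 0.1
    distance = amplitude * frequency * (1.5 - 0.5 * amplitude)
    return energy, distance

# Full factorial design: every combination of factor levels.
design = list(itertools.product(*levels.values()))
X = np.array([[1.0, a, f, a * f] for a, f in design])  # intercept + interaction
E = np.array([simulate(a, f)[0] for a, f in design])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
print("fitted energy-model coefficients:", np.round(coef, 3))
```

A fitted surrogate like this is what an SA search can then optimize over the factor ranges, in place of re-running the full simulation at every candidate point.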
Structural characterization from powder diffraction of compounds that do not contain isolated molecules but rather three-dimensional infinite structures (alloys, intermetallics, framework compounds, extended solids) by direct-space methods has improved greatly in the last 15 years. The success of the method depends very much on proper modeling of the structure from building blocks. Modeling from larger building blocks improves the convergence of the global optimization algorithm by a factor of up to 10. However, care must be taken regarding the correctness of the building block, such as its rigidity, deformation, bonding distances, and ligand identity. Dynamical occupancy correction, implemented in the direct-space program FOX, has been shown to be useful when merging excess atoms, and even larger building blocks like coordination polyhedra. It also allows joining smaller blocks into larger ones in cases where the connectivity was not a priori evident from the structural model. We show in several examples of nonmolecular structures the effect of modeling with correct structural units.
The crystal structure of the mineral strontiodresserite, (Sr,Ca)Al2(CO3)2(OH)4⋅H2O, from the Francon Quarry, Montreal, Quebec, Canada, has been solved from laboratory powder diffraction data using a combination of charge-flipping and simulated annealing methods. The structure is orthorhombic in space group Pnma with a=16.0990(7), b=5.6133(3), and c=9.1804(4) Å (Z=4) and the framework of the mineral is isostructural with that of dundasite. The strontium has a coordination number of 9 and the carbonate anions form a bridge between the SrO9 polyhedra and AlO6 octahedra. The water molecule lies in a channel that runs parallel to the b axis. An ordered network of hydrogen atoms could be uniquely determined from crystal-chemical principles in the channels of strontiodresserite. Ab initio density functional theory (DFT) energy minimization of the whole structure gave results in full agreement with X-ray refinement results for nonhydrogen atoms. The stability of this model (as well as that of the corresponding model of dundasite) in the proposed Pnma space group was tested by DFT optimization in space group P1 of random small distortions of this structure. This test confirms that both minerals are isostructural, including their hydrogen-bond networks.
The structure of a high-pressure polymorph of glycine (the β′-polymorph, formed reversibly at 0.8 GPa from the β-polymorph) was determined from high-resolution X-ray powder diffraction data collected in situ in a diamond anvil cell at nine pressure points up to 2.6 GPa. The X-ray powder diffraction study gave a structural model of at least the same quality as that obtained from a single-crystal diffraction experiment. The difference between the powder-diffraction and single-crystal models is related to the orientation of the NH3 tails and the structure of the hydrogen-bond network. The phase transition between the β- and β′-polymorphs is reversible and preserves a single crystal intact. No transformations were observed between the β-, α-, and β′-polymorphs on compression and decompression, although the α- and β′-polymorphs belong to the same space group (P21/c). The instability of the β- and γ-forms with pressure can be predicted easily when considering the densities of their structures versus pressure. The direction of the transformation (i.e., which of the high-pressure polymorphs is formed) is determined by structural filiation between the parent and the high-pressure phases because of the kinetic control of the transformations.
A new way of incorporating powder diffraction data into a cost function to predict the crystalline structure of inorganic solids is proposed. This approach was applied to the following series of compounds: cubic SrTiO3, tetragonal NaNbO3, TiO2 (anatase), tetragonal CaTiO3, and hexagonal BaTiO3. A tremendous increase in the efficiency of obtaining the correct structure is achieved when a cost function based upon this new approach is applied to these problems.