An important class of NP-complete problems is that of constraint satisfaction problems (CSPs), which have been widely investigated and in which a phase transition has been found to occur (Williams and Hogg, 1994; Smith and Dyer, 1996; Prosser, 1996). Constraint satisfaction problems are the analogue of SAT problems in first-order logic; indeed, any finite CSP instance can be transformed automatically into a SAT problem, as will be described in Section 8.4.
Formally, a finite CSP is a triple (X, R, D). Here X = {xi | 1 ≤ i ≤ n} is a set of variables, R = {Rh | 1 ≤ h ≤ m} is a set of relations, each defining a constraint on a subset of the variables in X, and D = {Di | 1 ≤ i ≤ n} is a set of variable domains such that each variable xi takes values only in Di, whose cardinality |Di| is denoted di. The constraint satisfaction problem consists in finding an assignment in Di for each variable xi ∈ X that satisfies all the relations in R.
In principle a relation Rh may involve any subset of X, up to X itself. Nevertheless, most authors restrict their investigation to binary constraints, defined as relations over exactly two variables. This restriction does not affect the generality of the results obtained, because any relation of arity greater than two can always be transformed into an equivalent conjunction of binary relations.
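The definition above can be made concrete with a small sketch. The text gives no code, so the representation and the backtracking solver below are purely illustrative: variables X are strings, domains D are lists, and each binary relation in R is a predicate attached to a pair of variables. The example instance (three variables that must take pairwise different values) is a tiny graph-colouring problem, a classic binary CSP.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search: extend a partial assignment one variable at a
    time, keeping only values consistent with every binary constraint
    whose two variables are already assigned."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned
    x = next(v for v in variables if v not in assignment)
    for value in domains[x]:
        assignment[x] = value
        if all(check(assignment[u], assignment[v])
               for (u, v), check in constraints.items()
               if u in assignment and v in assignment):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[x]
    return None                                # no consistent assignment


# Illustrative instance: x1, x2, x3 over domain {0, 1, 2}, with one
# binary "not equal" constraint on every pair of variables.
X = ["x1", "x2", "x3"]
D = {v: [0, 1, 2] for v in X}
R = {(u, v): (lambda a, b: a != b) for u in X for v in X if u < v}

print(solve(X, D, R))   # → {'x1': 0, 'x2': 1, 'x3': 2}
```

Note that the solver only ever evaluates constraints over pairs of variables, which is exactly the binary restriction discussed above; a higher-arity relation would first have to be decomposed into such pairs.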
Learning involves vital functions at different levels of consciousness, ranging from the recognition of sensory stimuli to the acquisition of the complex notions needed for sophisticated abstract reasoning. Even though learning escapes precise definition, there is general agreement on Langley's idea (Langley, 1986) of learning as a set of “mechanisms through which intelligent agents improve their behavior over time”, which seems reasonable once a sufficiently broad view of “agent” is taken. Machine learning has its roots in several disciplines, notably statistics, pattern recognition, the cognitive sciences, and control theory. Its main goal is to help humans construct programs that cannot be built manually and programs that learn from experience. Another goal of machine learning is to provide computational models of human learning, thus supporting cognitive studies of learning.
Classification
Among the large variety of tasks that constitute the body of machine learning, one has received attention from the beginning: acquiring knowledge for performing classification. From this perspective, machine learning can be described roughly as the process of discovering regularities in a set of available data and extrapolating these regularities to new data.
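This process of extrapolating regularities from labelled data can be illustrated with a minimal sketch. The 1-nearest-neighbour rule used below is just one concrete instance chosen for brevity; it is not a method singled out by the text, and the data are invented for illustration.

```python
def nearest_neighbour(train, query):
    """Classify a query point with the label of its closest training
    point (squared Euclidean distance): the 'regularity' extracted from
    the data is simply proximity of similar examples."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    point, label = min(train, key=lambda pl: dist2(pl[0], query))
    return label


# Labelled examples: two clusters on a line.
data = [((0.0,), "low"), ((0.2,), "low"), ((1.0,), "high"), ((1.1,), "high")]

print(nearest_neighbour(data, (0.1,)))   # → low
print(nearest_neighbour(data, (0.9,)))   # → high
```

New points are classified by extrapolation from the stored examples, with no explicit rule ever written down, which is the essence of the classification task described above.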
Machine learning as an algorithm
Over the years, machine learning has been understood in different ways. At first it was considered mainly as an algorithmic process. One of the first approaches to automated learning was proposed by Gold in his “learning in the limit” paradigm (Gold, 1967).