Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- Notation
- 1 Introduction
- 2 Statistical physics and phase transitions
- 3 The satisfiability problem
- 4 Constraint satisfaction problems
- 5 Machine learning
- 6 Searching the hypothesis space
- 7 Statistical physics and machine learning
- 8 Learning, SAT, and CSP
- 9 Phase transition in FOL covering test
- 10 Phase transitions and relational learning
- 11 Phase transitions in grammatical inference
- 12 Phase transitions in complex systems
- 13 Phase transitions in natural systems
- 14 Discussion and open issues
- Appendix A Phase transitions detected in two real cases
- Appendix B An intriguing idea
- References
- Index
7 - Statistical physics and machine learning
Published online by Cambridge University Press: 05 August 2012
Summary
The study of the emergence of phase transitions, or, more generally, the application of statistical physics methods to automated learning, is not new. For at least two decades, ensemble phenomena have been observed in artificial neural networks. In the first part of this chapter we illustrate these early results and then move on to issues under current investigation.
According to Watkin et al. (1993), statistical physics tools are not only well suited to analyzing existing learning algorithms but may also suggest new approaches. In the paradigm of learning from examples (the paradigm considered in this book), examples are drawn from some unknown but fixed probability distribution and, once chosen, constitute a static quenched disorder (Watkin et al., 1993).
Artificial neural networks
Artificial neural networks (NNs) are graphs consisting of a set of nodes that correspond to elementary computational units, called "neurons", connected via links that recall "axons" and "synapses". Even though the terminology is borrowed from neurology, the analogy between an NN and the brain can safely be ignored. The idea behind computation in an NN is that inputs from the external world activate a subset of the neurons (the input neurons), which, in turn, process the input and transmit the results of this local computation to other neurons, until some subset of neurons (the output neurons) delivers the final results to the external world.
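This flow of activation from input neurons to output neurons can be sketched as a minimal feedforward pass. The network below is purely illustrative (the layer sizes, weights, and sigmoid activation are assumptions, not taken from the text): each neuron computes a weighted sum of the activations it receives and passes the result, through a nonlinearity, to the next layer.

```python
import math


def sigmoid(z):
    # A common neuron activation: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))


def forward(x, layers):
    """Propagate an input vector through a list of (weights, biases) layers.

    Each neuron takes a weighted sum of the previous layer's outputs,
    adds its bias, and applies the activation before passing the value on.
    """
    for weights, biases in layers:
        x = [
            sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)
        ]
    return x


# Toy network (illustrative values): 2 input neurons -> 2 hidden -> 1 output.
layers = [
    ([[0.5, -0.6], [0.3, 0.8]], [0.1, -0.2]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]

output = forward([1.0, 0.0], layers)
print(output)  # a single activation value in (0, 1)
```

The point of the sketch is only the direction of computation described above: external inputs enter at the first layer, intermediate neurons perform local computations, and the last layer's activations are the network's answer to the outside world.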
Phase Transitions in Machine Learning, pp. 140-167. Publisher: Cambridge University Press. Print publication year: 2011.