Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- Notation
- 1 Introduction
- 2 Statistical physics and phase transitions
- 3 The satisfiability problem
- 4 Constraint satisfaction problems
- 5 Machine learning
- 6 Searching the hypothesis space
- 7 Statistical physics and machine learning
- 8 Learning, SAT, and CSP
- 9 Phase transition in FOL covering test
- 10 Phase transitions and relational learning
- 11 Phase transitions in grammatical inference
- 12 Phase transitions in complex systems
- 13 Phase transitions in natural systems
- 14 Discussion and open issues
- Appendix A Phase transitions detected in two real cases
- Appendix B An intriguing idea
- References
- Index
Preface
Published online by Cambridge University Press: 05 August 2012
Summary
From its inception in the 1930s, the rich and vigorous field of computer science has been concerned with the resources, in both time and memory, needed to carry out a computation. A number of fundamental theorems were established that relied on worst-case analysis. The central question was whether a given algorithm could be guaranteed to terminate in finite time whatever the inputs and, if so, in which complexity class it lay as a function of the control parameters: polynomial, exponential, and so on. Against this backdrop, a 1991 paper by Cheeseman, Kanefsky, and Taylor came as a bolt from the blue. While its title, "Where the really hard problems are", was not altogether disturbing, its content was. Broadly speaking, the authors argued that even if it was important to analyze worst cases, it was just as essential to look for the typical complexity of computations, that is, the complexity encountered when solving typical problems. And there lies a gem: the transition from the region of problems that are hard, in terms of algorithmic complexity, to the region of problems that are easy can be quite sharp. Moreover, these regions and transitions are unrelated to the worst cases.
We remember that this 1991 paper, presented at the International Joint Conference on Artificial Intelligence (IJCAI), started a commotion, though how profound this would be was not at first apparent.
- Phase Transitions in Machine Learning, pp. ix–xii. Publisher: Cambridge University Press. Print publication year: 2011