The emphasis in the previous part of the book was on mathematical tools that permit the classification of symbolic signals in a general and accurate way by means of a few numbers or functions. The study of complexity, however, cannot be restricted to the evaluation of indicators alone, since each of them presupposes a specific model which may or may not fit the system. For example, power spectra hint at a superposition of periodic components, fractal dimensions at a self-similar concentration of the measure, and the thermodynamic approach at extensive Hamiltonian functions. It therefore seems necessary to seek procedures for the identification of appropriate models before discussing definite complexity measures. Stated in such a general form, the project would be far too ambitious (tantamount to finding “meta-rules” that select physical theories). The symbolic encoding illustrated in the previous chapters provides a homogeneous environment which makes the general modelling question more amenable to an effective formalization.
Independent efforts in the study of neuron nets, compiler programs, mathematical logic, and natural languages have led to the construction of finite discrete models (automata) which produce symbolic sequences with different levels of complexity by performing elementary operations. The corresponding levels of computational power are well expressed by the Chomsky hierarchy, a list of four basic families of automata culminating in the celebrated Turing machine.
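To make the lowest level of the Chomsky hierarchy concrete, the following sketch simulates a deterministic finite automaton, the simplest of the four families of machines mentioned above. The particular states, alphabet, and transition table are illustrative choices, not taken from the text; languages such as a^n b^n, by contrast, already require a machine with memory (a pushdown automaton) and lie one level higher in the hierarchy.

```python
# Minimal sketch of a deterministic finite automaton (DFA), the lowest
# level of the Chomsky hierarchy. The example language (binary strings
# with an even number of 1s) is an illustrative choice.

def run_dfa(transitions, start, accepting, word):
    """Return True if the DFA accepts the symbolic sequence `word`."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Transition table: the state records the parity of the 1s seen so far.
T = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

print(run_dfa(T, "even", {"even"}, "1001"))   # two 1s: accepted
print(run_dfa(T, "even", {"even"}, "10110"))  # three 1s: rejected
```

The elementary operation performed at each step (a single table lookup) is what makes this class of models computationally weak yet fully analysable.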
The core of the problem when dealing with a complex system is the difficulty in discerning elements of order in its structure. If the object of the investigation is a symbolic pattern, one usually examines finite samples of it. The extent to which these can be considered regular, however, depends both on the observer's demand and on their size. If strict periodicity is required, this might possibly be observed only in very small patches. A weaker notion of regularity permits the identification of larger “elementary” domains. This intrinsic indefiniteness, shared alike by concepts such as order and organization, seems to prevent us from attaining a definition of complexity altogether. This impasse can be overcome by noticing that the discovery of the inner rules of the system gives a clue as to how its description can be shortened. Intuitively, systems admitting a concise description are simple. More precisely, one tries to infer a model which constitutes a compressed representation of the system. The model can then be used to reproduce already observed patterns, as a verification, or even to “extend” the whole object beyond its original boundaries, thus making a prediction about its possible continuation in space or time.
As we shall see, a crucial distinction must be made at this point.
In this chapter, we present some of the most frequently quoted examples of “complex” behaviour observed in nature. Far from proposing a global explanation of such disparate systems within a unique theoretical framework, we select those common properties that do cast light on the ways in which complexity exhibits itself.
Natural macroscopic systems are usually characterized by intensive parameters (e.g., temperature T or pressure P) and extensive ones (volume V, number of particles N) which are taken into account by suitable thermodynamic functions, such as the energy E or the entropy S. When the only interaction of a system with its surroundings consists of a heat exchange with a thermal bath, an equilibrium state eventually results: the macroscopic variables become essentially time independent, since fluctuations undergo exponential relaxation. The equilibrium state corresponds to the minimum of the free energy F = E − TS and is determined by the interplay between the order induced by the interactions, described by E, and the disorder arising from the multiplicity of different macroscopic states with the same energy, accounted for by the entropy S.
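The interplay between energetic order and entropic disorder in the free energy F = E − TS can be checked numerically on the simplest possible case: a two-level system with level energies 0 and ε (units with Boltzmann constant k = 1). The minimum of F over the upper-level occupation probability p should reproduce the Boltzmann weight p* = 1/(1 + exp(ε/T)); the grid scan below is an illustrative sketch, not taken from the text.

```python
import math

# Sketch: minimizing F(p) = E - T*S for a two-level system with level
# energies 0 and eps (k = 1) recovers the Boltzmann occupation probability.

def free_energy(p, eps, T):
    E = p * eps                                          # mean energy
    S = -(p * math.log(p) + (1 - p) * math.log(1 - p))   # mixing entropy
    return E - T * S

eps, T = 1.0, 0.5
# Scan p on a fine grid (endpoints excluded, where S has log singularities)
# and locate the minimum of the free energy.
grid = [i / 10000 for i in range(1, 10000)]
p_min = min(grid, key=lambda p: free_energy(p, eps, T))
p_boltzmann = 1 / (1 + math.exp(eps / T))
print(p_min, p_boltzmann)   # the two values agree to grid accuracy
```

Lowering T shifts the minimum towards p = 0 (order wins); raising T pushes it towards p = 1/2, the state of maximal entropy, exactly as the E versus TS competition suggests.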
The commonest case, however, is that of systems open to interactions with the environment, which usually take the form of a source of energy and a sink where it is dissipated.
The term “complex” is being used more and more frequently in science, often in a vague sense akin to “complication”, and attached to any problem to which standard, well-established methods of mathematical analysis cannot be immediately applied. The spontaneous, legitimate reaction of the careful investigator to this attitude can be summarized by two questions: “Why study complexity?” and “What is complexity?”.
In the first part of the book, we have illustrated several examples from various disciplines in which complexity purportedly arises, trying, on the one hand, to exclude phenomena which do not really call for new concepts or mathematical tools and, on the other, to find common features in the remaining cases which could be of guidance for a sound and sufficiently general formulation of the problem. While amply answering the former question, the observed variety of apparently complex behaviour renders the task of formalizing complexity, i.e., of answering the latter question, quite hard. This is the subject of the main body of the book.
Aware of the difficulty of developing a formalism which is powerful enough to yield meaningful answers in all cases of interest, we have presented a critical comparison among various approaches, with the help of selected examples, stressing their complementarity.
The scientific basis of the discussion about complexity is first set out in general terms, with emphasis on the physical motivation for research on this topic. The genesis of the “classical” notion of complexity, born in the context of early computer science, is then briefly reviewed with reference to the physical point of view. Finally, different methodological questions arising in the practical realization of effective complexity indicators are illustrated.
Statement of the problem
The success of modern science is the success of the experimental method. Measurements have reached an extreme accuracy and reproducibility, especially in some fields, thanks to the possibility of conducting experiments under well controlled conditions. Accordingly, the inferred physical laws have been designed so as to yield unambiguous predictions. Whenever substantial disagreement is found between theory and experiment, this is attributed either to unforeseen external forces or to an incomplete knowledge of the state of the system. In the latter case, the procedure so far has followed a reductionist approach: the system has been observed with increased resolution in the search for its “elementary” constituents. Matter has been split into molecules, atoms, nucleons, quarks, thus reducing reality to the assembly of a huge number of bricks, mediated by only three fundamental forces: the nuclear, electro-weak and gravitational interactions.
The intuitive notion of complexity is well expressed by the usual dictionary definition: “a complex object is an arrangement of parts, so intricate as to be hard to understand or deal with” (Webster, 1986). A scientist, when confronted with a complex problem, feels a sensation of distress that is often not attributable to a definite cause: it is commonly associated with the inability to discriminate the fundamental constituents of the system or to describe their interrelations in a concise way. The behaviour is so involved that any specifically designed finite model eventually departs from the observation, either when time proceeds or when the spatial resolution is sharpened. This elusiveness is the main hindrance to the formulation of a “theory of complexity”, in spite of the generality of the phenomenon.
The problem of characterizing complexity in a quantitative way is a vast and rapidly developing subject. Although various interpretations of the term have been advanced in different disciplines, no comprehensive discussion has yet been attempted. The fields in which efforts were originally concentrated are automata theory, information theory, and computer science. More recently, research on this topic has received considerable impetus in the physics community, especially in connection with the study of phase transitions and chaotic dynamics.