From linear to nonlinear and stochastic dynamics
A dynamical system is characterized by its elements and the time-dependent development of their states. The states can refer to moving planets, molecules in a gas, the excitation of neurons in a neural net, the nutrition of populations in an ecological system, or products in a market system. The dynamics of a system, i.e. the change of system states over time, is mathematically described by differential equations. A conservative (Hamiltonian) system, e.g. an ideal pendulum, is characterized by the reversibility of the time direction and the conservation of energy. Dissipative systems, e.g. a real pendulum with friction, are irreversible.
In classical physics, the dynamics of a system is considered a continuous process. However, continuity is only a mathematical idealization. In practice, a scientist has single observations or measurements at discrete time points, which are chosen to be equidistant or are determined by the measurement devices. In discrete processes, there are finite differences between the measured states, not the infinitely small differences (differentials) assumed in a continuous process. Thus, discrete processes are mathematically described by difference equations.
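To make the contrast concrete, here is a minimal sketch (not from the text; the parameter values are arbitrary illustrative choices) that turns the differential equations of a damped pendulum into a difference equation by replacing the differentials with finite time steps:

```python
import math

# Damped pendulum: d(theta)/dt = omega, d(omega)/dt = -(g/L)*sin(theta) - c*omega.
# Replacing the differentials by finite differences (explicit Euler) turns the
# differential equations into a difference equation advancing in steps of dt.
g, L, c = 9.81, 1.0, 0.5     # gravity, pendulum length, friction (arbitrary values)
dt = 0.01                    # finite time step of the discrete process

theta, omega = 1.0, 0.0      # initial angle (rad) and angular velocity
for _ in range(2000):
    theta, omega = (theta + dt * omega,
                    omega + dt * (-(g / L) * math.sin(theta) - c * omega))

print(theta, omega)          # the dissipative system approaches the rest point (0, 0)
```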
Random events (e.g. Brownian motion in a fluid, mutation in evolution, innovations in an economy) are represented by additional fluctuation terms. Classical stochastic processes, e.g. the billions of unknown molecular states in a fluid, are defined by time-dependent differential equations for distribution functions of probabilistic states. In quantum systems of elementary particles, the dynamics of quantum states is defined by Schrödinger’s equation, with observables (e.g. position and momentum of a particle) subject to Heisenberg’s principle of uncertainty, which allows only probabilistic forecasts of future states.
Historically, during the centuries of classical physics, the universe was considered a deterministic and conservative system. The astronomer and mathematician P.S. Laplace (1814), for example, assumed the total computability and predictability of nature if all natural laws and initial states of celestial bodies are well known. The Laplacean spirit expressed the belief of philosophers in determinism and computability of the world during the 18th and 19th centuries.
Laplace was right about linear and conservative dynamical systems. In general, a linear relation means that the rate of change in a system is proportional to its cause: small changes cause small effects, while large changes cause large effects. Changes of a dynamical system can be modelled in one dimension by the changing values of a time-dependent quantity along the time axis (a time series). Mathematically, linear equations are completely computable. This is the deeper reason why Laplace’s philosophical assumption is right for linear and conservative systems.
In systems theory, the complete information about a dynamical system at a certain time is determined by its state at that time. In general, the state of a system is determined by more than two quantities, so a higher-dimensional phase space is needed to study its dynamics. From a methodological point of view, time series and phase spaces are important instruments for studying systems dynamics. The state space of a system contains the complete information about its past, present and future behaviour.
At the end of the 19th century, H. Poincaré (1892) discovered that celestial mechanics is not a completely computable clockwork, even if it is considered a deterministic and conservative system. The mutual gravitational interactions of more than two celestial bodies (the ‘many-body problem’) correspond to nonlinear and non-integrable equations with instabilities and irregularities. According to the Laplacean view, similar causes effectively determine similar effects; thus, in the phase space, trajectories that start close to each other should remain close to each other during time evolution. Dynamical systems with deterministic chaos, by contrast, exhibit an exponential dependence on initial conditions for bounded orbits: the separation of trajectories with close initial states increases exponentially.
Thus, tiny deviations in the initial data lead to exponentially increasing computational efforts for future data, limiting long-term predictions, even though the dynamics are, in principle, uniquely determined. This is known as the ‘butterfly effect’: small and local initial causes soon lead to unpredictable, large and global effects. According to the famous KAM theorem [1] of A.N. Kolmogorov (1954), V.I. Arnold (1963), and J.K. Moser (1967), trajectories in the phase space of classical mechanics are neither completely regular nor completely irregular, but depend sensitively on the chosen initial conditions.
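A standard textbook illustration of this sensitivity (our choice of example, not discussed in the text) is the logistic map in its chaotic regime; two trajectories starting a tiny distance apart separate roughly exponentially:

```python
# Sensitive dependence on initial conditions, illustrated with the logistic map
# x -> r*x*(1 - x) in its chaotic regime (r = 4). Two trajectories starting a
# tiny distance apart separate roughly exponentially until the error saturates.
r = 4.0
x, y = 0.4, 0.4 + 1e-10      # two initial states differing by 10^-10

for n in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {abs(x - y):.3e}")
```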
Dynamical systems can be classified on the basis of the effects of the dynamics on a region of the phase space. A conservative system is defined by the fact that, during time evolution, the volume of a region remains constant, although its shape may be transformed. In a dissipative system, dynamics causes a volume contraction.
An attractor is a region of a phase space into which all trajectories departing from an adjacent region, the so-called basin of attraction, tend to converge. There are different kinds of attractors. The simplest class of attractors contains the fixed points. In this case, all trajectories of adjacent regions converge to a point. An example is a dissipative harmonic oscillator with friction: the oscillating system is gradually slowed down by frictional forces and finally comes to rest at an equilibrium point.
Conservative harmonic oscillators without friction belong to the second class of attractors, the limit cycles, which can be classified as periodic or quasi-periodic. A periodic orbit is a closed trajectory into which all trajectories departing from an adjacent region converge. For a simple dynamical system with only two degrees of freedom and continuous time, the only possible attractors are fixed points or periodic limit cycles. An example is a Van der Pol oscillator modelling a simple vacuum-tube oscillator circuit. [2]
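The convergence to a limit cycle can be sketched numerically. The following minimal simulation (illustrative parameters of our own choosing) integrates the Van der Pol equations with Euler steps; quite different initial states end up on the same periodic orbit:

```python
# Van der Pol oscillator dx/dt = v, dv/dt = mu*(1 - x^2)*v - x (Euler steps).
# Trajectories from very different initial states converge to the same
# periodic limit cycle, the attractor of this dissipative system.
mu, dt = 1.0, 0.001
for x0, v0 in [(0.1, 0.0), (3.0, 0.0)]:
    x, v = x0, v0
    amplitude = 0.0
    for step in range(200_000):
        x, v = x + dt * v, v + dt * (mu * (1 - x * x) * v - x)
        if step > 150_000:   # measure the amplitude after transients have decayed
            amplitude = max(amplitude, abs(x))
    print(f"start ({x0}, {v0}): limit-cycle amplitude ~ {amplitude:.2f}")
```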
In continuous systems with a phase space of dimension n > 2, more complex attractors are possible. Dynamical systems with quasi-periodic limit cycles show a time evolution that can be decomposed into different periodic parts without a unique periodic regime. The corresponding time series consist of periodic parts of oscillation without a common structure. [3] Nevertheless, trajectories that start close to each other remain close to each other during time evolution. The third class contains dynamical systems with chaotic attractors, which are non-periodic and exhibit an exponential dependence on initial conditions for bounded orbits. A famous example is the chaotic attractor of a Lorenz system simulating the development of weather caused by local events, which cannot be forecast in the long run (the butterfly effect).
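The Lorenz system mentioned above can be simulated in a few lines. This sketch uses the classical parameter values; the step size and initial states are our own illustrative choices. Two nearby initial states remain on the bounded attractor while their separation grows rapidly:

```python
# Lorenz system with the classical parameters sigma = 10, rho = 28, beta = 8/3.
# Two nearby initial states stay on the same bounded ('strange') attractor,
# yet their separation grows rapidly: the butterfly effect.
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-8)
for step in range(1, 40_001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10_000 == 0:
        d = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {step / 1000:2.0f}: separation = {d:.3e}")
```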
Measurements are often contaminated by unwanted noise, which must be separated from the signals of specific interest. Furthermore, in order to forecast the behaviour of a system, the development of its future states must be reconstructed in a corresponding phase space from a finite sequence of measurements. Thus, time-series analysis [4, 5] is an immense challenge in different fields of research, ranging from climatic data in meteorology, ECG signals in cardiology, and EEG data in brain research to data in economics and finance. Beyond the patterns of dynamical attractors, the randomness of data must be classified by statistical distribution functions.
Typical phenomena of our world, such as weather, climate, the economy and daily life, are far too complex for a simple deterministic description to exist. Even if there is no doubt about the deterministic evolution of, for example, the atmosphere, the current state whose knowledge would be needed for a deterministic prediction contains too many variables to be measurable with sufficient accuracy. Hence, our knowledge does not usually suffice for a deterministic model. Instead, a stochastic approach is often more suitable. Ignoring the unobservable details of a system, we accept a lack of knowledge. Depending on the unobserved details, the observable part may evolve in different ways. However, if we assume a given probability distribution for the unobserved details, then the different evolutions of the observables also appear with specific probabilities. Thus, the lack of knowledge about the system prevents us from making deterministic predictions, but allows us to assign probabilities to the different possible future states. It is the task of time-series analysis to extract the necessary information from past data.
Dynamical models contain nonlinear feedback, and their solutions are usually obtained by numerical methods. Statistical models are data-driven and try to fit a given set of data using various distribution functions. There are also hybrids coupling dynamic and statistical aspects, including deterministic and stochastic elements. Simulations are often based on computer programs (e.g. cellular automata and network formalisms in Chapter 4), connecting input and output in nonlinear ways. In this case, models are calibrated by training the networks in order to minimize the error between the output and given test data.
In the simplest case of statistical distribution functions, a Gaussian distribution has exponential tails situated symmetrically to the far left and right of the peak value. Extreme events (e.g. disasters, pandemics, floods) occur in the tails of the probability distributions. Contrary to the Gaussian distribution, heavy-tailed probability functions p(x) with extreme fluctuations are mathematically characterized by power laws, e.g. p(x) ∼ x^−α with α > 0. Power laws possess scale invariance, corresponding to the (at least statistical) self-similarity of their time series of data. Mathematically, this property can be expressed as p(bx) = b^−α p(x), meaning that a change of the variable x to bx results in a scaling factor independent of x, while the shape of the distribution p is conserved. Thus, power laws represent scale-free systems. The Gutenberg–Richter size distribution of earthquakes is a typical example from the natural sciences. Historically, Pareto’s distribution law of wealth was the first power law in the social sciences, describing a small fraction of people presumably several times wealthier than the mass of a nation. [6, 7]
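The practical difference between exponential and heavy tails can be illustrated by simple sampling. The following sketch (sample size and Pareto exponent are arbitrary choices made for illustration) counts events beyond five standard deviations under both kinds of distribution:

```python
import random, statistics

# How often do events beyond five standard deviations occur in a million
# samples? Under a Gaussian almost never; under a Pareto-type power law
# (here p(x) ~ x^-3.5, sampled by paretovariate(2.5)) they are routine.
random.seed(1)
N = 1_000_000
samples = {"Gaussian": [random.gauss(0.0, 1.0) for _ in range(N)],
           "Pareto":   [random.paretovariate(2.5) for _ in range(N)]}

for name, xs in samples.items():
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    extreme = sum(1 for x in xs if abs(x - m) > 5 * s)
    print(f"{name:8s}: {extreme} events beyond 5 standard deviations")
```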
Complexity and nonlinear dynamics of evolution and brains
Structures in nature can be explained by the dynamics and attractors of complex systems. [8] They result from collective patterns of interacting elements that cannot be reduced to the features of single elements in a complex system. Nonlinear interactions in multi-component (‘complex’) systems often have synergetic effects that can neither be traced back to single causes nor forecast in the long run. The mathematical formalism of complex dynamical systems is taken from statistical physics. In general, however, the theory of complex dynamical systems deals with profound and striking analogies that have been discovered in the self-organized behaviour of quite different systems in physics, chemistry, and biology. These multi-component systems consist of many units, such as elementary particles, atoms, cells or organisms. The elementary units and their local interactions, e.g. the position and momentum vectors of the interacting molecules of a liquid or gas, constitute the microscopic level of description. The global state of a complex system results from the collective configurations of the local multi-component states. At the macroscopic level, there are a few collective (‘global’) quantities, such as pressure, density, temperature, and entropy, characterizing observable collective patterns or figures of the units.
If the external conditions of a system are changed by varying certain control parameters (e.g. temperature), the system may undergo a change in its macroscopic global states at some threshold value. For instance, water, as a complex system of water molecules, changes spontaneously from a liquid to a frozen state at the critical temperature of zero degrees Celsius. In physics, such transformations of collective states are called ‘phase transitions’. Obviously, they describe a change of self-organized behaviour between the interacting elements of a complex system.
According to L.D. Landau (1959), the suitable macrovariables characterizing this change of global order are denoted ‘order parameters’. In statistical mechanics, the order transitions of complex systems such as fluids and gases are modelled by differential equations of the global state. A paradigmatic example is a ferromagnet consisting of many elementary atomic magnets (‘dipoles’). The two possible local states of a dipole are represented by upwards- and downwards-pointing arrows. If the temperature (the ‘control parameter’) is lowered to a critical value (the Curie point), then the average distribution of upwards- and downwards-pointing dipoles (the ‘order parameter’) spontaneously aligns in one regular direction. This regular pattern corresponds to the macroscopic state of magnetization. Obviously, the emergence of magnetization is a self-organized behaviour of atoms that is modelled by the phase transition of a certain order parameter, the average distribution of upwards- and downwards-pointing dipoles.
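This picture can be made concrete with the two-dimensional Ising model, the standard toy model of a ferromagnet (our choice of illustration; the lattice size, temperatures and sweep counts are arbitrary). A minimal Metropolis simulation shows the order parameter jumping near the critical temperature:

```python
import math, random

# Metropolis simulation of a 2D Ising ferromagnet. Each site carries a dipole
# s = +1 or -1; below the critical temperature (about 2.27 in these units) the
# order parameter, the mean magnetization, spontaneously approaches 1, while
# above it the dipoles stay disordered.
random.seed(0)
L = 20

def magnetization(T, sweeps=400):
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb              # energy cost of flipping this dipole
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return abs(sum(map(sum, s))) / (L * L)

for T in (1.5, 2.27, 3.5):
    print(f"T = {T}: |magnetization| ~ {magnetization(T):.2f}")
```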
Landau’s scheme of phase transitions cannot be generalized to all cases. A main reason for its failure is an inadequate treatment of fluctuations, which are typical of many multi-component systems. Nevertheless, Landau’s scheme can be used as a heuristic device to deal with several non-equilibrium transitions. In this case, a complex system is driven away from equilibrium by increasing energy (not by decreasing energy, as in equilibrium transitions such as freezing water or magnetizing ferromagnets). The phase transitions of nonlinear dissipative complex systems far from thermal equilibrium can be modelled by several mathematical methods.
In a more mathematical vein, stochastic nonlinear differential equations (e.g. Fokker–Planck equations, the master equation) are employed to model the dynamics of complex systems. H. Haken [6] suggested that the dominating order parameters should be found by the adiabatic elimination of the fast-relaxing variables of these equations. The reason is that the relaxation time of unstable modes (order parameters) is very long compared with that of the fast-relaxing stable modes, which can therefore be neglected. This concept of self-organization can be illustrated by the quasi-biological slogan: long-living systems dominate short-living systems.
From the viewpoint of dynamical systems, even rare, exceptional, catastrophic, and surprising (‘extreme’) events are not generated randomly. They occur in systems far from equilibrium, in states of large variability and collective effects. For example, weather extremes occur in a state of the Earth’s atmosphere that is determined by well-known equations of motion such as the nonlinear Navier–Stokes equations. Therefore, weather prediction is based on numerical simulations of model equations, fed by observations and measurements as initial conditions. In this framework, extreme events (e.g. hurricanes, cyclones) are considered manifestations of the nonlinear dynamics of complex systems. The dynamical mechanism explains why the system makes an excursion far from its normal state. These scenarios are known as large deviations, deterministic chaos, or fully developed turbulence. The model of self-organized criticality (SOC) suggests that a system reacts to a sequence of perturbations by manoeuvring itself into a critical state without external tuning of appropriate control parameters. In this case, huge fluctuations are the rule rather than the exception. The nonlinear dynamics of SOC are used to explain statistical power-law distribution functions for the relevant observables. For example, the Gutenberg–Richter law of earthquake magnitude distribution can be reproduced by suitable SOC models. SOC provides a remarkable connection between the phase transitions of dynamical systems and the statistical laws of extreme events. However, it is worth noting that this theory has, until now, been mainly supported by results from computer simulations.
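The classic toy model of SOC is the Bak–Tang–Wiesenfeld sandpile (our choice of illustration; grid size and number of grains are arbitrary). The sketch below shows how avalanches of all sizes emerge without tuning any control parameter:

```python
import random

# Bak-Tang-Wiesenfeld sandpile. Grains are dropped at random sites; a site
# holding 4 or more grains topples, passing one grain to each neighbour and
# possibly triggering further topplings (grains at the edge fall off). Without
# tuning any control parameter, avalanches of all sizes appear.
random.seed(2)
L = 30
grid = [[0] * L for _ in range(L)]
sizes = []

for _ in range(20_000):
    grid[random.randrange(L)][random.randrange(L)] += 1
    unstable = [(i, j) for i in range(L) for j in range(L) if grid[i][j] >= 4]
    size = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        size += 1
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    sizes.append(size)

for s in (1, 10, 100):
    frac = sum(1 for x in sizes if x >= s) / len(sizes)
    print(f"P(avalanche size >= {s:3d}) ~ {frac:.4f}")
```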
In general, dynamical systems and their phase transitions deliver a successful formalism for modelling the emergence of order in nature. However, these methods are not reducible to special laws of physics, although their mathematical principles were first discovered and successfully applied in physics. There is no physicalism, but an interdisciplinary methodology for explaining the increasing complexity and differentiation of forms by phase transitions. The question is how to select, interpret and quantify the appropriate variables of dynamical models.
Models of thermodynamic self-organization are not sufficient to explain the emergence of life. A nonlinear mechanism of genetics is the autocatalytic process of genetic self-replication. The evolution of new species by mutation and selection can be modelled by nonlinear stochastic equations of second-order non-equilibrium phase transitions. [9] Mutations are mathematized as ‘fluctuating forces’ and selections as ‘driving forces’. Fitness degrees are the order parameters dominating the phase transitions to new species. During evolution, a sensitive network of equilibria between populations of animals and plants has developed. Open dissipative systems of ecology may become unstable through local perturbations, e.g. pollution of the atmosphere, leading to global chaos of the atmosphere in the sense of the butterfly effect. [7]
In brain research, the brain is considered a complex dynamical system of firing and non-firing neurons, self-organizing into macroscopic patterns of cell assemblies through neurochemical interactions. Their dynamical attractors are correlated with states of perception, motion, emotion, thought, or even consciousness. There is no ‘mother neuron’ that can feel, think or at least coordinate the appropriate neurons. The famous binding problem of pixels and features in a perception is explained by clusters of synchronously firing neurons dominated by learnt attractors of brain dynamics. The brain is also a self-monitoring and self-mapping system of all bodily, cognitive, and emotional states, leading to self-awareness and self-consciousness, which can be interpreted as dominating order parameters. Thus, even human subjectivity, the traditional philosophical problem of ‘qualia’, can be explained by the nonlinear dynamics of complex systems. Human intentions and preferences correspond to attractors of brain dynamics influencing human action and behaviour.
Complexity and nonlinear dynamics of economies and finance
The self-organization of complex systems can also be observed in social groups. An application of social dynamics is the behaviour of car drivers. In automobile traffic systems, a phase transition from non-jamming to jamming phases depends on the average car density as the control parameter. The spontaneous emergence of chaotic traffic patterns is a famous self-organizing effect of nonlinear interactions, which often cannot be reduced to single causes. At a critical value, fluctuations with fractal or self-similar features can be observed. Self-similarity means that the time series of measured traffic flow looks qualitatively the same at different time scales, with small statistical deviations. This phenomenon is also called ‘fractality’. [10] In the theory of complex systems, self-similarity is a hint at (though not sufficient evidence of) chaotic dynamics. These signals can be used by traffic guidance systems.
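A well-known cellular-automaton model of this jamming transition is the Nagel–Schreckenberg model (our choice of illustration; it is not named in the text, and all parameters below are arbitrary). Flow first rises with car density, then falls again as spontaneous jams emerge:

```python
import random

# Nagel-Schreckenberg cellular automaton for single-lane ring traffic: cars
# accelerate by 1, brake to the gap ahead, dawdle with probability P_SLOW,
# then all move in parallel. Flow rises with density up to a critical value;
# beyond it, spontaneous jams reduce the flow again.
random.seed(3)
ROAD, VMAX, P_SLOW, STEPS = 200, 5, 0.3, 500

def mean_flow(density):
    n = int(density * ROAD)
    pos = sorted(random.sample(range(ROAD), n))   # car positions along the ring
    vel = [0] * n
    moved = 0
    for _ in range(STEPS):
        for k in range(n):
            gap = (pos[(k + 1) % n] - pos[k] - 1) % ROAD
            vel[k] = min(vel[k] + 1, VMAX, gap)           # accelerate, brake
            if vel[k] > 0 and random.random() < P_SLOW:   # random dawdling
                vel[k] -= 1
        pos = [(p + v) % ROAD for p, v in zip(pos, vel)]  # parallel update
        moved += sum(vel)
    return moved / (STEPS * ROAD)

for rho in (0.05, 0.15, 0.30, 0.60):
    print(f"density {rho:.2f}: flow ~ {mean_flow(rho):.3f}")
```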
In a political community, collective trends or majorities of opinion can be considered order parameters, produced by the mutual discussions and interactions of people in a more or less ‘heated’ situation. They can even be initiated by a few people in a critical and unstable (‘revolutionary’) situation of the whole community. There may be a competition between concepts of order during heavy fluctuations. The essential point is that the winning concept of order will dominate the collective behaviour of the people. Thus, there is a kind of feedback: the collective order of a complex system is generated by the interactions of its elements (‘self-organization’). Alongside thermodynamic, genetic, and neural self-organization, we thus also distinguish social and economic self-organization. On the one hand, the behaviour of the elements is dominated by the collective order. On the other hand, people have their individual will to influence collective trends of society. Nevertheless, they are also driven by attractors of collective behaviour.
Sometimes this approach is called ‘econophysics’ (i.e. a combination of ‘economics’ and ‘physics’). However, modelling self-organizing social systems is not physicalism, because the mathematical formalism of complex systems and nonlinear dynamics is independent of physical concepts and considers only economic and social data. Therefore, we prefer the term ‘sociodynamics’. A brilliant forerunner of modern sociodynamics was the Austrian economist Joseph A. Schumpeter, who analysed the correlation of innovation dynamics and the theory of economic cycles. New ideas arise steadily. When enough ideas have accumulated, a group of innovations is introduced by entrepreneurship. They develop slowly at first, then accelerate as the methods are improved. A logistic development characterizes the typical trajectory of an innovation. Some investment must precede the introduction of an innovation. Investment stimulates demand. Increased demand facilitates the spread of the innovation. Then, as all innovations are fully exploited, the process decelerates towards zero.
Schumpeter called this phenomenon the ‘swarming’ of innovations. In his three-cycle model, the first, short cycle relates to the inventory (stocks) cycle, and innovations play no role in it. The following, longer cycle is related to innovations. Schumpeter recognized the significance of historical statistics and related the evidence of long waves to the fact that the most important innovations, like steam, steel, railways, steamships and electricity, required 30 to 100 years to become completely integrated into the economy.
In general, Schumpeter described economic evolution as technical progress in the form of ‘swarms’, explained in a logistic framework. A technological swarm is assumed to shift the equilibrium to a new fixed-point attractor in a cyclical way. The resulting new equilibrium is characterized by a higher real wage and higher consumption and output. Thus, Schumpeter’s innovation dynamics can easily be interpreted in terms of sociodynamics with attractors. Innovation swarms at economic points of instability can be considered order parameters dominating long-term business cycles.
Historically, the Great Depression of the 1930s inspired economic models of business cycles. However, from a mathematical point of view, the first models (for instance, Hansen–Samuelson and Lundberg–Metzler) were linear and hence required exogenous shocks to explain their irregularity. Explanations by exogenous shocks have the great disadvantage that they are often arbitrary ad hoc hypotheses and hence can explain anything. The standard econometric methodology has argued in this tradition, although an intrinsic analysis of cycles has been possible since the mathematical discovery of strange attractors. The traditional linear models of the 1930s can easily be reformulated in the framework of nonlinear systems. From a methodological point of view, endogenous nonlinear models with attractors seem more satisfactory. Nevertheless, both endogenous nonlinear models and linear models with exogenous shocks must be taken seriously and tested in economics.
In contrast to physical, chemical, and biological systems, no equations of motion on the microlevel are available for social systems. People are not atoms or molecules, but human beings with intentions, motivations, and emotions. In principle, their individual behaviour and decision-making could be explained by their brain dynamics. Cognitive and emotional dynamics are determined by order parameters characterizing individual thoughts, decisions, and motivations. But this is only a theoretical option, because the corresponding neural equations are not yet known. Furthermore, such equations would be far too complex to solve in order to predict the future behaviour of people.
Therefore, an alternative approach is suggested that dispenses with microscopic equations but nevertheless takes into account the decisions and actions of individuals, using probabilistic methods to derive the macrodynamics of social systems. The modelling design consists of three steps: in the first step, appropriate variables of social systems must be introduced to describe the states and attitudes of individuals. The second step defines the change of behaviour by probabilistic phase transitions of individual states. The third step derives equations for the global dynamics of the system by stochastic methods.
In a society, we can distinguish several sectors and sub-sectors that are denoted by variables. There are variables for material states and for extensive and intensive personal states. The socioconfiguration of a social system is characterized by these material and personal macrovariables, which are measured by the usual methods of demoscopy, sociology, or economics. As in thermodynamics, there are intensive economic variables that are independent of the size of a system; examples are prices, productivity, and the density of commodities. Extensive variables are proportional to the size of a system and concern, for example, the extent of production and investment, or the size and number of buildings. Collective material variables are measurable, but their values are influenced by the individual activities of agents, which are often not directly measurable. The social and political climate of a firm is connected to socio-psychological processes, which are influenced by the attitudes, opinions, or actions of individuals and their subgroups. Thus, in order to introduce the socioconfiguration of collective personal variables, we must consider the states of individuals, expressed by their attitudes, opinions, or actions. Furthermore, there are subgroups with constant characteristics (e.g. sections or departments of a firm or an institution), so that each individual is a member of one subgroup. The number of members in a certain state is a measurable macrovariable. The socioconfiguration of, for example, a company is a set of macrovariables describing the distribution of attitudes, opinions, and actions among its subgroups at a particular time. The total macroconfiguration is given by the combination of the material configuration and the socioconfiguration.
The probabilistic phase transitions of individual states can be used to set up the macroevolution equation of a social system. The probabilistic macrobehaviour of a society is described by a probability distribution function over its possible socioconfigurations at a certain time. The distribution function P(m, n; t) can be interpreted as the probability of finding a certain macroconfiguration of material configuration m and socioconfiguration n at time t. The evolution of the social system is the time-dependent change of its probabilistic macrobehaviour, i.e. the time derivative of the probability function dP(m, n; t)/dt. Thus, we get a stochastic nonlinear differential equation that is well known in thermodynamics as a master equation. [8]
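Written out, such a master equation has the standard gain–loss form. A schematic version (our notation; w denotes the configuration-dependent transition rates between macroconfigurations) reads:

$$\frac{dP(m,n;t)}{dt}=\sum_{m',n'}\Big[\,w(m,n\mid m',n')\,P(m',n';t)\;-\;w(m',n'\mid m,n)\,P(m,n;t)\,\Big]$$

The first term collects the probability flowing into the macroconfiguration (m, n) from all other configurations; the second term is the probability flowing out of it.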
An example of social phase transitions and symmetry breaking is provided by worldwide migration processes. The behaviour of people and their decisions to stay in or leave a region are illustrated by the spatial distributions of populations and their changes. The models may concern regional migration within a country, motivated by different economic and urban developments, or even the dramatic worldwide migration between poor and rich countries in the age of globalization. The migration interaction of two human populations may cause several macro-phenomena, such as the emergence of a stable mixture, the emergence of two separate but stable ghettos, or the emergence of a restless migration process. In numerical simulations and phase portraits of the migration dynamics, these macro-phenomena can be identified with corresponding attractors.
The stability and welfare of our societies depend sensitively on the dynamics of international financial markets. We have already mentioned that, in general, we do not know the microscopic motions of economic data and agents. Therefore, in 1900, the French mathematician L. Bachelier [11] modelled the fluctuations of stock prices as a statistical random walk (Brownian motion) before physicists such as A. Einstein (1905) [12] discovered it in the microscopic motion of small particles in fluids. Brownian motion implies not only the statistical stationarity of price increments and the scaling of prices (i.e. invariance under displacement and change of scale), but also the independence of price increments (knowing the past brings no knowledge about the future), the continuity of price variation (a sample of Brownian motion is a continuous curve), the rough evenness of price changes (normal Gaussian distribution or ‘white noise’), the absence of clustering (no emergence of local patterns and structure) and the absence of cyclic behaviour. Furthermore, the Gaussian distribution leads to the assumption of an efficient market and successful arbitraging, with prices following a martingale: whether the past is known in full, in part, or not at all, price changes over all future time spans have zero as their expectation. [13]
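A minimal random-walk sketch in this spirit (geometric Brownian motion; the drift and volatility values are arbitrary illustrative choices, not taken from the text) simulates one year of independent daily Gaussian log-price increments:

```python
import math, random

# Random walk of a stock price in Bachelier's spirit: independent, stationary
# Gaussian increments of the log-price (geometric Brownian motion), simulated
# over one trading year of daily steps.
random.seed(4)
mu, sigma, dt = 0.05, 0.2, 1 / 252    # annual drift, volatility, daily step
price = 100.0
for _ in range(252):
    z = random.gauss(0.0, 1.0)        # the Gaussian 'white noise' increment
    price *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
print(f"price after one simulated year: {price:.2f}")
```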
The modern theory and practice of finance is still more or less based upon these fundamental beliefs. What makes an investment in the stock market risky is that there is a spread of possible outcomes. The usual measure of this spread is believed to be the standard deviation or variance of a bell-shaped (Gaussian) normal distribution. On this basis, H. Markowitz (1952) [14] suggested his well-known construction of portfolios designed to diversify away unique risk. A stock’s contribution to the risk of a fully diversified portfolio depends on its sensitivity to market changes, which is measured by a parameter ‘beta’. Beta delivers the benchmarks for the expected risk premium which, since the mid-1960s, has been calculated by the capital asset pricing model (CAPM). Bachelier not only suggested the random walk of price changes, but also considered the effects of investing in options. A breakthrough came in 1973 with the famous Black–Scholes formula for calculating the present value of a call option when there is a continuum of possible future stock prices, on the basis of a normal (Gaussian) distribution. Dealers on the options exchanges still use this formula every day to make their trades.
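For reference, the Black–Scholes price of a European call can be computed in a few lines. The formula below is the standard one; the numerical inputs are arbitrary illustrative values:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call: S spot price, K strike,
    T years to expiry, r risk-free rate, sigma annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(f"call price: {black_scholes_call(100, 105, 0.5, 0.03, 0.2):.2f}")
```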
Brownian motion is mathematically more manageable than any alternative. Unfortunately, it is an extremely poor approximation to financial reality. Since the end of the 1980s, we have observed financial crashes and turbulence deviating significantly from normal distributions. Investment portfolios collapsed, and hedging with options à la Black–Scholes failed. From the viewpoint of dynamical systems, the patterns of time-series analysis illustrate the failures of traditional financial theory. While a record of Brownian motion changes looks like a kind of ‘grass’ of normal length, a record of actual price changes looks like an irregular alternation of quiet periods and bursts of volatility that stand out from the normal length of the grass. This feature demonstrates the apparent non-stationarity of the underlying rules. Furthermore, discontinuities appear as sharp peaks rising from the normally distributed Gaussian ‘grass’. These peaks are not isolated but bunched together. Cyclic (but not periodic) behaviour can be observed. The instability of the sample variance is expressed by long-tailed distributions of price changes. Last but not least, there is a long-term dependence in the data.
Financial markets display some properties in common with fluid turbulence. Like fluid turbulent fluctuations, financial fluctuations show intermittency at all scales. In fluid turbulence, a cascade of energy flux is known to occur from the large scale of injection to the small scales of dissipation. In the nonlinear and fractal approach to the financial system, randomness can no longer be restricted to the ‘normal’ Gaussian distribution of price changes. Non-Gaussian distributions, such as Lévy and Pareto distributions, are more appropriate to the wild turbulence of today’s financial markets. We must consider degrees of randomness. [13] The Gaussian distribution corresponds to a pattern of time series with a ‘normal’ length of ‘grass’ without extreme peaks. It is therefore called mild randomness, which can be compared to a solid state of matter with low energy, stable structure and a defined volume. Wild randomness resembles the gas phase of matter, with high energy, less structure and no defined volume. [10] Slow randomness corresponds to the fluid state between the gas and solid states. From the viewpoint of time series, mild randomness corresponds to short- and long-run evenness, slow randomness to short-run concentration and long-run evenness, and wild randomness to short- and long-run concentration.
The rationality of human decisions is bounded by the wild randomness of markets. Human cognitive capabilities are overwhelmed by the complexity of the nonlinear systems they are forced to manage. Traditional mathematical decision theory assumed the perfect rationality of economic agents (homo oeconomicus). Herbert Simon, Nobel laureate in economics and one of the leading pioneers of systems science and artificial intelligence, introduced the principle of bounded rationality in 1959:
The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problem whose solution is required for objectively rational behavior in the real world or even for a reasonable approximation to such objective rationality. [15]
Bounded rationality arises not only from the limitations of human knowledge, information and time, nor merely from the incompleteness of our knowledge and the simplifications of our models. The constraints of short-term memory and of information storage in long-term memory are well established. In stressful situations, people are overwhelmed by a flood of information, which must be filtered under time pressure. People deviate from game-theoretically predicted equilibria. They act neither in the strict sense of the homo oeconomicus nor completely chaotically. Therefore, we must refer to the real features of human information processing and decision-making, which are characterized by emotional, subconscious, affective and other non-rational factors. Even experts and managers often prefer to rely on rules of thumb and heuristics, which are based on intuitive feelings from former experience. Experience shows that human intuition does not simply signify a lack of information or a failure to make decisions. Our affective behaviour and intuitive feelings are part of our evolutionary heritage, enabling us to make decisions when matters of survival are at stake. Therefore, we must know more about the actual microeconomic acting of people, their cognitive and emotional behaviour, in order to understand macroeconomic trends and dynamics. This is the goal of experimental economics: observing, measuring, and analysing the behaviour of economic agents with the methods of psychology and the cognitive and social sciences in, for example, stock markets or situations of economic competition.
Complexity and nonlinear dynamics of computational and information systems
Dynamical systems can be characterized by informational and computational concepts. A dynamical system can be considered an information-processing machine, computing a present state as output from an initial state as input. Thus, the computational effort needed to determine the states of a system characterizes the complexity of that dynamical system. The transition from regular to chaotic systems corresponds to increasingly hard computational problems, according to the increasing degrees of the computational theory of complexity. In statistical mechanics, the information flow of a dynamical system describes the intrinsic evolution of statistical correlations. In chaotic systems with sensitivity to the initial states, there is an increasing loss of information about the initial data, according to the decay of correlations between the entire past and future states of the system. In general, dynamical systems can be considered deterministic, stochastic, or quantum computers, computing information about present or future states from initial conditions by the corresponding dynamical equations. In the case of quantum systems, the binary concept of information is replaced by quantum information with superpositions of binary digits. Thus, quantum information only provides probabilistic forecasts of future states.
The idea of conceiving dynamical systems as automata dates back to the mechanistic world-view of the 17th and 18th centuries. In the philosophy of Leibniz, even organic systems are considered ‘natural automata surpassing all artificial automata infinitely’. [16] John von Neumann’s concept of cellular automata delivered the first hints of computational models of living organisms, conceived as self-reproducing automata and self-organizing complex systems. [17] The phase space is a homogeneous lattice that is divided into equal cells, like a chess board. An elementary cellular automaton is a cell that can take different states, for instance the binary states ‘black’ (1) and ‘white’ (0). An aggregation of elementary automata is called a composite or complex automaton. Each elementary automaton is characterized by its environment, i.e. the neighbouring cells. The cells change their states according to Boolean transformation rules depending on their cellular environments. The dynamics of a complex automaton are determined by the synchronous application of the transformation rules, producing cellular patterns of black cells. These clusters can be considered attractors towards which the dynamics of the cellular automaton evolve. Thus, there are classes of automata with fixed points and oscillating patterns, independent of the initial cellular configurations, in contrast to the chaotic patterns of automata that depend sensitively on tiny changes of the initial configurations.
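A minimal elementary cellular automaton can be written in a few lines. In this sketch we choose Wolfram’s rule 30 (our illustrative choice, not named in the text), whose binary rule number encodes the Boolean transformation for each of the eight neighbourhood patterns:

```python
# Elementary cellular automaton with binary cells on a ring. The binary digits
# of the rule number fix the Boolean transformation for each of the eight
# neighbourhood patterns (left, centre, right). Rule 30 produces chaotic
# patterns depending sensitively on the initial configuration.
RULE = 30
rule_table = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

cells = [0] * 31
cells[15] = 1                            # a single black cell in the middle
for _ in range(16):
    print("".join("#" if x else "." for x in cells))
    cells = [rule_table[(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])]
             for i in range(len(cells))]
```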
Unlike program-controlled computers, the human brain is characterized by fuzziness, incompleteness, robustness, and resistance to noise, but also by chaotic states, sensitive dependence on initial conditions, and, last but not least, learning processes. These features are well known in the nonlinear complex-systems approach. Concerning architecture, an essential limitation of program-controlled computers derives from their sequential and centralized control, whereas nonlinear complex dynamical systems are intrinsically parallel and self-organized.
The information processing of the human brain is simulated by complex neural networks with learning algorithms. From a technical point of view, neural nets are complex systems of cells organized in different layers, like the architecture of our cortex. The neurochemical interactions of cells are simulated by numerical weights on the input data, which determine the firing or non-firing of technical neurons depending on certain threshold values. In this manner, microscopic neurons connect themselves into macroscopic patterns. There is no central processor or commanding neuron that can think or feel. The cognitive features of the brain are correlated with macroscopic patterns of connected neurons. Perceptions are transformed into neural maps of the brain that can be characterized by macroscopic order parameters. The complex-systems approach is an empirical research programme that can be specified and tested in appropriate experimental applications to understand the dynamics of the human cognitive system. Furthermore, it provides heuristic devices for constructing artificial systems with cognitive features in robotics.
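The basic mechanism of weighted inputs and a firing threshold can be sketched with a single classical perceptron (a deliberately minimal example of our own; the learning rate and number of epochs are arbitrary), here learning the logical AND function:

```python
import random

# A single threshold neuron with the classical perceptron learning rule:
# weighted inputs decide firing (1) or non-firing (0), and the weights are
# adjusted on errors. Here the neuron learns the logical AND function.
random.seed(5)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fire(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(50):                       # training epochs
    for x, target in data:
        err = target - fire(x)            # +1 or -1 on a wrong answer, else 0
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        bias += 0.1 * err

print([(x, fire(x)) for x, _ in data])    # fires only for input (1, 1)
```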
In a dramatic step, the complex-systems approach has been enlarged from neural networks to global computer networks such as the World Wide Web. [8] The internet can be considered a complex open computer network of autonomous nodes (hosts, routers, gateways, etc.), self-organizing without central control mechanisms. The information traffic consists of information packets with source and destination addresses. Routers are nodes of the network that determine the local path of each packet by using local routing tables with cost metrics for neighbouring routers. A router forwards each packet to the neighbouring router with the lowest cost to the destination. As a router can deal with only one packet at a time, other arriving packets must be stored in a buffer. If more packets arrive than the buffer can store, the router discards the overflowing packets. Senders of packets wait for a confirmation message from the destination host. These buffering and re-sending activities of routers can cause congestion in the internet. A control parameter of data density is defined by the propagation of congestion from a router to neighbouring routers and the dissolution of congestion at each router. The cumulative distribution of congestion durations is an order parameter of the phase transition. At a critical point, when the congestion propagation rate equals the congestion dissolution rate, fractal and chaotic features can be observed in the data traffic.
Congested buffers behave in surprising analogy to infected people. If a buffer is overloaded, the router tries to send packets to the neighbouring routers, so the congestion spreads spatially. On the other hand, routers can recover when the congestion from and to their own subnet is lower than the service rate of the router. This is not only an illustrative metaphor, but a hint at nonlinear mathematical models describing true epidemic processes, such as malaria, as well as the dynamics of routers. Computer networks are computational ecologies. The capability to manage the complexity of modern societies depends decisively on effective communication networks. [18]
Complex networks like the internet, the World Wide Web (WWW), social networks, and biochemical networks are characterized by power-law distributions. The simplest local property of a vertex in a network is its degree, i.e. the total number of edges attached to the vertex, which is simply the number of its nearest neighbours. Here, it is mainly the degree distribution of the corresponding graph representation of the network that follows a power law. Since power-law distributions have no characteristic size, such networks are scale-free systems. The question arises of how the emergence of power laws in information networks can be explained by phase transitions at critical states.
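One well-known mechanism producing such power-law degree distributions is preferential attachment in the style of Barabási and Albert (our illustrative choice; the text does not name a specific model): each new vertex links to existing vertices with probability proportional to their current degree:

```python
import random
from collections import Counter

# Preferential attachment: every new vertex connects to m existing vertices
# with probability proportional to their current degree (each vertex appears
# in the 'targets' stub list once per unit of degree). The resulting degree
# distribution is heavy-tailed, unlike that of a purely random graph.
random.seed(6)
m = 2
targets = [0, 1, 0, 1]                   # two initial vertices joined by two edges
for new in range(2, 5000):
    chosen = set()
    while len(chosen) < m:
        chosen.add(random.choice(targets))   # degree-proportional sampling
    for t in chosen:
        targets += [new, t]

degree = Counter(targets)
for k in (2, 4, 8, 16, 32, 64):
    frac = sum(1 for d in degree.values() if d >= k) / len(degree)
    print(f"P(degree >= {k:2d}) ~ {frac:.4f}")
```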
It is not merely a metaphor to speak of transforming the internet into a ‘superbrain’ with self-organizing features of learning and adapting. Information retrieval is already realized by neural networks that adapt to the information preferences of a human user with synaptic plasticity. In sociobiology, we can learn from populations of ants and termites how to organize traffic and information processing by swarm intelligence. From a technical point of view, we need intelligent programs distributed in the nets. There are already more or less intelligent virtual organisms (‘agents’) that learn, self-organize and adapt to our individual preferences of information: they select our e-mails, prepare economic transactions and defend against the attacks of hostile computer viruses, like the immune system of a human body.
The complexity of global networking does not only mean increasing numbers of PCs, workstations, servers, and supercomputers interacting via data traffic in the internet. Below the complexity of a PC, cheap, smart, low-power devices are distributed in the intelligent environments of our everyday world. Like GPS (Global Positioning System) in cars, everyday things could interact wirelessly via sensors. The real power of the concept does not come from any one of these single devices. In the sense of complex systems, the power emerges from the collective interaction of all of them. For instance, the optimal use of energy could be considered a macroscopic order parameter of a household, realized by the self-organizing use of different household goods that reduce their consumption of electricity outside the time periods with cheap prices. The processors, chips and displays of these smart devices do not need a user interface such as a mouse, windows, or keyboard, only a pleasant and effective place to get things done. Small-scale wireless computing devices are becoming increasingly invisible to the user. Ubiquitous computing enables people to live, work, use, and enjoy things directly, without being aware of their computing devices.
What can we learn from nonlinear dynamics of complex systems?
What are the human perspectives in these developments of dynamical, information, and computational systems? In the age of globalization, modern societies, economies, and information networks are high-dimensional systems with complex nonlinear dynamics. From a methodological point of view, it is a challenge to improve and enlarge the instruments of modelling from low- to high-dimensional systems. Modern systems science offers an interdisciplinary methodology for understanding the typical features of self-organizing dynamics in nature and society. The widespread presence of power laws has changed our point of view: extreme events are no longer regarded as exceptional, but as the norm of complex systems. They affect people and the environment: e.g. societal disasters (pandemics such as AIDS), natural disasters (floods, cyclones), technical breakdowns (power outages, chemical contaminations), or economic turbulence (collapses of banks, huge losses in the stock markets). Their appearance in nature, society, and the economy now appears to be standard. It is a challenge for future research to find causal explanations of these scale-free systems. From a methodological point of view, there is significant hope that the relationships between power laws, causal networks, phase transitions, criticality and the self-organization of complex systems can be exploited further.
As nonlinear models are applied in different fields of research, we gain general insights into the predictable horizons of oscillatory chemical reactions, fluctuations of species populations, fluid turbulence, economic processes, and information dynamics. Obviously, nonlinear modelling explains the difficulties of the modern Pythias and Sibyls. The reason is that human societies are not complex systems of molecules or ants, but the result of highly intentional acting beings with a greater or lesser degree of free will. A particular kind of self-fulfilling prophecy is the Oedipus effect, in which people, like the legendary Greek king, try, in vain, to change the future that has been forecast to them. From a macroscopic point of view, we may observe single individuals contributing with their activities to the collective macrostate of society, representing cultural, political, and economic order (order parameters). Yet the macrostates of a society are, of course, not simply an average over its parts. Its order parameters strongly influence the individuals of the society by orientating their activities and by activating or deactivating their attitudes and capabilities. This kind of feedback is typical of complex dynamical systems. If the control parameters of the environmental conditions attain certain critical values due to internal or external interactions, the macrovariables may move into an unstable domain out of which highly divergent alternative paths are possible. Tiny, unpredictable microfluctuations (e.g. the actions of very few influential people, scientific discoveries, new technologies) may decide which of the diverging paths society will follow in an unstable state of bifurcation.
Therefore, the paradigm of centralized control must be abandoned in light of the insights into the self-organizing dynamics of high-dimensional systems. We act and decide under the conditions of bounded rationality, not with the Laplacean spirit of a totally informed homo oeconomicus. However, self-organization can also lead to undesired effects. Cancer is a self-organizing process of growth. Turbulent financial markets are also out of control. Thus, we need a balance between self-organization and an appropriate degree of control, and we need global order parameters to realize global governance. Global crises, such as the banking and financial crisis, need global response strategies and international cooperation between nations. Complexity management accepts the uncertainty that exists in the real world rather than ignoring it. It is a structured process that reduces the costs of individual experience while increasing opportunities for social, technological, and scientific learning for global cooperation. During a long evolution, the cellular self-organization of organisms was embedded in a hierarchy of control processors, emerging in a learning process of evolution. In engineering science, we should aim at self-organizing systems with the controlled emergence of new appropriate features. By detecting global trends and order parameters of complex dynamics, we have the chance of reinforcing favourable tendencies. By cooperating in complex systems, we can make much more progress in choosing our next steps. Cooperation in complex systems supports deciding and acting for the sustainable future of a complex world. [7, 13]
Prof. Dr. Klaus Mainzer is professor for philosophy of science and director of the Carl von Linde Academy at the Technical University of Munich, a member of the Academia Europaea, and the author of several books on complexity research, e.g. ‘Thinking in Complexity’ (5th edition, 2007) and ‘Symmetry and Complexity: The Spirit and Beauty of Nonlinear Science’ (2005).