Lessons Learned from 19 Years of Chaos and Complexity
J. C. Sprott
C&CS Seminar, May 7, 2013
It seemed to me that after 19 years and almost 600 talks in this
seminar series, someone should summarize what we've learned. Since
I've probably heard more of the talks than anyone else, I decided to
give it a try. It's my hope that this will become a tradition, with
others of you sharing what you have learned, and it could also
provide a framework for our summer discussions on the Union Terrace.
In this talk I'll describe the different approaches researchers have
taken to understanding the world, make some general observations
about the prospects and limitations of their methods, and share some
of my views about the future of humanity. It will necessarily be
personal and somewhat subjective, and thus probably controversial.
Either explicitly or implicitly, most people, both scientists and
non-scientists, are trying to understand the world by making models.
Some people have a model in which events are determined by God or
perhaps by the position of the planets at the moment of one's birth.
A model is a simplified description of a complicated process,
ideally amenable to mathematical analysis. However, as the late
George Box reminded us, "all models are wrong, but some are useful."
Furthermore, the usefulness of a model may not relate to how
realistic it is. A simple model is usually more informative and
sometimes more predictive than one that includes every effect that
one can imagine.
Typically a model involves one or more agents. Although "agent"
suggests a person, it could also be a whole society, an industry, an
organism, a neuron, or even an individual atom. Agents are exposed
to stimuli and exhibit corresponding responses. Sometimes we know
the stimuli and are trying to determine the response; other times we
observe an action and seek to understand its cause. Science could be
defined as the study of such cause-effect relationships.
Consider an example. Somewhere I read that people who floss every
day live six years longer than those who don't. The flossing is the
stimulus, and the increased longevity is the response. The agent
could be an individual, or it could be a statistical statement about
a whole society.
In fields like physics, we have the luxury of going into the
laboratory and doing a controlled experiment on the agent. Even
psychologists can experiment with human subjects. More often, though,
when the agent is something like a galaxy, a society, or an economy,
the best one can do is to make observations and attempt to correlate
stimuli with responses. The difficulties are a paucity of data, a
lack of adequate control, and the inability to distinguish
correlation from causality. Those who floss are probably also
engaging in other healthy activities.
A third approach is to use reductionism, in which one looks at the
inner workings of the agent, where other simpler agents are found,
and then tries to develop a theory relating the response to the
stimulus. Scientists are sometimes attacked for their theories by
people who equate "theory" with "speculation" and who instead want
to know the "facts." However, theories are much better than facts,
since they provide understanding and prediction even outside the
realm where they have been tested. If we had a theory for why
flossing increases longevity, it might suggest alternate ways to
achieve the same or even better result.
I'm glad there are people willing to devote their whole professional
careers to looking for the Higgs boson or understanding the nervous
system of a worm. Reductionism has been a very powerful scientific
method, but it takes enormous patience, perseverance, and financial
and human resources. Furthermore, even a complete understanding of
the inner workings of an agent may not shed much light on the
emergent behavior of the agent because of the multiple levels of
complexity.
A common difficulty is that responses sometimes occur in the absence
of any apparent cause, and there are many reasons for such
nonstationarity. The agent may be remembering some event in the
past, or perhaps the causes are not adequately identified or
controlled, or there is noise or measurement error. However, even in
a perfect experiment, the agent can exhibit a time-varying behavior
due to some internal dynamic even when all the external stimuli are
constant -- a common occurrence to which I will return shortly.
The simplest cause-effect relationship is linearity. Linearity does
not mean a chain of causality in which A causes B which causes C,
and so forth, but rather that the response is proportional to the
stimulus. In the flossing example, it means that I would gain about
one year of life by flossing weekly, or sixty years by flossing ten
times a day. If I accepted the fact about flossing and believed in a
linear model, I'd probably be flossing right now.
Furthermore, linearity means that the response to two or more
stimuli is the sum of the responses to each individually. Doctor Oz
claims that those who have 200 orgasms a year live six years longer,
which sounds like more fun than all that flossing, but my point is
that linearity says that I could gain twelve years by appropriately
manipulating two parts of my anatomy.
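To make the linear bookkeeping explicit, here is a deliberately naive
sketch in Python; the six-year figures are just the numbers quoted
above, and the linear model itself is of course the dubious
assumption:

    # A naive linear model of the longevity claims above. Linearity
    # means two things: the response is proportional to the stimulus,
    # and responses to independent stimuli simply add.

    def linear_gain(years_per_unit, dose):
        # response proportional to stimulus
        return years_per_unit * dose

    # proportionality: flossing ten times a day instead of once
    print(linear_gain(6.0, 10))        # 60.0 years

    # superposition: daily flossing plus the second "treatment"
    print(linear_gain(6.0, 1) + 6.0)   # 12.0 years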
If linear models make such nonsensical predictions, why would one
even consider them? First of all, they are simple and provide a good
starting point. Secondly, it turns out that most things are linear
if the stimulus is sufficiently small. Finally, linear systems of
equations can be solved exactly and unambiguously for any number of
variables, although, as a practical matter, a computer may be
required if the system is large.
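For instance (a sketch, assuming Python with numpy; the matrix here is
arbitrary), a linear system of any size is solved directly, with no
iteration and no ambiguity:

    import numpy as np

    # A linear system A x = b has a unique, exactly computable
    # solution whenever A is nonsingular, regardless of the number
    # of variables.
    A = np.array([[2.0, 1.0, -1.0],
                  [1.0, 3.0,  2.0],
                  [0.0, 1.0,  4.0]])
    b = np.array([1.0, 5.0, 11.0])

    x = np.linalg.solve(A, b)   # direct solution
    print(x)                    # A @ x reproduces b, up to rounding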
It often happens that an agent is stimulated by its own response in
a feedback loop, either directly or indirectly through other agents.
Thus the effect becomes the cause, and the cause becomes the effect,
like the chicken and the egg. The feedback can be either positive
(reinforcing the response) or negative (inhibiting it). In such a
case, time-varying dynamics can occur because of the inevitable time
delay around the loop, and that time delay determines the time scale
for the dynamics.
In a linear system with feedback, only four things can happen.
Negative feedback leads to exponential decay or a decaying
oscillation, while positive feedback leads to exponential growth or
a growing oscillation. Positive feedback implies a source of energy
or other resource from outside the system. A PA system exhibiting
audio feedback will go silent if the power is removed. These four
linear behaviors are rarely seen, especially unlimited exponential
growth, because resources are limited and nature is not linear.
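These four behaviors are just the solutions x(t) = exp(lambda*t) of a
linear loop, classified by the eigenvalue lambda: the sign of its real
part separates decay from growth, and a nonzero imaginary part adds
the oscillation. A minimal sketch (the particular numbers are
arbitrary):

    import numpy as np

    # The only four behaviors of a linear feedback loop:
    #   Re(lambda) < 0 gives decay, Re(lambda) > 0 gives growth,
    #   and Im(lambda) != 0 adds an oscillation.
    t = np.linspace(0.0, 10.0, 500)
    cases = {
        "exponential decay":    -0.5 + 0.0j,
        "decaying oscillation": -0.5 + 3.0j,
        "exponential growth":    0.5 + 0.0j,
        "growing oscillation":   0.5 + 3.0j,
    }
    for name, lam in cases.items():
        x = np.exp(lam * t).real   # the observable signal
        print(f"{name:22s} |x| at t=10: {abs(x[-1]):.4f}")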
There are many possible nonlinearities. In two simple examples, the
response increases monotonically with the stimulus but either slower
than linear (diminishing returns) or faster than linear (economy of
scale). An example of a mathematical function that is slower than
linear is the square root, and one that is faster than linear is the
square. I would argue that the former is more common since the
response usually cannot increase without bound. Even if I could gain
six years by flossing daily, it's unlikely that I could gain 144
years by flossing hourly or by having 13 orgasms a day. As Joel
Robbin reminded us, "too much of anything is bad; otherwise it
wouldn't be too much."
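To put numbers on the contrast (a sketch; the square root and the
square are just the stand-in functions named above, not fitted
models):

    import math

    # Take flossing daily (dose = 1) to be worth six extra years,
    # and scale the dose by 24 (hourly flossing) under three
    # assumptions about the response.
    dose = 24.0
    print(6.0 * dose)              # linear: 144 years (the absurd case)
    print(6.0 * math.sqrt(dose))   # diminishing returns: about 29 years
    print(6.0 * dose ** 2)         # economy of scale: 3456 years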
Nonlinear agents with feedback can exhibit a wide variety of
dynamics including the four linear behaviors already mentioned. They
can have multiple stable equilibria. They can have stable periodic
cycles. They can exhibit quasiperiodicity, which means a combination
of two or more incommensurate periods. They can have bifurcations in
which a small change in a
parameter causes a completely different dynamic -- what Al Gore
calls a "tipping point." They can exhibit hysteresis, a form of
memory in which the original behavior cannot be recovered after a
bifurcation without making a large change in the opposite direction.
They can have coexisting (or hidden) attractors, meaning that
different dynamics are possible even for a given set of conditions,
depending on the past history of the system. And, of course, they
can exhibit chaos in which a small change in the initial condition
completely changes the future.
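The last of these, sensitivity to initial conditions, is easy to
demonstrate. Here is a minimal sketch in Python using the logistic
map, a standard textbook example rather than any system discussed
above:

    # Sensitive dependence on initial conditions in the logistic map,
    # x -> r*x*(1 - x), at r = 4 where the map is fully chaotic.
    r = 4.0
    x, y = 0.4, 0.4 + 1e-10    # two histories differing by 10^-10
    for n in range(60):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
    print(abs(x - y))          # the difference is now of order one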
Most systems in the real world involve large networks of nonlinearly
interacting agents. The ecological system, the climate system, the
political system, and the economic system each involve numerous
agents and are strongly coupled to one another. Of necessity, most
scientists are studying a small part of a much larger network,
hoping that the part not being studied can be treated as a fixed
external stimulus. I think this often leads to erroneous conclusions
and predictions, as does the implicit assumption of linearity and
the disregard for feedback loops.
For example, if some species of animal consumes some species of
plant as its primary food supply, and the abundance of that plant is
suddenly reduced to half, we might naively assume that half the
animals would die. However, it is much more likely that they would
find a different source of food somewhere. Similarly, if global
warming causes the sea level to rise a meter over the next century,
it's unlikely that the hundred million people who now live along the
coast will drown as a result, and much more likely that they (or
rather their descendants) will simply migrate to higher ground, or
perhaps they will build some simple dikes as the Dutch have done.
An alternate approach is to characterize the general behaviors of
large nonlinear networks without regard to what they are modeling.
This is an extension of the method used by mathematicians to
characterize the nonlinear dynamics of simple systems. The task is
made difficult (and interesting) by the fact that the architecture
of a network (the connection strengths between the agents) can
change in time even while the network is exhibiting dynamics, and
the two types of dynamics are coupled. This distinction is sometimes
called the dynamics OF the network as opposed to the dynamics ON the
network. The neurons in the brain slowly reconnect even while the
brain is actively performing tasks and in response to those
activities. Curiously, an evolving network can always be exactly
represented by a (sometimes much) larger network with static
connections. What we need is a set of laws governing the behavior of
large networks analogous to the laws of thermodynamics that describe
the behavior of gases without the necessity of knowing what the
individual molecules are doing or why or even that the gas is made
up of molecules.
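To make the ON/OF distinction concrete, here is a toy sketch in
Python; every detail of it (the tanh units, the Hebbian-style weight
update, the ratio of time scales) is an illustrative assumption
rather than a model from any particular talk:

    import numpy as np

    # Fast dynamics ON the network: node states x evolve through the
    # current weights W. Slow dynamics OF the network: W itself adapts
    # in response to the activity, so the two dynamics are coupled.
    rng = np.random.default_rng(1)
    n = 10
    W = rng.standard_normal((n, n)) / np.sqrt(n)  # connection strengths
    x = rng.standard_normal(n)                    # node states

    eps = 0.01   # the OF dynamics is much slower than the ON dynamics
    for step in range(1000):
        x = np.tanh(W @ x)                  # dynamics ON the network
        W += eps * (np.outer(x, x) - W)     # dynamics OF the network

    print(np.round(x, 3))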
If I may digress for a moment, I would like to mention one
accomplishment of which I'm especially proud. Twenty years ago, I
became interested in the question of what is the simplest network
that is capable of exhibiting chaos. One would think that question
had long ago been asked and answered, but apparently not. I didn't
originally think of the question in that way, but rather I was
trying to find the simplest ordinary differential equation whose
solution is chaotic, and it was only in preparing this lecture that
I realized it was the same question. It has long been known that at
least three agents are required and that at least one of them must
be nonlinear, but I was able to show that only three feedback loops
are required and how they are arranged. Two years later Stefan Linz
and I found another equally simple arrangement, and both cases were
published in Physics Letters A.
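For the curious, the equation from that first paper can be written as
the jerk equation x''' = -A x'' + (x')^2 - x with A = 2.017, which is
three agents in a loop with a single quadratic nonlinearity; the
Linz-Sprott variant replaces the quadratic term with an absolute
value. Here is a sketch of a numerical integration in Python (the
time step is my arbitrary choice, and the initial condition is only
believed to lie in the attractor's rather small basin; other starting
points can escape to infinity):

    import numpy as np

    # The simplest dissipative chaotic flow as three first-order ODEs:
    #   x' = v,  v' = a,  a' = -A*a + v*v - x
    A = 2.017

    def deriv(s):
        x, v, a = s
        return np.array([v, a, -A * a + v * v - x])

    def rk4_step(s, dt):
        # one classical fourth-order Runge-Kutta step
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        return s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    s = np.array([0.0, 0.0, 1.0])   # believed to lie in the small basin
    for n in range(50000):          # integrate to t = 500
        s = rk4_step(s, 0.01)
    print(s)   # a point on (what should be) the strange attractor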
Large nonlinear networks are appropriate models of the complex
adaptive systems that occur throughout nature, and much
has been learned recently about their behavior. In particular, they
are usually chaotic, although only weakly so. They are thus
inherently unpredictable, but their sensitivity to small changes in
both the state of the system and its parameters makes them
potentially easy to control. More interestingly, such systems can self-organize,
adapt, and learn -- qualities we normally associate with human
intelligence, but that are observed in physical systems as well.
Witness the organization of the Universe into galaxies and stars and
planets that ultimately gave rise to life on Earth.
We have heard many speakers over the years make dire predictions,
especially regarding the climate and the ecology, but I am more
optimistic than most about our future for five fundamental reasons:
- Negative feedback is at least as common as positive feedback,
and it tends to regulate many processes.
- Most nonlinearities are beneficial, putting inherent limits on
the growth of deleterious effects.
- Complex dynamical systems self-organize to optimize their
fitness.
- Chaotic systems are sensitive to small changes, making
prediction difficult, but facilitating control.
- Our knowledge and technology will continue to advance, meaning
that new solutions to problems will be developed as they are
needed or, more likely, soon thereafter in response to the need.
Whether it's fusion reactors, geoengineering, vastly improved
batteries, halting of the aging process, memory implants,
de-extinction, or some other game changer, things may get worse
before they get better, but humans are enormously ingenious and
adaptable and will rise to the challenge of averting disaster.
This is not a prediction that our problems will vanish or an
argument for ignoring them. On the contrary, our choices and actions
are the means by which society will reorganize to become even better
in the decades to follow, albeit surely not a Utopia.
Thank you.