Friday, April 23, 2010

Self-Organized Criticality

In 1987, the physicist Per Bak and his colleagues published a paper introducing the term "self-organized criticality".  They developed a machine that dropped grains of sand onto a disk and counted the avalanches that occurred.  They found that most of the time, a grain of sand would hit the pile and only slightly adjust the configuration of grains.  Once in a while, a small avalanche would occur, sending sand right off the disk.  But rarely, an avalanche several standard deviations larger than the mean would knock the pile down to a fraction of its previous size.  They hypothesized that the sand pile was attracted to a critical state, at which point the next grain of sand could cause a near-complete collapse of the pile.  They also found that the distribution of avalanches followed a power law: the frequency of avalanches of a given size is inversely proportional to a power of that size, so small avalanches are common and very large ones rare.  Power-law distributions are also fractal in a statistical sense, since they are self-similar and scale invariant.
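The sandpile idea is often sketched as the Bak–Tang–Wiesenfeld cellular automaton rather than a physical disk: drop grains on a grid, and any cell holding four or more grains topples, passing one grain to each neighbor; grains pushed past the edge leave the pile. This is a minimal Python illustration of that automaton (grid size, drop count, and seed are my own choices), not a reproduction of Bak's apparatus:

```python
import random
from collections import Counter

def topple(grid, n):
    """Relax the grid: any cell with 4+ grains topples, sending one grain
    to each of its four neighbors. Grains pushed past the edge fall off
    the pile. Returns the number of topplings (the avalanche size)."""
    size = 0
    unstable = [(r, c) for r in range(n) for c in range(n) if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:
            continue  # may have relaxed already
        grid[r][c] -= 4
        size += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return size

def run(n=20, drops=5000, seed=1):
    """Drop grains one at a time at random cells, recording each avalanche."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(drops):
        r, c = random.randrange(n), random.randrange(n)
        grid[r][c] += 1
        sizes.append(topple(grid, n))
    return sizes

sizes = run()
counts = Counter(s for s in sizes if s > 0)
# Tiny avalanches vastly outnumber large ones -- a heavy-tailed,
# roughly power-law distribution once the pile reaches its critical state.
```

Tallying `counts` shows the signature of criticality: no "typical" avalanche size, just many small events punctuated by rare system-spanning ones.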


This concept of self-organized criticality has since been studied in many fields, including political science.  A great example from that field is Gregory G. Brunk's "Self-Organized Criticality: A New Theory of Political Behaviour and Some of Its Implications."  Brunk studied changes in political affiliation in American gubernatorial elections between 1790 and 1990.  Although most elections showed small changes in support for one party or another, some showed shifts of more than 40%, many standard deviations from the mean.  He concludes that political life behaves as a self-organized criticality system, given how common such large, unpredictable shifts are.

Rather than a sand pile, Brunk prefers to use an analogy of a forest fire to explain the concept of self-organized criticality.  Imagine a forest where as time passes, undergrowth expands, trees are planted, and brush dries out and becomes very flammable.  In a very young forest, a fire might burn a few trees down, but they are very spread out and there is little brush to keep the fire going.  In an old, dense forest, however, one spark could turn into a huge forest fire.  How could we model this using PS-I?

Luckily, I've already prepared a model that does exactly this, and you can download it here.  Every time step, the model grows a certain number of trees (adjustable via the parameter probability_of_growth).  There is also a 0.05% chance that a tree on the landscape will spontaneously combust, causing all neighboring trees to burn as well.  As you run the model, you'll notice that most fires are small and die out quickly.  Once in a while, though, a fire will be disastrous, burning nearly the whole forest.  By choosing New Statistics Plot in the View menu, you can see the number of trees on the landscape over time (image above).  Where is the critical point?  When is a large fire most likely?  One interesting implication of this theory is that large events actually need to "build up" over time, so perhaps the idea that we're "due" for something terrible to happen isn't so far-fetched.
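If you'd rather see the dynamics without installing PS-I, the same forest-fire logic can be sketched in plain Python.  This is my own minimal version, not the PS-I model itself: `p_grow` stands in for the model's probability_of_growth parameter, and `p_spark` is the 0.05% spontaneous-combustion chance; grid size, step count, and seed are arbitrary choices.

```python
import random

def burn_cluster(forest, n, r, c):
    """Burn the tree at (r, c) and every tree connected to it through
    neighboring cells. Returns the number of trees burned (fire size)."""
    stack, burned = [(r, c)], 0
    while stack:
        r, c = stack.pop()
        if 0 <= r < n and 0 <= c < n and forest[r][c]:
            forest[r][c] = 0
            burned += 1
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return burned

def step(forest, n, p_grow=0.01, p_spark=0.0005):
    """One time step: empty cells sprout a tree with probability p_grow,
    then each tree ignites with probability p_spark (0.05%), burning its
    whole connected cluster. Returns total trees burned this step."""
    for r in range(n):
        for c in range(n):
            if not forest[r][c] and random.random() < p_grow:
                forest[r][c] = 1
    burned = 0
    for r in range(n):
        for c in range(n):
            if forest[r][c] and random.random() < p_spark:
                burned += burn_cluster(forest, n, r, c)
    return burned

random.seed(2)
n = 30
forest = [[0] * n for _ in range(n)]
burn_sizes, tree_counts = [], []
for _ in range(2000):
    burn_sizes.append(step(forest, n))
    tree_counts.append(sum(map(sum, forest)))
# Most steps see no fire or a tiny one; occasionally a spark lands in a
# dense, connected forest and a large fraction of all trees burns at once.
```

Plotting `tree_counts` over time reproduces the sawtooth you see in PS-I's statistics plot: long stretches of slow growth toward a dense, fire-prone state, cut short by sudden collapses.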
