Here are some real-world examples of looping and issue storms that should help.

The city of Seattle, WA had a $1 billion redevelopment boom going, but it was taking 8 months on average to get a building permit to renovate an office building. The builders were screaming at the city because they were paying interest while waiting for a permit. We collected data and found the following.

The average permit application went back for rework 9 times. The city thought they were processing 2,000 permits a year, but counting the reworked ones it was actually 20,000: 2,000 applications times roughly 10 passes each (the original submission plus 9 reworks).

67% of that looping was due to a new energy efficiency code, because (a) the detailed interpretation of the new rules was being worked out via the application process and (b) many applicants made the same mistakes. New rules often create looping as the regulated community learns them. In fact, (a) reflects that learning typically involves looping, and (b) is what I call the "flow of novices" condition. Solution: have the code officials work out the detailed rules up front and train the community.
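Here is a minimal sketch of the flow-of-novices effect, using made-up rates rather than the Seattle numbers: each applicant's chance of being sent back falls as they gain experience with the new code, but a steady inflow of first-timers keeps the total number of passes through the office far above the number of applications.

    import random

    def rework_probability(experience, p_novice=0.9, p_expert=0.2):
        # Chance an application is sent back, falling as the applicant learns the new code.
        # These rates are illustrative assumptions, not the Seattle data.
        return max(p_expert, p_novice - 0.2 * experience)

    def simulate_year(applications, share_novices=0.7, seed=1):
        # Count total passes through the permit office for a mix of novices and repeat filers.
        rng = random.Random(seed)
        passes = 0
        for _ in range(applications):
            experience = 0 if rng.random() < share_novices else rng.randint(2, 5)
            passes += 1                                   # the original submission
            while rng.random() < rework_probability(experience):
                passes += 1                               # each rejection is another full pass
                experience += 1                           # the applicant learns a little each time
        return passes

    print(simulate_year(2000))   # well above 2,000: the office is really handling far more items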

18% of the looping was due to two agencies with contradictory rules. The traffic department wanted parking garages to exit into the alleys, not the streets. The fire department wanted them to exit into the streets, not the alleys. Applications bounced back and forth endlessly. The only person who could resolve this was the mayor, who was so far removed that he did not know it was happening. I call this an "administrative interface": a place where people who must work together are many administrative layers below the person who can resolve their differences. Administrative distance (the number of layers up and down between the persons or groups in conflict) is important in the model. Solution: raise the issue to the proper level. In chaos management this is called early identification and rapid resolution of issue storms.
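Administrative distance can be read straight off the org chart. A minimal sketch, assuming a toy reporting tree (the unit names are invented, not the real Seattle hierarchy): the distance is the number of layers up from one party and down to the other through their lowest common superior.

    # Toy reporting tree: each unit maps to its immediate superior (names are invented).
    REPORTS_TO = {
        "traffic reviewer": "traffic dept",
        "fire reviewer": "fire dept",
        "traffic dept": "public works director",
        "fire dept": "fire chief",
        "public works director": "mayor",
        "fire chief": "mayor",
        "mayor": None,
    }

    def chain_to_top(unit):
        # The unit and every superior above it, in order, up to the top of the chart.
        chain = []
        while unit is not None:
            chain.append(unit)
            unit = REPORTS_TO[unit]
        return chain

    def administrative_distance(a, b):
        # Layers up from a plus layers down to b, through their lowest common superior.
        chain_a, chain_b = chain_to_top(a), chain_to_top(b)
        common = next(unit for unit in chain_a if unit in chain_b)
        return chain_a.index(common) + chain_b.index(common)

    # The two reviewers who must agree sit 6 layers apart; only the mayor can settle the conflict.
    print(administrative_distance("traffic reviewer", "fire reviewer"))   # -> 6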

There were, by the way, 18 different agencies enforcing about 10,000 pages of rules. This is a typical chaotic situation. Note that adding people can make things worse, because they can cause more looping. Aside: I understand that India is famous for the complexity of its bureaucracy.

Another very important measure is the "stack/action ratio": the ratio of the time an item waits to be processed at a given location to the time it takes to process it. In the organizations I have studied, ratios of 100 to 1 are common. Part of this is due to the following seldom-considered fact. Suppose two processors, A and B, are in sequence and process at the same rate. Suppose B, the downstream processor, takes a week off, thereby accumulating a one-week stack. Every item passing through them thereafter will carry a one-week stack delay. The solution is what I call "burning off the stacks."
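A small queueing sketch of that seldom-considered fact, with assumed numbers: A hands B one item per day, B normally clears one per day, B stops for five days, and every later item waits an extra week until someone burns off the stack by working faster than the arrival rate for a while.

    from collections import deque

    def simulate(days=60, outage_start=10, outage_days=5, burn_rate=0):
        # One item arrives at B from A each day. B normally clears one item per day.
        # During the outage B clears nothing; afterwards it clears 1 + burn_rate per day.
        stack = deque()
        waits = []
        for day in range(days):
            stack.append(day)                             # item finished by A lands on B's stack
            on_outage = outage_start <= day < outage_start + outage_days
            capacity = 0 if on_outage else 1 + burn_rate
            for _ in range(capacity):
                if stack:
                    waits.append(day - stack.popleft())   # how long that item sat in the stack
        return waits

    print(simulate()[-1])              # without burning off the stack, items still wait about 5 days
    print(simulate(burn_rate=1)[-1])   # burning off the stack brings the wait back to zero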

One way to implement chaos management is to track the items and spot looping and stacking; this is easiest where the items are documents. The model, which I call the "chaos box," could actually be a visual display, though I have never built one. Sometimes I look at a big office building and imagine the flows of issues therein. In fact I view the issues as the real things and the people as merely their prey, as it were. The ecology of issues.
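Here is one way that tracking could look in code, assuming a simple hypothetical log format (document, station, day arrived, day work started, day finished): looping shows up as a document revisiting a station it has already been through, and stacking shows up as a station's wait time dwarfing its work time.

    from collections import defaultdict

    # Hypothetical log format: (document, station, day_arrived, day_work_started, day_finished).
    LOG = [
        ("permit-17", "intake",  0,  9, 10),
        ("permit-17", "energy", 10, 40, 41),
        ("permit-17", "intake", 41, 55, 56),   # back at a station it already visited: one loop
    ]

    def loop_counts(log):
        # Repeat visits per document; each repeat is one trip around a loop.
        visits = defaultdict(list)
        for doc, station, *_ in log:
            visits[doc].append(station)
        return {doc: len(stations) - len(set(stations)) for doc, stations in visits.items()}

    def stack_action_ratios(log):
        # Per station: total time items waited divided by total time spent working on them.
        wait = defaultdict(int)
        work = defaultdict(int)
        for _, station, arrived, started, finished in log:
            wait[station] += started - arrived
            work[station] += finished - started
        return {station: wait[station] / work[station] for station in wait}

    print(loop_counts(LOG))          # {'permit-17': 1}
    print(stack_action_ratios(LOG))  # both stations show waits far longer than the work itself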

Another example, the opposite of the Seattle case. The US Navy reorganized, and the Chief of Naval Research went from having 6,000 employees to having 36,000 overnight, with no warning, because he picked up all the laboratories. Chaos indeed. Suddenly the Admiral was being asked about things he knew nothing about. He would come back from the Pentagon and hand out these questions to his top staff on the 8th floor of his HQ. They would call their staffers, mostly on the 7th floor. I once literally took the elevator down and watched these questions engulf the cognitive resources of the building. They also flowed out to the labs and flooded the system.

I tracked a few questions, then asked the Admiral how much effort he thought an answer took. He said he had no idea (true) but guessed 40 person-hours. I told him it was closer to 4,000 hours. When you do not know what you are "spending" to within two orders of magnitude, you are out of control. Solution: we created "issue money." When the Chief asked a question he issued X hours --- literally wrote it on a card and gave it to the recipient. That person could not involve anyone else without giving them some of that time. This is classic budgeting, but used in a new way.
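A minimal sketch of the issue-money mechanism, with hypothetical names: the card carries a budget of hours, and delegation can only transfer hours the holder actually has, so the total spend on a question is visible and capped at what the Chief issued.

    class IssueCard:
        # A budget of hours attached to one question. Delegation transfers hours, never creates them.
        def __init__(self, question, hours, holder):
            self.question = question
            self.hours = hours
            self.holder = holder

        def delegate(self, to, hours):
            # Hand part of the budget to someone else; you cannot give what you do not hold.
            if hours > self.hours:
                raise ValueError(f"{self.holder} has only {self.hours}h left on '{self.question}'")
            self.hours -= hours
            return IssueCard(self.question, hours, to)

    # The Chief thinks the answer is worth 40 person-hours and issues exactly that on a card.
    card = IssueCard("Why is the lab over budget?", hours=40, holder="deputy")
    sub = card.delegate("lab liaison", 25)       # the deputy keeps 15 hours, passes 25 down
    try:
        sub.delegate("the 7th floor", 4000)      # the old 4,000-hour issue storm is now impossible
    except ValueError as err:
        print(err)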

A typical organization can have 20 or 30 major issue storms sloshing around at once, all competing for cognitive resources. Hence the model.

Note too that the processors are usually organized via, say, a 2D org chart -- a tree structure. Issues also have a tree structure -- an outline is an example. Tasks have a network structure of precedence relations -- the CPM chart. Reflecting all of these makes the model 6-dimensional. No wonder it is hard to envision what is going on in an organization.
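A sketch of how those three structures might sit in one data model, with invented names throughout: processors live on an org tree, issues on an issue tree, tasks on a precedence network, and each item of work points into all three at once.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # One node in a tree (org chart or issue outline): a name plus its children.
        name: str
        children: list = field(default_factory=list)

    @dataclass
    class Task:
        # One task in the precedence network: it cannot start until its predecessors finish.
        name: str
        predecessors: list = field(default_factory=list)

    @dataclass
    class WorkItem:
        # One piece of flow: which processor holds it, which issue it serves, which task it advances.
        processor: Node
        issue: Node
        task: Task

    # A position in two trees plus a position in a network: the coordinates that make the
    # flow so hard to picture from inside the building.
    org = Node("mayor", [Node("traffic dept"), Node("fire dept")])
    issue = Node("parking garage exits", [Node("alley exit rule"), Node("street exit rule")])
    review = Task("joint review", predecessors=[Task("traffic comments"), Task("fire comments")])
    item = WorkItem(processor=org.children[0], issue=issue.children[0], task=review)
    print(item.processor.name, "holds", item.issue.name, "for", item.task.name)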

Finally, my chaos conjecture is not unfounded. It is based on research for the Navy by B. Huberman (I will look for citations -- it's been 10 years). He showed that delay of information about other processors was sufficient to produce genuine chaos in distributed computer systems. I am quite sure this is true of cognitive production systems, i.e., organizations.

I call this phenomenon a Huberman game. Here is an example. A room full of people close their eyes and each raise either one or two fingers. Assuming the split is not 50-50, which it usually is not, the majority must pay the minority a fine proportional to the difference. Repeat the process. Some of the majority, perhaps many, will change their gesture. If too many change, they lose again. Absent communication, they are unlikely to converge on the 50-50 equilibrium; rather, they will oscillate chaotically around it.
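A minimal simulation of that game, with an assumed adjustment rule (each member of the losing majority switches independently with some probability): with no communication, the split overshoots back and forth around 50-50 instead of settling there.

    import random

    def huberman_game(players=100, rounds=30, switch_prob=0.6, seed=2):
        # Each player raises 1 or 2 fingers; the majority pays the minority; losers may switch.
        rng = random.Random(seed)
        choices = [rng.choice([1, 2]) for _ in range(players)]
        history = []
        for _ in range(rounds):
            ones = choices.count(1)
            history.append(ones)
            if ones * 2 == players:
                continue                                  # exact 50-50 split: no fine, no pressure
            majority = 1 if ones * 2 > players else 2
            # Each member of the losing majority switches with some probability, knowing
            # nothing about what the others will do: the delayed-information condition.
            choices = [
                (3 - c) if c == majority and rng.random() < switch_prob else c
                for c in choices
            ]
        return history

    print(huberman_game())   # the count of one-finger players keeps overshooting around 50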

This, I believe, is the essence of commodity markets and of business cycles generally. They are Huberman games driven by delay of information about what others are going to do. Classical economics has always admitted that its assumption of perfect information is false, but we never knew what difference it made. I think Huberman has solved that puzzle.

Reference: Bernardo A. Huberman and Tad Hogg, "The Behavior of Computational Ecologies," in The Ecology of Computation (Studies in Computer Science and Artificial Intelligence, 2), B. A. Huberman, ed., Elsevier Science, 1988. ISBN 0444703756.

Now you have a lot to think about.

Have fun,

David