
S Y S T E M A N T I C S   1


I. GENERAL SYSTEM BEHAVIOR AND MORPHOLOGY


1. If anything can go wrong, it will.

Immortalized as "Murphy's Law," this rather dour observation is actually a generalized version of a comment made by USAF Capt. Ed Murphy, a development engineer for Col. J.P. Stapp's rocket sled research at Edwards Air Force Base near Muroc, California, about 1949. According to George E. Nichols (Reliability and Quality Assurance manager for the Viking Mars lander project at the Jet Propulsion Lab), there was a certain mechanic about whom Capt. Murphy said: "If there's a way to do it wrong, he'll find it."

Somehow, over time, this particular observation about one person was broadened into an indictment of systems in general--and indeed, of the entire world.


2. Systems in general work poorly or not at all.

Another way of saying this: Things don't work very well--in fact, they never did. What is this but the recognition that being human is intimately linked to building systems, which by their nature are always more likely to fail than to succeed?

Trying to short-circuit the evolutionary process means making lots of mistakes. The really amazing thing is that we succeed as often as we do.


3. Complicated systems seldom exceed five percent efficiency.

Part of the reason why designed systems (as distinct from evolved systems) perform so poorly is their complexity. The trouble with complexity is not simply that each component of a system may fail; the trouble is that more components means more connections between those components, which in turn means more potential points of failure than a simple count of components would suggest.

Thus, the more components there are, the more likely it becomes that at any moment some parts of the system will be operating incorrectly or not operating at all. If these are critical aspects of the system, then the entire system may be in "failure-mode."

For this reason, five percent efficiency should be thought of as good in very complex designed systems.
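
To see why the connections matter even more than the components, here is a rough sketch (my own illustration, with made-up reliability numbers, not anything from the original text): if each component and each pairwise connection has even a 99 percent chance of behaving at any given moment, the odds that the whole assembly is behaving collapse quickly as the parts multiply.

    from math import comb

    # A rough sketch with made-up numbers: treat every component and every pairwise
    # connection between components as something that can independently be "wrong"
    # at any given moment with a small probability.
    def chance_all_working(components: int, p_each_ok: float = 0.99) -> float:
        connections = comb(components, 2)        # every pair is a potential failure point
        failure_points = components + connections
        return p_each_ok ** failure_points       # probability that nothing is misbehaving right now

    for n in (5, 20, 50):
        print(f"{n} components, {comb(n, 2)} connections: "
              f"{chance_all_working(n):.2g} chance everything is working")

With five components the odds of everything behaving are still about 86 percent; with fifty components (and 1,225 connections) they fall to a few chances in a million--which is the spirit in which five percent efficiency starts to look respectable.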


4. In complex systems, malfunction and even total non-function may not be detectable for long periods (if ever).

Large-scale human systems are particularly susceptible to this condition. Because the determination of the quality of the system's output is often highly subjective--in many cases coming from a component of the system itself--the partial or even complete failure of the system can be masked. This is not always deliberate; sometimes it is simply the complex nature of the system itself that prevents observers from determining within any useful margin of error the level at which the system is functioning.

This condition rarely occurs in non-human systems. Either the system's efficiency will be impaired to some detectable degree, or else it will pack up and refuse to work at all (or self-destruct, sometimes spectacularly). Either way, malfunction and non-function are easier to spot in non-human systems.

This is because such systems--especially self-organizing systems in nature--tend to either work or not. If they don't work, it's obvious. If they do work, they will tend to do so at close to their maximum potential efficiency because they are simple enough to work; there aren't a lot of parts getting in each other's way.

Humans could probably learn from this. Admittedly, it's not easy to imagine what a self-organizing car engine would look like, but maybe it's time someone tried.


5. A system can fail in an infinite number of ways.

The use of the term "infinite" is only a slight exaggeration, given the number of bits of matter and energy present in any system and the interactions between those component parts. It's really an unfair fight. Although there are only a very few ways in which a system may operate to fulfill its purpose, the number of ways in which a system can fail to work is close enough to infinite for any engineer.
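
A crude counting sketch (my own simplification, not the author's) suggests how fast the failure side of the ledger grows: model each of n parts as simply behaving or misbehaving, and only one of the 2^n possible configurations counts as "working."

    # Each of n parts is modeled as either behaving or misbehaving (a deliberately
    # crude assumption). Exactly one configuration -- all parts behaving -- is
    # "working"; every other configuration is some distinct flavor of failure.
    for n in (10, 50, 300):
        failure_states = 2 ** n - 1
        print(f"{n} parts: about {failure_states:.2e} distinct ways to be wrong")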


6. Systems tend to grow, and as they grow, they encroach.

The human impulse toward improvement (some might call it "tinkering") means that any working system is regarded as a challenge. "If this system is good now," the thinking goes, "expanding it could only make it better." Another way of describing this behavior: "Systems tend to grow at five per cent per annum." (Examples abound in business and government... and indeed, government itself is an example of how working systems tend to grow.)
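
Five per cent per annum sounds modest; compounded, it is not. A quick sketch (my own arithmetic, assuming the rate simply holds steady):

    # Steady compound growth at 5% per year (an assumed, simplified rate).
    size = 1.0
    for year in range(1, 61):
        size *= 1.05
        if year in (14, 28, 60):
            print(f"year {year}: {size:.1f}x the original size")
    # year 14: ~2.0x  -- the system has doubled
    # year 28: ~3.9x  -- and doubled again
    # year 60: ~18.7x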

There are several problems with taking this path. One, growth can sometimes be counterproductive because larger systems are more prone to failure than smaller systems. Usually a small system that works in a limited environment is better than a larger version that fails more people. Modern businesses have finally begun to learn this lesson; many are now growing, not by simply adding more bodies to an already bloated staff, but by splitting off subsidiaries and breaking large, unwieldy groups into smaller and more responsive teams. The entrepreneurial model is being adopted because it works better. And it works better because small systems are less likely to fail to meet their design goals than large systems. (Of course, if the goals that are met are useless or counterproductive goals for the larger system, then the larger system may still suffer. But the lower-level systems will be very efficient at running off a cliff.)

Two, in many cases a large system expanded from a small working system will begin to demonstrate strange new behaviors never anticipated by either the original designers or the expansioneers. If the system is big enough and lasts long enough, these new functions may eventually replace those originally planned for the system.

Finally, as a system grows, its focus changes. A system originally designed and built to serve humans begins, as it grows, to require humans to serve it. Rather than being labor-saving devices, large and complex systems are often said to require "care and feeding," as though they were loud, messy, self-absorbed and demanding human children. Thus it can be said that, as systems grow, they tend to encroach, not only on the resources of those they were meant to serve but on other systems which might be working better. With increasing size, systems stop performing their designed function and start expecting other systems to perform that function for them.

In turn, those systems grow to take on the new functions. And as they grow, they encroach on still other systems. The only thing allowing necessary functions to be performed at all is the constant genesis of new systems which haven't yet grown into uselessness.

Yet.


7. As systems grow in complexity, they tend to oppose their stated function.

Although it may seem counterintuitive, as systems become more complex their outputs actually begin to change to preserve or even intensify the problem they were originally designed to solve. This becomes clear when it is understood that systems are only necessary as long as the problem they are meant to solve exists. If the problem is ever "solved," the system will become unnecessary... which means that the system actually has a vested interest in maintaining the existence of the problem it was designed to bypass or eliminate.

As a system grows in complexity, this impulse grows in strength to the point that the system can begin doing more harm than good. One obvious example of this is the modern system of welfare. Originally developed to help nonproductive members of society survive until they could find new jobs, the system worked. And because it worked, it was expanded.

How do you expand a welfare system? You do it by redefining "poverty," by expanding the criteria of need to include more citizens. So, back in 1964 a Social Security Administration employee named Mollie Orshansky invented a dynamic "poverty threshold." Orshansky had read a 1955 U.S. Department of Agriculture survey that revealed that families of three or more persons spent roughly one-third of their incomes on food. So Orshansky decided that a "poverty threshold" level of income should be set at three times the estimated cost of the USDA's "economy" food plan. Any family earning less than this estimated amount (now adjusted annually for inflation by the Bureau of the Census) would be considered "poor."

(In 1990 the threshold income for a family of four was $12,700. By that measure, almost one of every seven Americans was "poor.")
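
The mechanics of the threshold fit in a few lines. Here is a hedged sketch of the rule as described above; the food-plan cost used below is a made-up placeholder, not an official figure.

    # Orshansky-style threshold, as described in the text: three times the estimated
    # annual cost of the "economy" food plan, adjusted each year for inflation.
    FOOD_MULTIPLIER = 3   # families were observed to spend roughly one-third of income on food

    def poverty_threshold(annual_food_plan_cost: float, inflation_adjustment: float = 1.0) -> float:
        return FOOD_MULTIPLIER * annual_food_plan_cost * inflation_adjustment

    # A hypothetical $4,200-per-year food plan, with no inflation adjustment:
    print(poverty_threshold(4_200))   # prints 12600.0 -- in the neighborhood of the 1990 figure quoted above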

This definition caught on quickly with the Johnson-era bureaucrats, who were looking for justifications for the Great Society "War on Poverty." It ensured that there would always be a fairly constant percentage of the population classified as "poor," even if they lived like kings relative to the truly poor of other nations (or, more appropriately, when compared to an absolute measure of poverty). Transferring wealth to these definitional poor thereby became a permanent function of our system of government.

And so the welfare system grew, and grew, and grew, classifying more and more citizens as "clients" and swelling the number of public employees hired to handle that "caseload." As with systems generally, the welfare system became institutionalized. It came to employ so many people and administer so much money and power that those within the system began to rely on the continued survival of indigency for their own well-being. The system itself, created to end dependency, slowly became the single greatest enabler of dependency.

Evidence of this is unpleasantly easy to find. Spokespersons for the welfare apparatus brag about their "outreach" efforts to identify individuals thought not to be fully using all the money and goods and services to which the system says these persons are "entitled." School officials write letters to parents encouraging them to enroll their children in federal lunch programs whether the children require such assistance or not, just so the schools can qualify for other federal tax dollars. By 1996 there were 77 different federal welfare programs, whose combined effect (since most welfare recipients qualify for multiple forms of assistance) was to make welfare pay better in some places than teaching, nursing and police work. (Note: this isn't an argument for higher teacher, nurse and police wages, which are already high--relative to the number of job-seekers--due to unionization.)

It seems that the more complex a system becomes, the more tightly it embraces the problem it was meant to solve. Eventually the system takes on the new goal of preserving the original problem in order to preserve its own existence.


8. As systems grow in size, they tend to lose basic functions.

A system is created to solve a problem. It exists to perform a basic function. Sometimes it actually works.

If it does, it is doomed, because its success means that it will be given new functions to perform. Sometimes these new functions are conceptually related to the old one, in which case the old function may continue to be performed at some degree of efficiency. But even so, it becomes less efficient, because the "system" is now no longer just about performing that old function. The very meaning of the system has grown to encompass a new purpose, so that "success" is now defined not by whether the original problem is being solved, but by whether the expanded system can claim to be directed at the new goal.

Systems grow by expanding their support functions. As they continue to grow, these support systems become "necessary" functions that must in their turn have new support systems of their own added. The priority of the original function is downgraded over time as the importance of the support functions increases. Eventually no one even remembers what the system was originally supposed to do; the new, larger system goal is all that matters.


9. The larger the system, the less the variety in the product.

As a system grows, it becomes progressively more difficult for the element designed to exercise control to do so. In order for a system's defined control element to function as designed, it requires information about how well the system's outputs are conforming to the control element's goals.

Because it is difficult to measure highly varied outputs, what happens in reality is that the variation in those outputs is often artificially constrained in order to make them easier to quantify numerically. System success, as a result, is slowly transformed from being about solving a problem to being about whether measured numbers resemble expected numbers.

And difficult-to-measure variety in the system's outputs starts looking like an error condition. As system size increases, the actual goal of the system's designed control elements shifts from serving the system's users to simplifying the work of those control elements.

Note: You can always tell when a business system reaches this point: it starts imposing "quality" processes. Whatever they happen to be called this week--"quality circles," "TQM," or the diabolical "ISO 9000"--the real point of such exercises is to allow executives (the designed control elements of a business system) to continue to exercise control over an enlarged system by limiting the variety in output product.

Notice how this goal is very different from "giving customers what they want."


10. The larger the system, the narrower and more specialized the interfaces between individual elements.

It is a basic result of information theory that the integrity of a message declines as the length (and noisiness) of the communication channel through which it travels increases. In other words, information degrades over distance.

For an example of this, consider the game in which a person whispers a message in a second person's ear. The second person then whispers it to a third person, and so on. When it reaches the last person, the message received is compared to the original message... to which it often bears little if any resemblance.

There are a number of ways to minimize this effect. One way is to add redundancy--to repeat a message, or parts of a message, so that damage to one copy can be repaired from another. Another method (which usually seems counterintuitive until you think about it) is to add "noise"--low-information filler--to the message. This helps because random damage is then more likely to strike the filler than a critical piece of information.
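
A small simulation (my own construction, with made-up corruption rates) shows the degradation and the first remedy at work: each hop garbles the message a little more, and sending the message down several chains and taking a character-by-character majority vote repairs errors that hit only one of the chains.

    import random

    random.seed(42)
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def whisper(message: str, corruption_rate: float = 0.05) -> str:
        """One 'person' repeats the message, garbling each character with some probability."""
        return "".join(random.choice(ALPHABET) if random.random() < corruption_rate else ch
                       for ch in message)

    def relay(message: str, hops: int) -> str:
        """Pass the message through a chain of whisperers."""
        for _ in range(hops):
            message = whisper(message)
        return message

    original = "the system is down again send help"

    # Degradation: the longer the chain, the worse the received message.
    for hops in (1, 5, 20):
        print(f"{hops:2d} hops: {relay(original, hops)}")

    # Redundancy: send the message down three separate chains, then take a
    # character-by-character majority vote. Errors unique to one chain are voted out.
    copies = [relay(original, 5) for _ in range(3)]
    recovered = "".join(max(set(chars), key=chars.count) for chars in zip(*copies))
    print("majority vote:", recovered)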

In the case of systems that have grown to the point that the controlling element is no longer the acting element--so that communication channels have lengthened--yet another method of minimizing error is to formalize the structure of messages. When the expected structure of a message is known in advance, even an error-laden message can often be understood from context, since some of a message's information is carried in its structure.

The problem is that a side effect of formalization is "brittleness" of communication. That is, it gets easier to say "I didn't understand that message" if the sender didn't express that message in precisely the expected format. Since important information is contained in the message's structure, if the structure is "wrong," the message may not be understood.
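
A minimal sketch of that brittleness (the message format and field names below are invented for illustration): a receiver that insists on one exact structure will bounce a request any human reader would have understood.

    # Hypothetical strict format: "NAME|DEPARTMENT|ITEM|QUANTITY"
    def strict_receiver(message: str) -> dict:
        parts = message.split("|")
        if len(parts) != 4:
            raise ValueError("request rejected: improper format -- please resubmit")
        name, dept, item, qty = parts
        return {"name": name, "dept": dept, "item": item, "quantity": int(qty)}

    for msg in ("Smith|Accounting|stapler|2",        # matches the expected structure
                "Smith, Accounting: 2 staplers"):    # clear to a human, but structurally "wrong"
        try:
            print("accepted:", strict_receiver(msg))
        except ValueError as err:
            print("bounced: ", err)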

Where this becomes a problem for systems is that the formalization of intersystem messages tends to make the interfaces between the various parts of a system narrower and more specialized. In order for one part to communicate with another, messages are required to be phrased in highly particular ways. (Have you ever had to fill out a purchase requisition?) And the result of this--especially for human systems in which one "part" may have got up on the wrong side of the bed that morning--is that it gets easier to shift work from the system itself to the user of that system.

Consider the U.S. Internal Revenue Service, or Inland Revenue, or just about any bureaucratic agency which interacts with the public. As these systems grew, they began to require citizens to communicate with them in ever-more-formal ways. The interface between a citizen and the IRS came to consist of the citizen being required to fill out a maddeningly complicated form (four times a year for the self-employed). The work originally meant to be performed by the system--the administration of revenue collection--was slowly shifted to the users of that system as it grew. To add insult to injury, any deviation by the user in the formal structure of that work--say, failing to include one of the many pieces of paper demanded--is severely punished.

Another example of how, as a system grows, the focus shifts from satisfying the users of a system to easing the work of its administrators.


11. Control of a system is exercised by the element with the greatest variety of behavioral responses.

Systems don't operate in a vacuum. They function in an environment, and the important thing to note about an environment is that it can change.

As with biological systems, which must adapt or die, so it is with other kinds of systems. The difficulty for human-created systems, however, is that their activity is meant to be directed by conscious intelligences. Because we tend to use hierarchical command structures, that means the activity of a whole system is supposed to be directed from the top down. But as systems become very large, the behavioral responses of both the highest and lowest elements are often constrained.

The director of an agency, for example, is the element most visible to outside persons concerned with the activities of that agency. So what that director can do is often limited by demands on his or her time (shareholder meetings, appearances on Nightline, etc.) or by political maneuvering (lobbying, avoiding the appearance of impropriety, etc.). And what the person on the shop floor is permitted to do is usually even more tightly constrained.

Thus it can be said that these system elements, whose responses to environmental stimuli are highly constrained, don't actually exercise real control over the system of which they are parts. Instead, the element that is objectively the most capable of directing the system in response to environmental conditions is the true controlling element of that system.

In a school, that element may be the teachers union. In a government agency, it may be (and probably is) an assistant deputy undersecretary for something-or-other. In a business, it may be the sole supplier of a critical component. In each of these cases and others, the element that is capable of the greatest variety of behavioral responses to changes in its system's environment is the element that exercises the most control over that system.
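
As a rough sketch of the principle (the roles, events, and numbers below are invented, not drawn from the text): if each element is scored by which kinds of environmental change it is actually free to respond to, the element with the widest repertoire ends up steering the system most of the time.

    import random
    random.seed(1)

    # Invented repertoires: the kinds of change each element is actually free to respond to.
    repertoires = {
        "agency director":       {"budget cut"},            # time and politics constrain the rest
        "shop-floor worker":     {"equipment failure"},
        "deputy undersecretary": {"budget cut", "equipment failure",
                                  "rule change", "supplier problem"},
    }
    changes = ["budget cut", "equipment failure", "rule change", "supplier problem"]

    steered_by = dict.fromkeys(repertoires, 0)
    for _ in range(1_000):
        event = random.choice(changes)
        # whichever element can actually respond gets to decide what the system does
        responders = [name for name, repertoire in repertoires.items() if event in repertoire]
        steered_by[random.choice(responders)] += 1

    print(steered_by)   # the element with the greatest variety of responses dominates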


12. Loose systems last longer and work better.

Gall, in Systemantics, describes Charles Babbage's experience with his Difference Engine. In the early versions, the internal parts tended to bind up against one another, with the result that the entire system either did not work or broke from being pushed too hard.

It may seem a stretch to equate this with groups of people--to suppose that there is some mysterious power of Internal Friction making things fail to work--but there is a connection. Namely, systems tend to demonstrate similar structural properties no matter what their composition. If it's a system--if it is a thing whose function is generated and determined by the organization of its formative elements--then it can be counted on to act like a system. That is, it will tend to bind up, and even break, if pushed to do more than the Internal Friction of its components will permit.

This leads rather obviously to the system design principle that systems whose elements are not tightly constrained with respect to one another will tend to perform their given function more capably. There will be occasional slippage between parts, but that is the price which must be paid to achieve overall system function.


13. Complex systems exhibit complex and unexpected behaviors.

Complex systems by definition are composed of many smaller parts. The interaction of these parts--since there are so many of them, and so many connections between them--reaches a point of incomprehensibility very rapidly. Thus the mutual action of the parts is likely to be unpredictable.

This leads to the potential shifting of the controlling element of the whole system. Over time, as the parts of a system change with respect to each other, slipping here, binding there, the controlling element may change. With this change in control may come a change in "goals" of the entire system. And it is this unpredictability of changes in control that makes the whole system appear to act unpredictably.


14. Colossal systems foster colossal errors.

From burying their rulers under simple stone structures called mastabas, the Egyptians progressed (if that is the word) to complex pyramids. As the pyramids got larger and larger, they became too tall for their bases. The pyramids fell down.

Eventually the Egyptian engineers figured out the problem. This allowed the pharaohs to sink more and more of the treasury into building ever more elaborate tombs. As the nation's output shifted largely toward building pyramids and away from the other functions of statecraft, the state could no longer support itself.

Egypt fell down.



Background

I. General System Behavior and Morphology

II. Systems and Information

III. System Design Guidelines: Number

IV. System Design Guidelines: Size

V. System Design Guidelines: Longevity

