SYSTEMANTICS
III. SYSTEM DESIGN GUIDELINES: NUMBER
1. New systems mean new problems.
Systems are created to solve some problem. The trouble is, the moment a system is created it brings into being an entirely new set of problems related to the general behavior of all systems. Instead of there being just the one original problem, now there is the original problem plus the problems generated by the new system.
If the new system works (as happens on rare occasions), this situation might be considered a fair trade-off. But of course it won't last. A working system always expands. Eventually this system--despite the best intentions of its creators and functionaries--will expand to the point that its problems outweigh its products. If it continues to survive, at some point the system's problems will come to be regarded as desirable products.
And yet we persist in thinking we can solve problems by throwing systems at them.
2. Systems should not be unnecessarily multiplied.
The lesser-known formulation of Occam's Razor. (The usual form is, "Given a choice between two systems, choose the simpler.")
Again, the point is that the problems generated by multiple systems don't just add together arithmetically (two systems with three problems each making six problems in total). In fact, the problems expand geometrically or even exponentially (two systems with three problems each combining to produce nine problems or more). This is because it's not enough to simply see the visible pieces of a system--you also have to consider the relationships between the pieces, the interactions of all the problems... and that's where the combinatorial explosion happens.
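A toy model (my own sketch, not from the original text) makes the arithmetic concrete: count each system's own problems, then count every cross-system pair of problems as a potential interaction problem.

```python
from itertools import combinations

def problem_count(problems_per_system, num_systems):
    """Toy model: each system carries its own problems, and every
    pair of problems belonging to *different* systems can interact
    to spawn a new problem."""
    own = problems_per_system * num_systems
    # Label each problem by (system index, problem index).
    flat = [(s, p) for s in range(num_systems)
                   for p in range(problems_per_system)]
    # Count only cross-system pairs -- same-system pairs were
    # already there before the systems were combined.
    interactions = sum(1 for a, b in combinations(flat, 2)
                       if a[0] != b[0])
    return own, interactions

print(problem_count(3, 1))  # one system: (3, 0) -- just its own problems
print(problem_count(3, 2))  # two systems: (6, 9) -- six problems plus
                            # nine cross-system interactions
```

The own-problem count grows linearly with the number of systems, but the interaction count grows with the product of the problem sets, which is the combinatorial explosion the text describes.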
3. Do it without a new system if you can.
Do it with an existing system if you can.
Do it with a little system if you can.
In other words, if you can, don't assume that you have to design and implement a new system to solve your problem. If after careful thought you conclude that some system is required, don't assume that you have to build a completely new system from scratch. And if a new system does seem to be called for, don't assume that it has to be a big one that will do everything that will ever be needed.
4. Escalating the wrong solution will not improve the outcome.
Example: In the world of software development, projects sometimes wind up being delayed. Humans can make mistakes; requirements can change; resources can be denied--there are numerous events that can affect a plan for the worse.
What we do know is that adding more systems to prevent failure often makes the problem worse. The IBM "human-wave" method of overcoming programming defects is a classic example of this. Rather than removing distractions from small teams whose members all know and can communicate with each other, executives reason (if that is the word) that if five programmers can write a program in five days, twenty-five programmers should be able to write the same program in one day.
What actually happens is that the multiplication of systems (in this case, programmers) massively increases the amount of communication required to achieve the desired goal. The more systems that have to function together, the more time they spend trying to communicate rather than actually working the problem.
Sometimes, less is better.
5. If a problem seems unsolvable, consider that you may have a metaproblem.
In other words, it may be that the problem you're trying to solve is itself the result of some other, deeper problem. In such a case, figuring out the real problem may give you the necessary insight to solve the lesser problem, or even make its solution unnecessary.