Pointer to article:
Here’s one of the scariest pronouncements I’ve ever read, for reasons both technological and theological:
“The better you design a system, the more likely it is to fail catastrophically. It's designed to perform very well up to some limit, and if you can't tell how close it is to this limit, the collapse will occur suddenly and surprisingly. On the other hand, if a system slowly erodes, you can tell when it's weakening; typically, a well-designed system doesn't expose that.”
I’m not sure I agree with that grand statement. If Leonard Kleinrock had cited a few examples to bolster the assertion, it would carry more credibility. Is this supposed to make us swear off structured, top-down, waterfall system development approaches forever? Is that all pure Frankensteinian hubris that will produce monsters destined to run amok and torch the castle wherein they were created? Should we instead let rogue teams of maverick programmers attack any problem they see with any available code they can slap down on a moment’s notice, regardless of whether it duplicates others’ work, or whether it conflicts with or fails to interoperate smoothly with legacy systems? Without regard for what high-level architecture, if any, it figures into? And what do we say to the deists who regard all of creation as figuring into God’s master plan, which, by definition, is the best-designed system of all? That it’s all destined to “fail catastrophically”? That Armageddon is the fate of any God-designed order of things?
Getting back to earth for a moment, and to the interview with Kleinrock, he contradicts himself in the very next paragraph:
“So, how can complex systems be made more safe and reliable? Put the protective control functions in one portion of the design, one portion of the code, so you can see it. People, in an ad hoc fashion, add a little control here, a little protocol there, and they can't see the big picture of how these things interact. When you are willy-nilly patching new controls on top of old ones, that's one way you get unpredictable behavior."
Huh? Follow the train of my puzzlement on this for a moment. The best-designed systems are those that surface their overall structure, behavior, and controls in the most visible, maintainable, monitorable, extensible way. And these are the systems that Kleinrock says are doomed to fail catastrophically. So how does he propose to save them from self-immolation? By surfacing the control code even more saliently! By making them even better designed! In the previous paragraph, he implied that chaotic bottom-up development produces the most stable structure. In this paragraph, he says that chaotic development is to be avoided, in favor of structured top-down development! I don’t get it. He’s trying to have it both ways.
Actually, I think that, in the final analysis, he’s arguing that the Second Law of Thermodynamics is God’s fundamental law, that the best-designed systems are those that hint at grand eternal plans but slowly melt into entropy, accepting the inevitability of a steady stream of localized fixable malfunctions, thereby warding off the “Big Crunch” that some say will reverse the plan burst forth in the “Big Bang.” How else to interpret Kleinrock’s statement: “On the other hand, if a system slowly erodes, you can tell when it's weakening; typically, a well-designed system doesn't expose that.”
Is the Internet—Kleinrock’s Big Bang—eroding around us? Are spyware, spam, viruses, Trojans, DDoS attacks, and other assaults on the matrix a sign of this? From a systemwide point of view, they’re all more or less “localized fixable malfunctions,” and none of them has crashed the Internet as a whole, which keeps, bottom-up, layering new controls over old to keep the rickety structure operating reasonably well. If Kleinrock’s perspective is valid, should we doubt that any of these localized malfunctions could ever snowball into an Armageddon that crashes the Internet as a whole?
I certainly hope so. My hope is the only certainty I know on this matter. Hope expressed through prayer, secular or otherwise, to the cybergod(s).