"Given the undeniable trend towards all-encompassing change in software development, the case can be made that general purpose software is doomed to always be unreliable and buggy."




"Is this some sort of collective insanity that has somehow woven its way into our society?"

26.10.06

 

The Coming Desktop Metaphor Crash Part I

The world of computers is reaching a critical juncture. At stake are the past fortunes of some very large “old-school” companies such as Microsoft and the future fortunes of some up-and-coming “rebel” companies like Google and Yahoo.

As the prevalence of always-connected broadband internet access increases, some subtle but powerful shifts are taking place. The future looks a lot less like the Bill Gates vision of “A PC on Every Desktop” and a lot more like the Scott McNealy vision of “The Network IS the Computer”.

More specifically, it is my assertion that the desktop computer is already doomed. It has no place in the coming age of always-on, always-connected networking. The desktop PC is an overly burdensome, clumsy, hulking beast that must fall victim to its own overambitious goals.

Why must this happen? Because it is the natural progression of technologies. As technologies mature, they become much more complex on the inside while becoming much simpler on the outside. A good example is the telephone system. These once obtrusive, loud, large, burdensome devices are slowly fading into the background. When phone systems were young they were very difficult to use, but the overall technology was fairly simple. Anyone with a phone and a line could pick up the handset, crank a handle to signal the operator, and wait for her to come onto the line. The operator would ask whom you wanted to call and would then physically connect the wire from your phone to the wire that snaked across town to the other party’s phone. Then, since transmissions were of such poor quality, the two connected parties took turns yelling into the microphone.

That is where computers are now.

If we compare the above description to the modern phone system, the similarities are very superficial. I can still call someone and talk to them, and microphones and transmission systems have improved to the point that a whisper can be heard across the world as if it were in the same room. Saved contact lists ease the burden of having to remember phone numbers, and voice mail makes it easy to receive messages even when I am unavailable. With a cell phone, I can be practically anywhere in the world and receive a call from practically anywhere else, and it is no more difficult than calling my neighbor. The phone system has gotten simple to use.

But that simplicity comes at a cost. Most people are familiar with the concept of “conservation of energy”, a physics law that states that “Energy is neither created nor destroyed, it only changes form”. When wood burns, the heat isn’t created…it already existed as potential energy stored in the logs. That energy came (as all earthly energy does) ultimately from the sun, which in turn converts nuclear energy into heat and light. The point is that the energy must exist somewhere, in some form.

I have borrowed from that law of physics to formulate the “Law of Conservation of Complexity”. This simply states that:
“A system (as a whole) can become neither simpler nor more complex; it can only shift its complexity to other areas.”

If this is true, then how can the above example of the phone system be explained? From a user’s point of view, it has definitely become simpler and easier to use. But a closer look at the system as a whole will show that the law indeed holds true.

Most people have no idea how their phone works. They simply dial a number and the person they wish to speak with answers. But in the one or two seconds that it takes to “connect” to the phone you are calling, an amazing amount of complex technology is making sure your call gets through intact. Even a cursory overview of the process exposes some of the underlying complexities:

1. A communications link is established between your phone and the closest cell tower. In order for that to happen, your phone must send a unique identifying code to all surrounding towers. Each tower analyzes the signal and determines if your phone is authorized to be in the network. If it is, the network creates entries in a large central database recording where your phone is (i.e. which towers are within range). This process may happen every three to four seconds.

2. Once the link is established, your phone sends a request in the form of a phone number that you entered using the keypad.

3. When the request is received by a cell tower, it immediately forwards the request to a central computer system that scans the database for a phone number that matches the number you dialed.

4. If the phone number you dialed is found and it is a land line, the request is forwarded to the phone company switches. If it is a cell phone number, the request is forwarded to the cell tower closest to the person you are calling.

5. When the cell tower receives your request, it passes it on to the receiving cell phone, causing it to play some really horrible “ringtone” and disturb everyone in the office again.

6. When the person being called answers, their voice is turned into a long stream of “1”s and “0”s. These are broken into thousands of tiny pieces called “packets”, which are sent back to the cell tower. START THE CLOCK…

7. When the cell tower receives a packet, it reads the information included with each and every one indicating where it came from and where it needs to go: something like a tiny address label on each 1/1000th of a second of phone conversation.

8. Packets are forwarded to any available “router”, a specialized computer whose job it is to receive packets and forward them towards their intended destination.

9. This process is repeated 10, 50, maybe 500 times. Each time a packet is retransmitted, it is sent in a direction that (in theory) is closer to the destination. An interesting note here is that they are not sent in sequential order, nor are they all sent on the same routes. One packet may travel through Atlanta, GA, while the next one goes through Reston, VA.

10. As packets arrive at the final router (i.e. the destination cell tower), they are held in limbo until enough pieces arrive to be reassembled into their original order. This is accomplished by analyzing sequencing information contained in the header of each packet.

11. Once a group of packets is reassembled, it is forwarded to the receiving cell phone.

12. The receiving cell phone converts the millions of “1”s and “0”s back into an analog signal that can then be cleaned up, amplified, and sent to the ear speaker on the receiving phone.

13. STOP CLOCK: 1/25th of a second has passed and the word “Hello?” emerges from the phone speaker.

As previously noted, the above is a very high-level view of what really happens between two cell phones during a call. There are literally thousands of details, covering clarity, security, caller ID information, and more, that have not even been mentioned.
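To make steps 6 through 11 a little more concrete, here is a toy sketch in Python of the packetize-route-reassemble idea. It is emphatically not the real cellular or VoIP protocol stack; every name in it (Packet, packetize, reassemble) is invented purely for illustration, and it assumes for simplicity that every packet eventually arrives.

import random
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # the "address label": where the packet came from
    destination: str  # where it needs to go
    seq: int          # sequencing information used to restore the original order
    payload: bytes    # a tiny slice of the digitized voice

def packetize(voice: bytes, source: str, destination: str, size: int = 4) -> list[Packet]:
    """Break a stream of digitized voice into small, labeled packets."""
    return [
        Packet(source, destination, seq, voice[i:i + size])
        for seq, i in enumerate(range(0, len(voice), size))
    ]

def reassemble(received: list[Packet]) -> bytes:
    """Once all the packets have arrived, restore the original order and rejoin them."""
    ordered = sorted(received, key=lambda p: p.seq)
    return b"".join(p.payload for p in ordered)

if __name__ == "__main__":
    voice = b"Hello?"  # the digitized greeting from step 6
    packets = packetize(voice, source="caller-tower", destination="callee-tower")
    random.shuffle(packets)       # packets take different routes and arrive out of order
    print(reassemble(packets))    # b'Hello?' -- the word emerges intact

Even in this stripped-down form the essential trick is visible: every fragment carries its own address label and sequence number, so the network is free to deliver the pieces out of order and the receiving end can still put the conversation back together.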

Comparing the two descriptions of how phone systems used to work and the way they work now shows that although it is much easier to use a phone than it used to be, the underpinning technology has grown exponentially more complex. What used to be done by a pair of copper wires physically connecting two telephones has been replaced by a mind-boggling array of computers, fiber optic lines, routers, switches, cell towers, RF transmitters, and millions upon millions of lines of computer code.

So it should be obvious now that even the simple telephone has become a metaphor. It acts like an analog phone, but “under the hood” it bears absolutely no resemblance to one. But the metaphor remains because it provides a way for us to interact with an extremely complex system without having to understand or even be aware of what it takes to make it work.

Telephones are an example of a good metaphor that has brought a technology to maturity. Now nearly anyone can pick up a phone nearly anywhere in the world and use it without having to read an instruction manual. This is the result of a history of standardization and of slowly shifting the complexity of the system away from the user.

Desktop computers are (despite what many “experts” say) an infant technology. They are still far too complex and far too fragile to be called a mature technology. In fact, it is a fluke of history that they even exist at all. Before the PC, people used computers through what is commonly referred to as a “dumb terminal”. Although you may refer to your computer as such (and many times probably much worse), the term actually has a legitimate meaning. A dumb terminal was strictly a “window” into the system. It was a simple display device (CRT) and a simple input device (keyboard).

The technology of computers in 1975 was, in my opinion, more mature than it is today. Its capabilities were less, but it fit the model of a maturing technology: the complexity had been moved away from the user and into “mainframe” computers. These resided in highly controlled computer centers that only authorized personnel were allowed into. But using a terminal, a user could connect to a mainframe from practically anywhere and take advantage of its usefulness from afar. If a terminal quit functioning, it was a simple matter to replace it. Just unplug it and plug the new one in. Since it was just a link to the true computer, there was no information stored on the terminal. By nearly all standards, this was a good, mature system that kept the complexities far from the sight or care of the end user.

Then came IBM with the Personal Computer. The original “advantages” of a personal computer were pretty appealing. You could use it without it being connected to a mainframe. You could save files on an internal disk and pull them up later from the same computer or another computer. You could add programs that other people may not have or want. You could even write your own programs and save them on the computer.

But there was a problem with this concept. While the PC appeared to liberate the user…eliminating the need to be always connected to some mainframe somewhere…a huge tradeoff was made. With a PC, the complexity of the system was placed squarely on the user’s desk. Now it wasn’t enough to know how to turn it on and log in. When it didn’t work, you had to be able to fix it, or pay someone else a high hourly rate to fix it for you. If the hard drive crashed, you couldn’t just get a new PC and keep on working, because all of your files and programs were still on the old, broken PC.

Because of the PC, the end user had to become an expert not just on the programs they used, but on the entire system…hardware, operating system, programs, modems, and everything that it took to make them work.

The fact that PCs brought the complexity to the user and dropped it into their laps should have been enough to prevent the PC from ever becoming popular. But that didn’t happen because of a few companies that saw a huge potential for profit in supplying software for these little gremlins. Emerging on top of this push was Microsoft. Through extremely ingenious marketing, being in the right place at the right time, and merciless business tactics, Bill Gates, Steve Ballmer, and a ragtag crew of hippies cobbled together a new metaphor from various borrowed, purchased, and stolen sources. This metaphor was the “desktop.” It has been Microsoft’s domain and source of business and financial power for almost three decades, yet it has matured surprisingly little.

18.10.06

 

Welcome to the Crash

What is Metaphor Crash?

In his essay “In the Beginning... Was the Command Line”, Neal Stephenson introduces the concept of Metaphor Shear with a hypothetical occurrence that nearly all of us can, unfortunately, relate to. He described it as the moment when the document that he had just been working on suddenly disappeared from his computer screen with no warning. What was a few seconds before a very convincing replica of a sheet of paper being typed on was suddenly and forever gone. But did the document really go anywhere? Not really.

What happened was a breakdown of the system that provided the metaphor of a sheet of paper in a typewriter. The user is typically so engrossed in what they are doing, and so completely sold on the metaphor providing the interface, that when this happens it is shocking.

Sadly, the more complex information technology becomes, the more commonplace these sudden crumblings of the fabric we interact with will become. Most everyone who uses a computer as part of their job has come to expect it to periodically freeze, inexplicably close a running application, disconnect from the network, draw strange partial shapes on the screen, and completely crash, forcing a restart of the whole system.

There is a lot of reality in the joke about the four engineers riding through the desert when their car broke down. The first, a mechanical engineer, was busy under the hood looking for a broken part. The electrical engineer was tracing out the wiring to the distributor and the chemical engineer was trying to figure out if they had gotten a bad tank of gas. Suddenly the car started and they all saw the fourth engineer sitting at the wheel.

“What did you do?” they all asked.

“Well, first I shook the steering wheel back and forth. That didn’t work so I pressed all the buttons on the dash. When that didn’t get things going I closed all of the windows and opened them back up. Then it cranked right up.”

“Amazing!” they all exclaimed. “But we are all engineers and we would never have thought to do those things. How did you know what to do?”

“Easy”, he replied. “I am a software engineer”.

The truth in this story is that we have all come to accept performance from software that is far, far below the standards applied to other industries. And as more and more of our critical infrastructure is brought online with computerized controls, what is now a constant annoyance could become a series of software-induced disasters. With software problems, whether intentional or accidental, power plants may shut down, communications systems could crash, and the systems that emergency personnel and even our military depend on could be rendered useless. Unlikely? Not as unlikely as you may think.

When Metaphor Shear moves outside of the little box under your desk called your computer into the realm of power, communications, and defense systems, the stakes are much, much higher. If your computer fails, you may lose a few hours’ worth of work, or even a week’s worth. Or you may not be able to check your email for an extended period of time. Inconvenient, no doubt. But when large, distributed systems begin failing it is much, much more than an inconvenience. It is potentially a large-scale disaster.

This is what I choose to label “Metaphor Crash”. It is when the systems that we depend on to provide critical infrastructure suffer from the kinds of glitches that cause office workers around the world to curse under their breath and have fantasies that involve their computer and a large hammer.

Metaphor Crash is not inevitable, but it is becoming more and more likely. This blog will provide periodic thoughts on what it is, how we got to this point, and how to avoid it as we move into the future.


Metaphor Crash is starting...