"Given the undeniable trend towards all-encompassing change in software development, the case can be made that general purpose software is doomed to always be unreliable and buggy."




"Is this some sort of collective insanity that has somehow woven its way into our society?"

25.7.08

 

In bookstores now! (not really)

As promised, this is the "dust cover summary" of what this blog is about. If this sounds interesting, maybe you will periodically come back and read the new installments. If not, then you are no worse off.

Project Integrity Management
Copyright © 2008, M@

January, 2005. After over four years and half a billion dollars, the FBI software system modernization project codenamed Project Trilogy was officially scrapped with no usable functionality to show. Every year, thousands of corporate software development projects around the world slip into the red, running over budget and over schedule. Many of them are scrapped and written off as complete losses while others are released incomplete and buggy. In the software development world, an on-time, on-budget, fully functional project is the exception, not the norm. Why?

Many will argue that software development is just too complex to be reliably managed. Software is typically not held to the same level of scrutiny that hardware projects are, giving much more responsibility to the developers to "do the right thing" in writing the code. In most cases, there aren't even established standards for how software is developed, tested, or deployed. Is it any wonder that problems are common?

But in "Project Integrity Management", the author makes the case that it is not the lack of standards or the negligence of the developers that leads to software disasters; it is a lack of project integrity. This is a concept that creates a framework for managing the myriad of components that make up a software project. While most projects have tools and procedures in place for managing schedules, budgets, and source code, the large majority of them do not have anything in place to ensure that each change to hardware, software, documentation, or procedures is done in a way that gives full visibility to all levels of the project.

It is the author's point of view that the vast majority of the problems that appear in a development project are the result of subtle changes and differences between the environments used to develop, test, and deploy the applications. In a similar manner, the failure to effectively capture client expectations in requirements and manage changes to those requirements causes most of the resource-sapping rework that projects seem doomed to go through.

When a project is under integrity management, each piece is accounted for and tracked as a small part of a larger unit. Just as individual wires, bolts, panels and other parts make up the engines, dashboards and suspension on a car, each component of an integrity-controlled project makes up the operating systems, databases, applications, and configurations. In the same way, just as engines, dashboards, and suspension make up a completed car, operating systems, databases, applications, and configurations make up environments.

This atomic approach to integrity accounting offers a manageable, easy-to-control framework that ensures that the same code that worked in the development environment will work in the test environments and in production environments. It provides a way to ensure that all parts are known in all environments, and provides a verifiable method of promotion of ALL changes (application code, hardware, operating system, configuration, etc.) from one environment to the next.
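To make the idea of verifiable promotion a little more concrete, here is a minimal sketch of comparing version manifests between two environments. The manifest format, component names, and version numbers are my own illustration, not anything the book prescribes:

```python
# Hypothetical environment manifests: component name -> version.
# Any component whose version differs between environments is a
# potential source of "works in dev, fails in test" surprises.
dev = {"app.dll": 14, "config.xml": 3, "os-patch-level": 7}
test = {"app.dll": 14, "config.xml": 2, "os-patch-level": 7}

def environment_drift(env_a, env_b):
    """Return the components whose versions differ between two environments."""
    all_names = set(env_a) | set(env_b)
    return {name: (env_a.get(name), env_b.get(name))
            for name in all_names
            if env_a.get(name) != env_b.get(name)}

drift = environment_drift(dev, test)
# Only config.xml differs, so that is where to look first.
```

A promotion from dev to test would only be considered verified once this drift report comes back empty.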

This sounds like a complicated thing to do on the surface. The variety and number of components in most projects is overwhelming, and once ALL changes are truly accounted for, they can come in at a furious pace. But by managing every item in a project as both a unique item and as part of a larger item, the entire project can be mapped and managed as a single, large but organized hierarchical structure. That is the power of Integrity Management!
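The "unique item that is also part of a larger item" idea can be sketched with a simple parent-pointer hierarchy. The item names and IDs below are hypothetical, chosen only to show the shape of the structure:

```python
# Each project item is both a unique thing and part of a larger one,
# modeled here with a parent pointer per item (names are illustrative).
items = {
    1: {"name": "Project",     "parent": None},
    2: {"name": "Environment", "parent": 1},
    3: {"name": "Database",    "parent": 2},
    4: {"name": "schema.sql",  "parent": 3},
}

def path_to_root(item_id):
    """Walk parent links to show where an item sits in the hierarchy."""
    path = []
    while item_id is not None:
        path.append(items[item_id]["name"])
        item_id = items[item_id]["parent"]
    return list(reversed(path))

# schema.sql is tracked individually, yet fully located in the project.
location = path_to_root(4)
```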

-M@
 

Oh, the tedium!

I have received comments on my blog that indicate that some readers may doubt my genuineness on the topic of Project Integrity Control. Probably through some fault of my own, some people have gotten the impression that I am holding back to keep people coming back and that there may not even be anything to the whole topic. The fear is, apparently, that they will invest great amounts of their time reading installments of something that turns out to be impractical or just a rehash of old ideas. My posts have been called "tedious", which I can only take to mean "too long" or "too detailed". But unlike many things out there that are routinely blogged about (sports, politics, games), this is not a simple topic that can be covered in a few paragraphs and it takes a good understanding of the structure to realize the benefits of it. Maybe I am too detailed in my descriptions, but then again, I want to make sure that what I am explaining is what you are getting.

Still, I could be wrong, and how would you know? You could just set a reminder in your datebook to come back in six months and take it all in at once, but if a person considers the 1-2 page articles tedious on their own, I doubt they will have the time or patience to get caught up all at once. Instead they will likely skip to the newest posts and then start arguing about things that they missed by not reading the previous ones. This is like complaining that an author is too tedious in a technical book and he should just summarize everything. Almost all of them do this, though; it is called the last chapter. Unfortunately, without having read the previous 20 or 30 chapters, while it may be nice and summarized, you won't understand most of what it is talking about and will then have to read the "tedious" parts to gain that understanding.

I also realize that blogs aren't books and that the format that works for one may not necessarily work for the other, but then again, I am not trying to get published...I already am published just by putting it in a blog. I am not trying to get traffic to support myself through banner ads and referrals. I have none. I am simply trying to explain something that I find very interesting in a way that will stick and make sense.

Still, I can sympathize with the point of view that maybe this is all hype and in the end, there will be nothing worthwhile. I have a funny name for that situation: life. But since I am comparing these articles to a traditional book (as opposed to a magazine article which typically wraps everything up in one installment), I will concede that books have something that helps with this problem that, so far, this blog does not: a dust cover summary.

I could argue the merits of doing such a thing on a blog...after all, the dust cover summary is a marketing tool, and I have nothing here to market other than an idea which the reader can take or leave. But since I know that there is more to this series than dragging you on with the hope of some good information in the end, I will shed some light on where I am going. I will write the equivalent of a dust cover summary for the concepts I am building here. This will give you the opportunity to decide whether or not I am nuts in one single reading instead of having to try to determine it over time. Then you can happily hang around and read them as I write them, or you can go somewhere less nutty and read other things with your time. Fair enough?

M@

23.7.08

 

Sharpening the Focus on ICIs

In order to show how mapping each item in a project to an ICI can make managing the project easier, it is necessary to define some type of structure that we can talk about. In this installment I will talk about the structure of an ICI and why it is so flexible.

As I have already pointed out, the first thing we know about an ICI is that it has a unique identifier. This can be in any form that you have a way of guaranteeing is unique. If you are a database-knowledgeable person, this would be equivalent to a unique key in a table. It must be entirely unique within a project. For the sake of simplicity, we will use an incrementing number for our examples, although in a real life situation I would opt for a Universally Unique Identifier (UUID) to guarantee uniqueness across projects in different databases. In addition to a unique identifier, what else do we need to manage an ICI? If you remember, we also talked about the need in IM to version everything. That implies that an ICI should have a version associated with it. It should. Each time the item that the ICI represents changes, the version should be incremented.

So far we have the following:

ID (unique identifier)
Version ID (incremented)

This is a good start, but that can't be all. I said that each ICI represents an item in the project. How do we know what that item is? Is it a .dll file? A configuration file? A SQL script? A user document? It seems that a good thing to know is what we are dealing with. Maybe I don't want to see all ICIs, only the ones that are user documents. Storing an indicator of what the underlying item is would be helpful in determining which ICIs I wanted to work with, so we should add an ItemType to our ICI.
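Putting the three pieces discussed so far together, a minimal ICI might look like the following sketch. The field names and the counter-based ID are my own illustration (as noted above, a real system would more likely use a UUID):

```python
import itertools

# A minimal ICI sketch with the three fields discussed so far.
# An incrementing counter keeps the example readable; in real life,
# uuid.uuid4() would guarantee uniqueness across databases.
_next_id = itertools.count(1)

class ICI:
    def __init__(self, item_type):
        self.id = next(_next_id)      # unique within the project
        self.version = 1              # incremented on every change
        self.item_type = item_type    # e.g. "dll", "config", "document"

    def bump_version(self):
        self.version += 1

doc = ICI("document")
doc.bump_version()   # the underlying document was edited
```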

So far, so good. We can uniquely identify this item, we can find out what version it is at, and we can get a general idea of what it is. But what if I want to actually get a list of the documents, not just a bunch of unique IDs? In this case, we would want to add a Source File value to it. But wait...what if I want to only see the user documents that are for the system administrator? Should I add an "intended audience" value also? That seems to make sense unless you consider that we could be talking about a .dll file instead of a document. Now "intended audience" makes no sense at all. We seem to be headed down a dead-end road.

If you take this discussion to its logical end, you will realize that everything that could possibly be tracked about ANY item in your project will need some place to put data. This little ICI is quickly starting to turn into a massive data structure with potentially hundreds of data items. It is a database designer's nightmare! What to do?

If you think back to the last discussion, I talked a little about the powers of abstraction. By ignoring some things, or looking at them at a higher level, you can do some pretty amazing tricks that are not immediately obvious at the lower level. In some cases they are downright impossible. But if we reframe the problem, a solution will emerge soon enough. Let's pretend for a moment that we are not talking about a project, but a retail order. An order can have a single item, or several items. Come to think of it, an order can have an unlimited number of items. How is this normally handled? Skipping over the technical explanation, the answer is that at the order level, all items, no matter what they are, are handled as "order items". Like ICIs, they can represent anything in inventory. With this approach, we don't have to worry about whether or not an individual item has a particular value because it is still an order item and can be treated like all others. While they may have some common pieces of information (stock number, quantity, price, etc.), the products that these pieces of information represent are really not very much alike.

Coming back to ICIs, we identified that there are several things that are common between all ICIs, but potentially many more that are not. The way to handle this seemingly impossible situation is to abstract this never-ending pile of possible data items down to a single thing: a Property.

Let's take a look at what that does for us. For our purposes, a property is defined as "a piece of information relevant to an ICI". It is obvious that we must be able to associate multiple properties with a single ICI, but once we come to that conclusion, the vexing problem that we were facing melts away. Now a .dll file can have certain properties (file name, size, install folder, etc.) while a web server can have others (manufacturer, model, processor(s), RAM, etc.). By breaking properties apart from the ICIs that they are associated with, we have solved the problem of how to anticipate all possible pieces of information to be tracked. The solution is that new properties are created as they are needed.

This solution, however, leaves a problem to be solved. On one side of the room we have ICIs, and on the other side of the room we have a big pile of properties. How do we know which properties go with which items? This is a fairly easy problem to solve logically. With retail orders, each item has an associated "order number". The order number is unique per order (just as an ICI ID is unique per ICI). When an item is added to an order, a line item record is created that contains both the order number and the product code for the item ordered, as well as other pieces of information. Now we have a link that ties our item ordered to a particular order. To reconstruct the order, all we have to do is retrieve all line items that match the order number.

This is similar to what is done with an ICI. As properties are added for an ICI, they store not only the property data, but also the unique ICI ID.

A simple puzzle. You have a room full of kids and no information about who they are. You don't even know their names. You also have a list of parent names. How do you figure out who each child's parent is? The answer is pretty obvious; you ask the kids. The point is, the kids know who their parents are, so you don't need any other method of figuring it out. In fact, you don't even need to know anything about the child; you just need to ask what their mom or dad's name is and find them on the list.

Similarly, in the ICI scheme we are discussing, it is very easy to figure out which property goes with which ICI because each property "knows" which ICI it belongs to, just as the order items "know" which order they belong to through the order number "link" that each one contains. In database terms this is a foreign key, or parent-child, relationship, and it is the standard method for linking a single item to multiple sub-items.

Armed with our new technique of defining properties as they are needed and linking them to an ICI, we can now put the problem aside of figuring out what an ICI looks like. It is sufficient to say that an ICI has a unique ID and one or more linked properties. Problem solved! We have come full circle and in the end, decided that the only thing we need in an ICI is a unique identifier value. Everything else can be linked to it as we need to add information. We will leave the actual "how" of linking properties to ICIs for a later discussion. Just trust me that it can be and is done every day in software that you probably use.
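The order/line-item style of linking described above can be sketched in a few lines. The ICI IDs, property names, and values here are made up purely for the example:

```python
# ICIs on one side of the room, properties on the other, joined by
# the ICI's unique ID -- the same pattern as order line items
# carrying their order number. (Layout and names are illustrative.)
icis = [{"id": 101}, {"id": 102}]

properties = [
    {"ici_id": 101, "name": "file_name",      "value": "report.dll"},
    {"ici_id": 101, "name": "install_folder", "value": "C:\\app\\bin"},
    {"ici_id": 102, "name": "manufacturer",   "value": "Acme"},
    {"ici_id": 102, "name": "ram_gb",         "value": 16},
]

def properties_of(ici_id):
    """Reconstruct an ICI's properties the way line items rebuild an order."""
    return {p["name"]: p["value"] for p in properties if p["ici_id"] == ici_id}

dll_props = properties_of(101)
```

Note that nothing about the ICI itself had to change to support a new kind of property; we simply added rows to the property pile.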

There seems to be an axiom emerging as we continue to discuss the methods we can use to create simplicity from complexity and order from chaos. You will see this theme emerging time and time again as we progress because it is a very simple and powerful technique. What it boils down to is "If you don't know the answer, try changing the question." Not something you are likely to get away with in other parts of life unless you are a politician, lawyer, or philosopher, but it works in Integrity Management quite well.

Now that we are mentally at peace with our ability to forge ahead without having to know and account for everything we may encounter along the way, we can start looking at ways to leverage this newfound power to make the job of creating software easier and more reliable.
 

ICI Basic Training

Last time I talked about how every part of a project must be uniquely identifiable and explained how the Integrity Control Item (ICI) can be used to perform this task. What I didn't really explain was what an ICI looks like, where it lives, and what it can do other than be unique. While being unique is a full time job for many celebrities, it is not extremely useful on its own. If we are going to go through the trouble of giving something a name, it should do a little more than sit around being unique to earn it! The ICI does. It should provide some practical and useful information or functionality.

First, let's talk about the structure of an ICI. The question of "what does an ICI look like?" is comparable to the age-old mystery of "how long is a piece of string?" In the case of the string, it is as long as it is, and in the case of the ICI, it looks like it looks. So much for practical, useful information! The reason for giving this apparently useless description (other than wanting to use the string question) is to drive home the point that the ICI is a concept, not a thing. Like a color, it can be useful, but it is hard to pick one up and put it in your pocket (no, that is a crayon you are thinking of...only school kids call them "colors"!). This is because as useful as colors are, they are not things. We can speak of them (I have on my blue shirt today) and use them to convey information (the blue lights in my rear view mirror), but colors just cannot exist on their own. Instead, they describe other things that already exist. We can even group things logically by color that have no other common properties (make a list of the blue items you see in the room). In many ways, this is what ICIs do. They provide a convenient way to talk about and manage things that may not be very alike otherwise.

In the software world, as in many other disciplines, we say that the ICI functions as an abstraction of the various pieces that make up a project. It gives us a way to perform the same actions on very different things that would otherwise be difficult or impossible. For example, your project may contain a configuration file and a packaged database system. Other than being file-based, these items have very little in common. If they were people, they would stand on opposite sides of the room at parties. But what if the hypothetical people are both huge Dolphins fans? Now they have something in common that they can both relate to. While the conversation is on politics, they don't talk; DBMS is a Republican and the configuration file is a Democrat. When the conversation turns to movies, they still have no common ground; DBMS is an action movie fan while CF only likes documentaries. But when the subject turns to football, suddenly they are both interested and you can't get them to shut up. That is, until DBMS's wife, Web Server, whispers that he should talk to some people of his own kind for a while...then they again drift off to opposite sides of the room. Pretty dramatic, huh?

The purpose of an ICI is to provide that common ground between all of the different pieces of your project. Like Army soldiers' proclamation of "I'm not black or white, I am green" and the even broader "Soldier First" motto, ICIs ignore the differences between the pieces and leverage the commonalities. No matter what a soldier's job is, whether field cook, generator repairman, or helicopter pilot, you can assume several things about him without even knowing him. First, he knows how to fire and field strip an M-16. He knows how to throw a grenade without killing himself, and he will put his life on the line to defend his flag. These are things that are common to everyone who wears the Army uniform. They may have different tastes in music, different jobs, different religions, and different political views, but when an officer barks out "Attention!" they will all snap to attention in the exact same manner.

In the same way the military trains each of its members with common skills and responsibilities, ICIs define things about any item in a project that can apply to every single one, no matter what it is.

This may sound like some kind of shell game of pretending that things are the same when they aren't, but it isn't at all. The key to understanding how to implement ICIs is to understand that nobody is born with the reflex to snap to attention when an officer yells "Atten-hun!" (Anyone who thinks they yell "ten-hut" has never actually been in the Army). This is an artificial behavior that is introduced in their training to allow leaders to control them as a single unit when needed. Using the techniques collectively known as "drill", a leader can move 1000 soldiers with the same effort it takes to move 10. This is significant because anyone who has ever been in a crowd of 1000 or more people trying to move from one place to another will have the word "mob" in their mind. With so many different people with so many different ideas about how to get from point A to point B, the resulting chaos is typically extremely inefficient. By introducing some artificial behaviors into this crowd, they can be treated as a single unit, moving in step and in perfectly straight columns, turning, stopping and even saluting in perfect order. This is the power of abstracted behaviors.

It is important to note that introducing these standardized behaviors does not in any way diminish the individual's ability to do their job or move about on their own when coordinated movement is not required. The same soldier who marches in perfect synchronization with 999 of his fellow troops can, in the next moment, take up a lone position and effectively protect it without any further orders from his commander. He can even be left at this post for days and will perform his job as required with no further intervention. But when he rejoins his platoon and the "FALL IN!" order is given, he will be just as efficient in moving with the others as he was before. Those who have participated in marching bands will recognize the parallels between ICIs and marching drill as well. Each member has their own part to play (literally), but it takes a common set of drill steps and maneuvers to control the overall musical production.

The purpose of an ICI then is, on an abstract level, the same as the purpose of marching drill. It defines a set of characteristics that can be relied on no matter what the underlying item it represents is. For those familiar with object oriented programming, an ICI could be considered an instance of an IC class. They all have the same structure but represent different distinct items.
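For readers who like the object-oriented framing, here is a sketch of "many different items, one common drill": every ICI instance answers the same calls regardless of what it represents. The item types, fields, and descriptions are hypothetical:

```python
# One class, many very different underlying items -- the common
# "drill" the analogy describes. All names here are illustrative.
class ICI:
    def __init__(self, ici_id, item_type, description):
        self.ici_id = ici_id
        self.item_type = item_type
        self.description = description

    def summary(self):
        # Every ICI answers this the same way, whatever it represents.
        return f"ICI {self.ici_id} [{self.item_type}]: {self.description}"

project = [
    ICI(1, "config",    "app.config for the web tier"),
    ICI(2, "hardware",  "database server DB01"),
    ICI(3, "procedure", "nightly backup task"),
]

# The same operation applies uniformly to a config file, a server,
# and a maintenance procedure -- no special cases required.
report = [item.summary() for item in project]
```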

Once we have established what the structure of our ICI is, we can then "map" it to any item in our project whether it is hardware, software, documentation, or even other abstract things such as routine maintenance tasks and procedures.

Now that you understand the concept of what an ICI is and how it is used, next time I will talk about how bringing an entire project under the collective control of ICIs can impressively simplify some of the most complex tasks that face projects as they move through the development, testing, and deployment phases. I hope you will join me.

-M@

18.7.08

 

A huge floating mass of metal or a collection of units?

In my last article I talked about the need for a unique identifier for each Integrity Control Item (ICI). I am going to expand on that concept this time around. If you end up following this series all the way through, when you look back you will see that this unique identification is so critical to getting control of your project that without it, there isn't much chance of pulling it off successfully. It will, one day, seem obvious.

The Integrity Control Item is a very basic unit. It is used to represent the smallest practical unit in an Integrity Controlled project. What is meant by "smallest practical unit"? The way I like to think of it is to compare pieces to atoms. That probably sounds a little off the wall, but having a high interest in Physics, that comparison just makes the most sense. Let me explain.

Everything in the universe is made of something. Well, in theory anyway. I have a personal theory that now that physicists have followed the route of existence down from "rock" to "mineral" to "element" to "molecule" to "atom" to "electrons, protons and neutrons" to "quarks" to "sub-quarks and gluons", we are in grave danger. I can just imagine the fateful day that some physicist in a high energy lab somewhere makes the discovery that the entire universe and everything in it is made up of absolutely nothing. Suddenly everything disappears, leaving the limitless vacuum that everyone seems to agree once existed. Sounds like a Douglas Adams story.

Like physical matter, everything in a project, no matter how complex that project is, is made up of something. The closer to that something you get, the less the things differ. A ship is a very complex system, but if you divide it into hull, deck and superstructure, it can be logically separated. Designating decks, compartments, on down to individual pieces of equipment gives an even more detailed view. Now we are standing on the deck of a ship looking at a single bolt. Is that the smallest practical part of a ship? I would say it is one of many. In fact, if you think about what we just did, something interesting is going on. Call it Conservation of Complexity.

We started out imagining a ship. It is a single item. If you saw it coming into port, you would not tell someone "wow, look at that massive collection of steel plates, bolts, pipes, wires and paint!" You would more likely say "wow, look at that huge ship! How in the world does that thing float anyway?" But that is a different discussion. The point is that the ship you see pulling into port represents a single item, but an extremely complex one. Since the human mind is only capable of focusing on one or two items at a time, we see a ship, not all of the individual parts. But something weird happens. Once you start looking at it as a collection of parts, the overall number of items skyrockets. Instead of a ship, you have a hull, deck, and superstructure. And within that superstructure you have windows, doorways, passages, rooms, radar dishes, radio antennas, etc. If you were to examine a single room, you would find innumerable parts make it up: steel plates, beams, insulation, pipes, wires, switches, valves, and fixtures. If you were to examine the sink in a room, you would find that it contains a myriad of parts as well: brackets, pipes, valves, screws, gaskets, etc. Examining the drain pipe would reveal individual sections of curved pipe to make up the trap, nuts to hold them together, gaskets to prevent leaks, a ball valve to close the drain, linkage to the drain stopper lever, and on and on. But notice that although the number of things in the ship has grown exponentially as we examine them at close range, their complexity has decreased. A single room is much less complex than a complete deck. A sink is simpler than a room, and a drain is simpler than the whole sink.

Yes, that is obvious, so what? Why do I care?

Because ships offer a very good model for software projects. Like projects, they are extremely complex, and they require many people with many different skills working together in a very organized manner before one can slide into the water after having a bottle of bubbly broken over its bow. The effort required to plan the designs, procure the materials, control the order that things are built, make sure that steam lines are not run next to fuel tanks, and handle the millions of other details that go into building a ship is at least as great as the most complex software project. Yet ships do, generally, get built, and most of them don't just sink when they are launched. This fact offers hope to those of us who feel that software is just too complex to do correctly. So are ships, but they get built.

So how do shipbuilders handle the overwhelming complexity of their trade? In short, they choose simplicity over component size. They design things from the top down, but they keep working it until they get down to a "lowest unit" level. They may be building a hull, but they do it in small, job-sized pieces. If a group of engineers got together one Monday morning to start developing a new ship design and, for whatever reason, decided that they would start with the sinks in the living compartments and work up to the overall design, how long do you think that would take? How correct do you think the designs would be? I think they would be very wrong indeed. Unfortunately, since the design would run from the lowest level to the highest, they would not discover until far too late that they had neglected to consider where the water for the sink was going to come from, or that the drain emptied into the fresh water tank below. In order to fix these problems, they would have to tear out huge sections of their work and constantly rework it. Sound familiar?

Not that these things don't happen. My brother works for a major US shipbuilder as a welder. He told me an amazing story one day that illustrates that sometimes even shipbuilders get it wrong. He was working diligently to finish up a welding project on a wall inside the belly of a new ship. He noticed as he was working that another person was setting up a cutting torch and then just waiting. "Am I in your way?" my brother finally asked him. "Nope, go ahead," was the answer. Half an hour later, the same guy was still standing there waiting. Finally it began to bug my brother, and he asked again, "Are you sure I'm not in your way?" "Nope, I can't start until you finish and get your work signed off," was the reply. "Oh, OK. What are you going to be doing?" Imagine the dismay when my brother heard his answer: "Cut out that wall you are welding. We have a design change and that wall is moving." This may seem outrageous...OK, it IS outrageous, but if you really think about it, you have probably seen some comparable things happen in software projects.

Circling back around to Integrity Control Items, software projects are very complex, but one way of getting a handle on that complexity is to take a tip from shipbuilders and track each part individually. This is what ICIs do for us in Integrity Management. They define a single unit (for example, a .dll file or a file server) and allow it to be treated, for the most part, exactly like any other unit. This greatly simplifies the day-to-day things that have to be tracked, but it also creates a large amount of data to be tracked. So, as was hinted at earlier, we have traded complex large items for simple, numerous small items. Like the cans of corn and bottle of salad dressing with UPC codes mentioned in an earlier article, we can now operate on these very different items with the exact same logic.

Hopefully this has gotten you thinking a little about how complexity can be conquered by decomposing the complex items into many simple items. Next time I will dive a little further into how this is actually done in the real world.

-M@

17.7.08

 

Digging in to Project Integrity Management

What is "Project Integrity Management"? How is it different from Configuration Management? How does it make it any easier to prevent the dreaded project meltdown? I'm glad you asked.

As mentioned in a previous article, Configuration Management attempts to control elements of a project, specifically controlling and managing the constant change of information associated with the project. The amount of information to be tracked can be overwhelming, and the task soon becomes too complex to be performed manually. This leads most projects to seek a "toolset" to manage their configuration for them. While many commercial and some open source toolsets exist today, they all seem to be solving different problems. This is most likely due to the variations in different people's understanding of what Configuration Management consists of.

What many projects call CM is actually not configuration management at all, but simply source code control. For this task, there are a number of software packages available that provide varying degrees of functionality and frustration. But source code control is not configuration management. I would even go so far as to say that it is not even part of the configuration management process, but I will save that argument for a later time when I feel like starting a flame war.

At this point you may be thinking "this guy is either crazy or high"! I assure you that I am neither. Read on.

In the interest of avoiding splitting hairs on what Configuration Management is or is not, I have opted to create a new term altogether that will represent the concepts that I feel are the ones that need to be defined in order to get a handle on the mess that many of us know as the "development project". This term is Project Integrity Management and although it is similar in some ways to Configuration Management, the two are not interchangeable, at least not with the definitions I have seen of CM. So from here on out we will be speaking of IM instead of CM unless I am referring specifically to a Configuration Management concept.

First of all, IM is not source code control. This has been done many times over and, although I don't think that the approaches that are currently being used are great, I have nothing to add to that debate. It isn't that I don't think source control is important, I just think that the process is well understood and we should focus on some of the more vexing problems that have not been so thoroughly studied and developed for.

At the highest level, IM consists of the two following areas:

Versioning
Environment Management

These two concepts represent everything needed to completely track and control a project's variables. As you may have guessed, there are many lower levels to these concepts which we will explore, but first I would like to describe what is meant by these two terms.

Versioning is the most important concept in IM. Without versioning, the rest is impossible. In IM, everything is versioned. If it is not versionable, we are not interested in it, because the definition of "versionable" is "possible to change". If something is impossible to change throughout the life of a project, then why would we care about it? It is in an eternally known state, and tracking it would be silly. What would we track? The catch in this definition is that the very important word "possible" is a wide-open one. For example, you may assume that the operating system being developed against is not going to change during the project, but you would likely be wrong. Service packs change operating systems all the time. Servers can crash and be replaced with different machines. Documents can be edited, corrected, and revised. The upshot of this is that pretty much everything associated with a development project can change. Nothing is "set in stone". What that means in IM is that everything must be versioned. Yes, that includes operating systems, servers, and documents. If it can change and it is part of a project (i.e. part of a deliverable), it must be versioned. We will explore the finer points of versioning soon, but for now this short discussion will serve our purposes.
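For the code-minded, here is one way to picture it (a sketch under my own assumptions, not a prescribed implementation): anything versionable just carries an append-only history of its states, and a service pack or a replaced server is simply another entry in that history.

```python
# Sketch: everything versionable keeps an append-only history of states.
class VersionedItem:
    def __init__(self, name, initial_state):
        self.name = name
        self.history = [initial_state]   # version 1 lives at index 0

    def change(self, new_state):
        """Any change -- a service pack, a swapped-out server, an edited
        document -- is recorded as a new version, never an overwrite."""
        self.history.append(new_state)

    @property
    def version(self):
        return len(self.history)

    def state_at(self, version):
        return self.history[version - 1]

# Even the operating system is versioned: a service pack changes it.
os_item = VersionedItem("Windows Server 2003", "RTM")
os_item.change("SP1")
print(os_item.version)       # 2
print(os_item.state_at(1))   # RTM
```

Nothing is overwritten, so the project's state at any earlier version can always be reconstructed.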

The other high level concept in IM is Environment Management. This may or may not be a new term to you; I didn't make this one up! But I am going to heavily redefine it to suit IM's purposes. What I am referring to when I mention Environment Management is the control of all versioned items in each project environment. Examples of environments in this context are "development", "test", "training", and "production". However, as you will find out, this is not to say that there is only one development or test environment. In most projects there are several. The IM approach to Environment Management is one of "Just In Time" environment creation and destruction. Environments are like Krispy Kreme doughnuts. As soon as they are created, they start going stale. I will also provide a detailed approach to implementing this JIT environment control plan.
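To sketch the idea in code (again, my own illustrative names, not a finished design): an environment is nothing more than a named mapping of items to versions, stood up on demand from an explicit list and torn down before it can go stale.

```python
# Sketch: an environment is just a named mapping of item -> version,
# created just-in-time and destroyed rather than left to go stale.
class EnvironmentManager:
    def __init__(self):
        self.environments = {}

    def create(self, name, item_versions):
        """Stand up an environment from an explicit list of versions."""
        self.environments[name] = dict(item_versions)

    def destroy(self, name):
        """Tear it down as soon as it has served its purpose."""
        del self.environments[name]

mgr = EnvironmentManager()
mgr.create("test-build-42", {"orders.dll": "1.4.2", "os": "2003 SP1"})
assert "test-build-42" in mgr.environments
mgr.destroy("test-build-42")   # the doughnut is eaten before it goes stale
```

Because each environment is built from an explicit version list, two "test" environments can coexist without anyone guessing what is inside either one.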

Those are the high level concepts. On their own they are not very useful though. In fact, at this point, they may not even be comprehensible to any meaningful degree. But if you could see the picture in my head (which I hope to bring you around to doing), you would see that not only is it comprehensible, but very elegant and flexible. Now that we have gotten the mile-high view, I want to jump to the other end of the spectrum and give you a peek into how this can be pulled off. But first, a quick trip to the store.

Some of you older folks may remember a time when grocery stores put price stickers on everything. Each can of green beans or salad dressing had a white or orange sticker on it with a price. No barcode, just a simple number like "$1.29". When you checked out, there was a cashier who could (brace yourself) actually type numbers into a cash register. In fact, she could typically type them very fast with one hand and move the groceries to the bagging area with the other.

One day I noticed that some things that I bought at the grocery store had a new "built in" tag with a lot of lines on it. It seemed completely useless to me because the only thing on it besides the row of lines was a seemingly random series of numbers. I eventually learned that these were called "UPC" or Universal Product Code tags. Since the tags were implemented long before most grocery stores received and installed the cash registers that could read them, they seemed rather pointless to most people. But soon all of that changed. Because each product carried a unique code, and that code was the same for that product whether it was on a store shelf in Akron, Ohio, or Pascagoula, Mississippi, products could be paid for, inventoried, and ordered all using the UPC number. In fact, those processes eventually became completely automated. Modern checkouts at large chain stores such as Wal-Mart or Target read the UPC code, look up the price, subtract a single item of the matching type from inventory, and if the supply is getting low, automatically order new stock. There are currently some experimental stores that constantly monitor supply and demand and adjust the price of each item in the store based on these real-time statistics!

It is difficult to overstate the far reaching impact the simple UPC has had on the retail industry. It has enabled stores with millions of items in stock to instantly know how many of each item are on the shelf, how many are on order, and how many were sold in the past hour.

That is all very interesting, but what has it got to do with what we are talking about? I am about to tell you!

Before the days of UPC, the task of pricing, inventorying, and ordering stock was very time consuming. Since the cashier only entered the price that was paid for an item (not what the item was), there was no way to know how much the inventory of a particular item had been depleted other than manually counting items or, more commonly, "eyeball ordering": a department manager ordered new stock when the stock on the shelf looked low. In many ways, the problems they faced had a lot in common with the problems we now face on software projects. We have a huge number of individual pieces which must be assembled in just the right way to work. To make things worse, they are always changing. It would be as if there were gremlins in the store that constantly rearranged the shelves when nobody was looking and mixed different items up. Actually, I think those really did exist. They were called kids. I know because I mixed up a few shelves myself. With this overwhelming amount of information to track, and with many, many items changing all the time, something is needed to give us the same kind of control over project pieces that UPC codes gave stores over inventory. We need a UPC for our project pieces.

Enter the Integrity Control Item. This is a concept that treats anything that is versioned (i.e. everything we care about) as a unique item. Similar to the way a can of Green Giant whole kernel corn has a code that is unique throughout the food supply system, our ICI is unique throughout our project. Actually, properly implemented, it is globally unique even between projects, but more on that later. For now it is sufficient to say that an ICI is unique in a given project. In our examples, I will just use a numeric value like "324398675" to represent an ICI.

So what does that do for us? Great. Now instead of worrying about the Oracle Client drivers, I can worry about keeping up with ICI #324398675 and its thirty-eight thousand brothers. Yea, buddy...that sure made my life easier! But wait...there's more!

If the only thing we got out of an ICI was a hard-to-remember number, it would be no better than a cell phone, but that is not the case. When a grocery store orders something by UPC code, the shipment that arrives carries additional information along with the code. For example, it may have the manufacturer, the product name, the quantity, and the country of origin. You don't need to know the country of origin to order the corn; you just need the UPC code. But if you need to know what that UPC represents, a large amount of information about it is available in the inventory control system. Similarly, if we want to automatically check that an ICI is present in a particular release, we don't need to know what it is or what its characteristics are ahead of time; those can be looked up when they are needed and ignored when they are not. There are many, many other advantages to tracking everything in your project with an ICI which will become more obvious as we dig deeper into this process, but for now I will let you ponder these ideas and try to shoot holes in them.
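In code, the UPC analogy suggests a very simple shape (hypothetical names, sketched only for illustration): release checks compare opaque ICI numbers with plain set arithmetic, and the descriptive details live in a registry that we consult only when someone actually asks.

```python
# Sketch: checking a release needs only opaque ICI numbers; descriptive
# metadata lives in a registry, looked up on demand (like a UPC lookup).
registry = {
    "324398675": {"name": "Oracle Client drivers", "kind": "software"},
    "324398676": {"name": "fileserver01", "kind": "hardware"},
}

release_manifest = {"324398675", "324398676"}  # what the release requires
deployed = {"324398675"}                       # what is actually present

missing = release_manifest - deployed    # pure set math, no metadata needed
for ici in sorted(missing):
    info = registry[ici]                 # details only when we ask for them
    print(f"Missing from release: {ici} ({info['name']})")
```

Notice that the comparison itself never touches the registry; the opaque numbers do all the work, exactly as a scanner does not care what a UPC stands for.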

Next time I will discuss how versioning works with an ICI. I hope you come back to read about it.

-M@

16.7.08

 

Why creating software really is like building cars

I would like to start by admitting that I am cursed. (SPOILER WARNING: if you are considering hiring me, please stop reading now!) I am an albatross to companies that hire me. It is not that I sabotage them or fail to do my job, it is just that I have an uncanny ability to take jobs for companies that are starting to unravel. For whatever reasons, I have had a really bad run of projects in my career as a developer. It would be difficult to go into detail without naming particular companies that most people in this industry would recognize or may have worked for, but suffice it to say that I have witnessed not only the crumbling and collapse of many large projects firsthand, but I have also been around when the pink slips started going out, the servers and network equipment were boxed up and sold off, and the office spaces were put up for rent. Through all of these tragic projects I have been baffled by the inevitable decline, unraveling, then total collapse of projects under their own weight.

Certainly a lot of the problems I have witnessed have stemmed from poor management in the form of lack of market research, grossly underestimated capital requirements, reckless spending, questionable accounting practices, over-reliance on new technologies, and misreading of the bones. In some of the cases, the project was dragged down as part of a larger collapse of the entire company, and it would be inaccurate to blame those failures on the project itself, although I am quite sure that if the companies' problems hadn't killed them, the projects themselves would have self-destructed further down the road. I know that some people reading this are puzzled by my eternal pessimistic view of software projects because they have had the extreme fortune of working on projects that were well planned, well managed, and held to the high standards that any complex project should be. To these people, I say "count your blessings!"

This particularly bad run of luck with projects has produced a feeling that the software development world is a wild west "free for all" and that even the largest companies seem to be incapable of managing a software project to the same level of professionalism with which they would manage other large projects. This is what has led me to make the much criticized comparison of software engineering projects to other engineering projects. I have received a lot of comments that it is a brainless comparison because building software isn't the same as building airplanes, cars, or bridges. There is some truth to those criticisms, but I fear the folks making them are missing my point. My point isn't that they aren't different; of course they are! My point is that we seem to be making little or no progress towards stabilization of software projects.

The main argument I have seen for why everyday software is so buggy is the one of costs and market forces. The argument goes "yes, you could have solid software. NASA has shown that. But the price of such software would be astronomical (pun intended) and nobody would be able to afford it." This seems like a legitimate argument on the surface, but I think it is flawed for several reasons.

First, software is a weird creature when treated as a product. It is generally sold the same way computers or keyboards or cameras are...as a product. But in reality, it is more of a service. A group of developers has provided the service of codifying complex business or other types of processes into a set of rules that can be applied by anyone with general knowledge of the business and basic computer skills.

Don't believe me? Then think about this: what are you buying when you spend $499 for a piece of software (or 499K for that matter)? Surely you aren't paying that amount for the actual disc. Even if it has a printed manual (yes, they used to do that!), the actual value of a couple of CDs or a DVD and a book is somewhere below $10. Some will say that you are paying for the privilege of using software, not the media or the software itself, and they are closer to the truth. When you purchase software, you don't own it any more than you own the songs that you download from iTunes. You are purchasing the right to use it. But why would anyone part with their hard earned money for the right to use something without owning it unless they got value from using it? They wouldn't. It is a given that if someone purchases something, it has a value to them. In music that value is the pleasure given by hearing that "YMCA!" remix and with software that value is whatever it is that the software does, makes easier, or makes possible that otherwise wouldn't be. You are paying for the service that the company that produces it provided up front in automating some difficult task.

I am going down this path to make a point. The common argument I see is that NASA produces reliable software and it costs millions of dollars. Who could afford that level of reliability other than an agency with deep tax pockets? The common computer user certainly couldn't. But what this argument misses is that NASA is pretty much their only customer. By the same logic, if a car can only be made reliable by investing millions of dollars in research and development, then a company that builds just a single car (as in one physical car, not one model) would have to charge millions for it just to break even. By this argument which is commonly given to justify sloppy software, nobody but the most wealthy could afford a reliable car. But that is not the case because this is not what car manufacturers do. They invest millions in making a reliable design, that is true. But then they reproduce that design at a minute fraction of the original cost. That design, reproduced tens of thousands of times, is then shipped to dealers who can afford to charge $25K for that same car that took millions to design and produce and still make enough commission to take a week long vacation to the Bahamas. In other words, the cost of producing a reliable car CAN be compared to the cost of producing reliable software if both have a large customer base. It is a valid comparison.

Once this point is understood, it stands to reason that software should actually be even easier to produce both reliably and cheaply than cars or airplanes. After all, each car produced contains several thousand dollars worth of raw materials and requires an appreciable amount of labor to produce. In addition to that, there are shipping, storage, and depreciation costs to consider. Software, on the other hand, is an all up-front prospect. Once the original, fully operational version of it is produced, the cost of producing another copy is negligible. So is the cost of producing the next copy, and the next, and the next million. The only raw materials needed are a couple of CDs or, more commonly, bandwidth on a network for providing downloadable copies of the software.

In this way software is more like a book. The original takes the author many months to prepare, but once it is complete, little or no further work is needed to regenerate it. If it is an eBook, no effort at all is required. Another download is just another opportunity to make money from the original. The risk in this model is that the author will spend months of his life writing a book that nobody will pay for. If this turns out to be the case and the author has figured his time as worth a certain amount per hour, then it is all in vain and he has lost a lot of time and income. But if he suddenly got a buyer for the first copy of his book from an individual, he would have to charge tens of thousands of dollars to that one customer to just break even. We all know that isn't the way it works though. A book provides value, but not that much value! The market will support a certain price level and that is what is charged for a book. If enough books are sold, the author makes money. If not, it is a wash and he takes some level of loss. This is a system that has worked for many years and has rewarded authors who provide relevant, quality work with a proportional reward and rewarded authors who produce poor quality work (in respect to value to customers) with no reward.

My assertion is that reliable software CAN be produced at a market sustainable price, similar to the example of writing a book. I know that I will get some arguments that somehow comparing creating software to writing books is invalid because (whatever), but that is the problem with comparisons. They are ALWAYS invalid at some level, otherwise they would just be restatements.

The problems I have seen seem to come in with custom development projects. They usually fall in the category of "one customer who must bear all of the cost". These projects lack the economy of scale that mass produced products enjoy and are like NASA and FAA projects in this respect. One client must bear the full cost and the argument can be made that, faced with the enormous cost of producing extremely high quality software, most companies settle for "good enough" software for a much lower cost. There is a lot of truth in this argument and I believe it is valid. But the flaw here is the assumption that the only way to produce quality software is by dramatically increasing cost. The implication is that clients put up with sloppy work because they can't afford high quality work. What this ignores is that there are many ways of controlling both cost and quality. In my next article, I will talk about some of the main causes of high cost and poor quality on software projects. See you then.

M@

14.7.08

 

Your first big challenge...getting the developers on board.

In my last article I put forth the theory that the most important thing to do when considering a new methodology, especially something as far-reaching as CM (or as I have renamed it, Integrity Management - I can do that, it's my article), is to make sure you have the support of your developers. They are smart folks and it is much better to have them on your side than against you. So how do you do that? How do you get a bunch of "I'll do it my way, thank you" hermits to agree to do extra work for the good of the project? The most obvious way to me is to show them that it really is for the good of the project. Most developers really do want their projects to see the light of day, and very few I have met actually want a project to fail. But you have to keep in mind that if they have been doing this kind of work very long, they have been fed many lines about a new framework, a new development methodology, or a new tool that is supposed to make their job easier and the end result more reliable. They are going to be skeptical, and that is a good thing. I personally don't want a bunch of gullible developers who jump on the next newest new thing that comes out. I like to see some caution when it comes to changing things that can make or break a development project.

I once worked for a small company that prided itself as one of the only "Microsoft Premier Partner" shops in the area. They stuck Microsoft posters all over the place and hung the Premier Partner plaques (the clear plastic ones with Bill Gates' "signature" on them) in the front lobby. They plastered their web site with Microsoft logos and "awards" (MVP, MCSD, MCP, etc.). It was pathetic.
Needless to say, when Microsoft came out with a new technology (or a repackaged existing one), our company president couldn't contain his ecstasy. He would have the admins immediately load it on a server and start "taking it for a spin". For the next two weeks we developers would get a constant barrage of emails with code snippets in them and subjects like "LDAP is so easy with this new _______" or "S-WEEET..Exchange interactivity via XML SOAP Protocol", or whatever. You get the idea.

Inevitably within a few weeks he would have satisfied himself that this new technology was top notch and ready for prime time, even if Microsoft hadn't (or as he would yell from his office at regular intervals, "Rock On!"). He would then have all of the developers drop the work we were doing on real customer requested features and bugs and start recoding the entire app for whatever this new technology was. I was coding in .Net in late 2001 and early 2002, back when even Microsoft considered it Alpha. Not in "hello world" projects, but in the company's main product, a very large home healthcare system.

To keep this environment going, he couldn't afford to have any developers who were cautious or thought for themselves. They all had to toe the line and follow the Microsoft lead in everything. We added remote messaging to the application to report application errors. Why? Because these things called "sinks" in the Enterprise Application Architecture Library COULD. No other reason needed. He also had us require the user to enter a description of the error and what they were doing when it occurred. Apparently (as I predicted) there is a particularly troublesome area of the application commonly referred to as "asdf", because that is what most of the replies contained. Some of them were more colorful and described what the computer (and by extension, we developers) could do with the error message. Little was left to the imagination on those. Did I mention that once we got the entire application converted to .Net (by then it was in Beta), we started converting the whole UI to Infragistics .Net Ultragrids? Those were beta as well, and oh, boy, what a bucket of fun that was!
Looking back, I see now that what that shop needed was a few skeptical developers who would stand up and say "that's dumb and I am not going to do it". At the time I was so happy to be back at work after a six-month dry spell that I wasn't going to be that developer. But I should have been.

After this experience, I am naturally cautious of any developer that responds to an announcement that we are going to "re-architect", "re-design", or "re-implement" anything, much less the way we control the changes that happen in an application development cycle with anything other than objection. So the first rule is: If nobody objects, be careful; they are either new graduates who haven't learned yet, or they don't plan on doing it anyway. They are just trying to get you to go away so they can get back to coding.

If they are good at what they do, you will get instant pushback in the form of "Oh, so this is going to fix all of our problems just like “Agile” did? Or was it “CMMI” that was going to do that? No, wait, I think it was “RUP” or maybe it was ___________!"

Don’t panic; this is a good sign because if they are skeptical and you bring them around, you will know that they are sincere about doing it, and if you have been led down the primrose path by me, they will let you know where the gaps are.

As I mentioned in my last article, the way to the developer's heart is by making his job easier, but only if it appeals to his built-in logic processor. Less "stupid _ _ _ _" and more development power is the key. A good developer can spot the holes in any logic from five miles up, so you have to know that what you are advocating is sound. Once you appeal to their sense of logic ("hey, you know, that makes a lot of sense!") and you demonstrate how it will make their job easier by taking the bull off of them, you will win them over.

I know this because I am, at heart, a developer, and I was never convinced of the value of CM until recently. In fact, it took approaching CM the same way I would any other project to get it broken down to the point that it made sense.

The next article will begin to describe a system of project integrity control that is based on very sound, very flexible and very simple logic. We will start at the most basic level of tracking and expand it to show how it can be applied to any project, no matter how complex it is. I hope you come back to read it.

M@

13.7.08

 

Where Does the Slippery Slope Start?

So where do projects go wrong? If you are like me, you have been on a lot of projects, and they always seem to be going along pretty well until all of a sudden things start to crumble. Deadlines begin to loom, hours get longer, and weird things start happening in the code that never happened before. As if that weren't enough excitement, the client finally pops into reality, starts to actually look at what it is he is paying for, and hates it...claims it is NOTHING like what he asked for and starts demanding that dozens of things be changed without affecting the schedule or the budget. Before you know it, you are working until midnight trying to get the database piece to match the business piece and both of them to match the UI.

Can anyone say where the slippery slope started? There are many possible culprits. For example, the requirements for most projects are woefully incomplete. Typically the person(s) designing the software are relying on the person(s) who create the requirements to know what they want and to express it in a way that fully conveys their desires. This is almost always a bad assumption. So one possible initial slide down the slippery slope of a software project crash could be incomplete requirements. Others could be incomplete designs, inaccurate estimates of the effort required, and reliance on new or unproven technologies or techniques.

Actually, the question of where most software goes wrong will be answered very differently depending on who you ask. A business analyst may blame incomplete requirements while a programmer may blame clueless managers. Testers may blame programmers for not fully understanding the requirements or not unit testing their code, while managers may blame unrealistic timeframes and deadlines imposed by the customer. Clients may blame the whole development organization, who may blame the whole mess on a lack of reliable funding. The fact of the matter is that there are plenty of fingers pointing and plenty of people being pointed at when a project starts to unravel, and nobody seems to be able to agree on where it started going wrong or what the root cause is.

At this point I would like to put forth the hypothesis that there is always a root cause of a project failure and it always starts at a predictable, avoidable juncture in the project. In fact, I propose that a project is doomed to failure (partial or full) before the project even starts.

As I have hinted in earlier articles, I think that Configuration Management is a critical part of any software project. Unfortunately I don't think the traditional view of Configuration Management can make or break a project. Not that projects with good configuration management aren't more likely to pull through without crashing, but I feel that there are many things that typical views of CM miss. In fact, it seems a bad idea to me to even fall back onto the phrase "Configuration Management" to describe the concepts I intend to cover. Instead, a new term is needed which covers the full spectrum of controlling all of the variables of an ongoing project. That term is one that I coined: Integrity Control.

Unlike the traditional view of Configuration Management, which tends to focus on procedure and checklists, Integrity Control is a holistic view that has a single goal: Ensure that a project remains in a known (preferably known good) state.

This is not to say that procedures are not important, but in my experience Configuration Management tends to pile on layers of procedures that, in the end, become the focus and the purpose, and the real goal is lost in the shuffle. Eventually there is a subtle shift from "Are we managing the configuration of this project?" to "Is the change control board meeting regularly, and are they publishing their meeting minutes?" Like many religions, the purpose is lost behind the layers of tradition in the form of forms, minutes, and matrices. Occasionally these procedures result in a well-managed project, but very often they just add to the workload of everyone involved without any tangible benefit.

I know that I will hear from some readers who will say "there is no tangible benefit because the procedures are not really followed...they are pencil-whipped and done just for the record. If the same people would actually follow the procedures, the results would always be predictable and reproducible!" Maybe. But that misses the point. The point is that, from what I have seen, that doesn't happen very often in the real world. Whether or not it would, in theory, work as designed matters little when the implementation of such a process taxes the people doing the work far beyond any perceived benefit. In the end they fill out the forms and follow the workflows because they are required to. They create a trail that, if audited, can prove that the process was followed. It is a subtle difference that can't be pegged to one incident or person, sort of a collective wink and nod that everyone participates in. The "this is just useless bureaucracy" attitude becomes a self-fulfilling prophecy as time is spent on a process that is being followed only for the record instead of on actually getting the work done, which reinforces the participants' perception that it is all just a big waste of time.

So just make people do their jobs. They are getting paid to follow the process, so make them. Problem solved, right? If that were the case, there would be far fewer projects in crisis mode and many more being completed on schedule and on budget. Numerous industry surveys have shown that this isn't the case, though. Even projects that follow rigid methodologies (CMMI, Six Sigma, ITIL, Agile, whatever) tend to drift into the rocks. I have seen just the opposite. The harder you push developers to do "process stuff", the more they resist. In one case they proved their point by following the process to the letter, and the entire project came to a grinding halt.

I believe that the reason this happens so often is not because developers are a difficult bunch or because they are too focused on writing code to do the rest of their job. I believe that it happens because the people doing the work are really smart. The average software developer can spot a pointless procedure in a millisecond and can find a thousand subtle ways to avoid following it that can't be spotted by even the sharpest-eyed manager. In most cases, there isn't even any hard evidence that the process is being avoided. But it is.

In short, when smart people are forced to do a lot of work with little or no perceived gain, they are certain to find ways to game the system. To illustrate my point, I want to relate a story that I know is true because I was there at Fort Gordon, GA in December, 1992 to participate. I call this one:

Rudolf the Red Faced Drill Sergeant

The US Army does not have the reputation of being the brains of the military, but there are many, many very smart people serving our country in the US Army. I know because I had the pleasure of serving with some of them in the Satellite Communications field. The training for the Strategic Satellite Communications Technician job was a very intense year-long technical school that started with basic electronics and progressed through extremely complex communications theory, protocols, transports, and routing. It was a tough school and, not to toot my own horn (there is a point to this), only the very sharp survived the full year. The only military school with a higher attrition rate at the time was the infamous Navy Nuclear School.

There was a problem with the schools though. The students were managed by combat-trained drill sergeants. I am not implying that all “drill instructors” (as they insisted on being called) are mentally-challenged, but ours pretty much were. They knew we were smart and they spent many long nights dreaming up tactics to keep us “in line”.

Army Basic Training is tough on some folks. Personally I didn’t find it very stressful. I can endure pretty much anything for two months and I was very good at gaming the system. The problem came when we got to the year-long SATCOM school. The Army was supposed to ease up on us a little in technical schools and let us concentrate on learning. Instead they put drill sergeants in charge who made no secret of their disdain for us "bunch of smart privates". The idea that we should be kept in line was something that most of them shared and lived for. They would impose new rules constantly, do surprise inspections, call formations at all hours of the night, and generally use basic training tactics to keep us from getting away with anything. Or so they believed.

But we were smart. With each new rule that was dreamed up, we quickly found a handful of loopholes and workarounds that got us back to the level of freedom that we wanted. After several months of this type of treatment, beating the system becomes a survival tactic and, in our case, a game. I have heard of prisoners of war who find ways to get anything they want and do pretty much whatever they want to do right under the guards’ noses, and this is the type of mentality that developed at the SATCOM school. It came down to "Adapt or collapse", which several of my fellow soldiers ended up doing. Several simply left the Army. Walked out and went home in the middle of the night. Several went crazy, some tried to kill themselves by various oddball means, and one stabbed his girlfriend and was found by the Military Police digging a hole to bury her out in the camping ranges. She survived and he is, as they say in the Army, "Making little rocks out of big rocks" at a federal hard labor prison. These were people who could not bring themselves to adapt to the overbearing environment we spent way too long in.

But most of us kept our sanity. We did it by gaming the system. We would find ways to avoid marching in formation, even though it was an absolute requirement. We would go on road trips instead of subjecting ourselves to the busy work that the military calls "TAC Detail". We would spend days sleeping in the barracks instead of painting curbs, without getting caught. We became experts in military bureaucracy and policy and used it time and time again to our advantage. And very few of us were ever caught. I actually had a copy of the Army’s dress code in my pocket at all times in case I was accused of not complying with it. Several "drill instructors" were amazed to learn that the regulations actually forbid polishing boots until they are shiny. Shiny is bad in battle.

It is not that we were bad soldiers. We were what I would consider the elite, the highly admired "skaters" and "shammers" who got out of anything we didn’t want to do, but we were all top-notch students. We were doing the job that the Army required us to do, which was learning SATCOM to the best of our ability. The rest was just pointless BS that we made a game of avoiding. It wasn’t policy, it was harassment.

To highlight just how blatant and skilled we became at using the rules to our advantage, we may be the only platoon in the history of the Army to take a full PT test (the physical fitness test that is used as a billy club by many drill sergeants because it can determine whether you pass the school you are in, get promoted to a higher rank, or even stay in the military) in full formation. The PT test is serious business, as anyone who is or has been in the Army knows.

We had already completed our year of school (over 2000 classroom hours) and we were waiting to go to our first real duty stations. We were being held for a week until Christmas Exodus started (the two week Christmas holidays during which the regular military all but shuts down) and the order came down from our company commander that everyone must pass a PT test before leaving. We had just passed one the week before and were all completely fed up with the training base and the drill sergeants. The final PT test was just adding to the insult. As we saw it, we had paid our dues and this was just harassment. So we came up with a last-minute plan as we walked over to the PT field.

Since the PT test is such an important part of the Army, it is highly regulated. There are rules which state that "drill instructors" cannot force anyone to exercise the morning before a PT test, which means no "Drop and give me twenty pushups!" or any other type of physical reprimand. We also knew that once the PT test started, the drill sergeants could not interfere with us in any way. They could not insult us, obstruct us, or otherwise affect our ability to do our best on the PT test. Further regulations stated that they could not force us to do any physically strenuous activity for four hours after the PT test. It was carte blanche for us to behave badly and get away with it.

When the time came for our run, we all got into formation on the track. This was just downright weird because a PT test run is supposed to be an individual activity that shows an individual’s best running time. They knew the rules though and could not interfere. All they could do was read the rules script and blow the start whistle. And cringe. Because, being so close to Christmas, our platoon leader, genius that he was, started calling cadence. This is also off the wall in a PT test. But the best part was what the cadence consisted of. Not "Airborne Rangers rolling down the strip!", but "Rudolf the red-nosed reindeer". The drill sergeants’ faces all turned beet red. They knew they had been beaten at their own game.

“All of the other Reindeer”
“Reindeer!”
“Used to laugh and call him names!”
“Like Pinocchio!”…

It is a tradition on Army bases for the base commander (usually a General) to have a "Commander’s Run" on the morning just prior to Christmas Exodus. This is a big event because every platoon on base (thousands of soldiers) participates, representing their particular company. It really is an awe-inspiring thing to see thousands of fit, proud US Army soldiers running in a formation that stretches over a mile around the parade field. And the traditional cadences they sing with all of the gusto they can muster, each company trying to out-sing the others, are incredible to behold, their regimental flags fluttering ahead of each company. But as the formation rounded the end of the parade field, they started falling silent, then slowly gave way to chuckles, then uproarious laughter. There on the test track was a small formation taking a PT test singing Rudolf the Red Nosed Reindeer. In cadence. Very loudly. And at the edge of the track were five drill sergeants with extremely red faces staring holes into the ground as their commanders, one by one, rounded the corner and realized what was happening. The General was not impressed. It was one of the greatest moments of my military service.

My point in telling this story is to illustrate what I have already stated. Smart people will not be forced to do things that are pointless. If they are, chances are very good that it will not turn out as you expected. Developers are smart people, many of whom consider configuration management pointless, and who can blame them? Their experience has taught them that it never makes much difference in the end; it just adds to their workload.

So how is Integrity Control any different? If developers are such a smart bunch, what makes you think that you can pull the wool over their eyes by changing the name of a process that they already perceive as pointless? Will they not just once again cheat the system and go back to doing things the way they think they should be done?

If that were all I was advocating, then yes, they would. It would be no different than calling a reprimand a "counseling session". Changing the name doesn't change the content. What is needed is a change of perspective, a new analysis of what we are really trying to accomplish by managing changes, configuration, and dependencies in a project. In the end, the questions that need to be answered are:

1. How does this make the developer's life better?
2. How does this increase value to the customer or client?
3. Is the tradeoff (effort vs. gain) worth it?

It may seem odd that the first question is focused on the developer. After all, he is paid to do the client's work. Why should making his life better rank so high when implementing something that is, in the end, supposed to benefit the client through a cleaner development process? The answer is obvious to me: "If the developer ain't happy, ain't nobody happy."

We can debate the wisdom of catering to the demands of the developer and whether or not their paycheck (or lack of it) should be incentive enough to do as they are told, but as I already pointed out, developers are smart. This is a good thing, you need smart developers. But being smart, if they don't believe in what you are implementing, it will be doomed to fail. They will inflict a death by a thousand invisible cuts that are impossible to track down but lethal nonetheless, and it will be the project managers standing red-faced as they do the equivalent of singing a goofy Christmas carol during a project status meeting. Developers must be on board with the plan to protect a project's integrity or it is doomed to fail. More about how to do this soon.

The second question is "How does this increase value to the customer?" The customer is paying the bills. The customer wants everything now for nothing. Adding layers of work to a project translates to more billable time and higher cost. What is the client getting for this additional expense? A report each week that says "We are doing good"? How will controlling project integrity translate to value for the client? It must either result in less expense or better results to be accepted. In the end the client is interested in having bug-free software that does what they asked it to do with the least amount of money expended. Therefore, to be accepted, a process must add value by either saving time (billable hours) or by making the end product more reliable, easier to use, or richer in features than it could otherwise be.

The last question is the sum of the first two. Is the tradeoff worth it? If we expend this extra effort, dedicate ourselves to yet another methodology, and do all of the things needed to make it work, will the time saved by having fewer bugs and easier-to-troubleshoot environments be worth the additional expense of keeping it up? Will the client see enough improvement in the product, or in the cost of obtaining it, to make it worthwhile for them to sign off?

Over the next few articles I will begin exploring these aspects of the real goal of project integrity control and how, by changing our views of what the reasons are for doing it, we can start to formulate a system that really works for the developer, the project, and the client and gets us closer to the goal of functional, bug-free software with a minimum of rework. I hope you will stick around for it.

M@

3.7.08

 

The Six Mysteries of Software Lameness

As this series continues, I think it is becoming clear where it is going. Yes, I lured some folks in with some pretty attention-getting verbiage, but let's face it. If I had just come out and said "I am planning to do a series of articles about the value of Configuration Management in software projects!" you could have heard the wind blowing and crickets chirping as people avoided this blog like the plague. I might as well say I am doing a series on the value of flossing daily. Hopefully I have gotten the attention of some intelligent people who are really interested in the very real problem of why software is so bad and you will hang around for a while so I can present my views on the problems and what I see as the best solutions.

On the other hand, you may be a hard-core coder who really doesn't care whether or not users are miserable. After all, they are a really clueless herd and no matter what you do, you will never be able to dumb down your software to make it usable enough for them. If this is your view, then thanks for stopping by. I doubt anything here will interest you from this point on. However, if you are one of those programmers, developers, software engineers, or whatever title you have who wants to find the answers to the great mysteries of software lameness, please stick around and feel free to comment when I am clueless.

So what are the great mysteries of software lameness? As I see it, the major unsolved ones are:

1. Why don't users know what they want?
2. Why are requirements never any good?
3. Why does something I could code between midnight and four a.m. take development teams three months to complete?
4. Why does every software project I run across seem to run off the rails about 2/3 of the way through?
5. Why can't software be created like other products?
6. Why is management so clueless?

I intend to shed light on all of these mysteries (well, #6 is questionable) with a single approach. As hard as it may be to imagine, I also intend to do it in a way that the phrase "Configuration Management" doesn't send you instantly into REM cycles.

Pretty lofty goals, I agree. But I know something that most "authorities" (meaning "people who have authored a book on the topic", not "people who are experts") seem to miss. I did not learn Configuration Management by getting my MBA and studying process theory. I learned it by living years as an increasingly frustrated Software Developer (programmer, engineer, architect, whatever...I had all of those titles) who worked very hard to create good software only to have it changed beyond recognition and hacked up in a last minute panic because the requirements changed, the client ran out of time and/or money, or they suddenly decided that SQL Server was cheaper than Oracle two weeks before deployment to production.

For years I have been saying "There must be a better way to do this!". I believe now that there is. It is not a new development language; each of them has its own set of flaws and frustrations. It is not a new development methodology; I have tried them all and they all end in chaos. It is a simple process that is implemented early on and followed throughout the project.

Oh...I should have warned you before using the "Proce_ _" word. Sorry about that. But don't panic. Please, read on. I am not a Proce_ _ evangelist by any stretch. I believe that the ultimate end of any "Process methodology" is complete paralysis. What good does that do? I mean, unless you are a government contractor on a cost-plus project?

I have had several readers ask me when I was going to get into anything of substance. The time is here. My next posting will lay the groundwork for what I believe is the best way to avoid the seemingly inevitable trap of death march projects. I promise.

M@

1.7.08

 

Configuration Management - A process, not a toolset

One of the first questions that usually comes up when talking about Configuration Management is "What software should I use?" While the choice of tools is important, this should not be the first decision made. I would argue that you can't even answer that question until you have gone through a lot of other stuff first.

In many ways implementing CM is similar to software development itself. With a typical software project the client comes to the table with a very broad, very sketchy idea of what they are hoping you can give them. They don't know much about the way software is developed, they know nothing of the available development languages and environments, and they aren't sure what the purpose of a database is. What they do know is that they want a system that allows their customer service folks to pull up a user account, view the account history, see any notes and alerts on the account, and see how long it has been since the customer called last, all based on the caller-ID information of the incoming phone call.

The funny thing about it (funny as in remarkable, not as in "ha ha") is that they THINK they know exactly what they want. Anyone who has done any work directly interacting with the client knows what I am talking about. The best thing to do is to listen intently without interrupting, take careful notes, and let them go through all of their ideas about how it should work. Then (probably at a later meeting), each of the features sets they identified should be clarified. You may ask a series of questions such as:

What if two callers have the same phone number (for example, a man and his child or spouse) but different account numbers?
What if there is no caller ID information with the call?
What if someone calls from a friend's phone? How do you know you are really speaking to the right person?
What if there is more than one phone number for a person (examples: cell phone, work phone, home phone) and they could call from any of them?

These types of Q & A sessions usually leave the client dumbfounded at the number of scenarios and situations that they simply did not think of when they were dreaming up this simple application. As each point is worked through, a clearer picture emerges of what the application really needs to be able to do and how it will do it. Eventually everyone has a clear understanding of the problem to be solved with the software and the real planning can start. It would make no sense to decide on a development environment and order software licenses before you even know what they really wanted their system to do.
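The ambiguity those questions expose can be made concrete. Here is a minimal, hypothetical sketch (the names `Account`, `PHONE_INDEX`, and `lookup_accounts` are all invented for illustration, not from any real system) showing why the naive assumption of "one phone number maps to one account" breaks down once the client's scenarios are worked through:

```python
# Hypothetical sketch: resolving incoming caller-ID to customer accounts.
# All names here are illustrative, not from any real system.
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    name: str

# One phone number can map to several accounts (spouse, child, shared line),
# and one account may list several numbers (home, work, cell).
PHONE_INDEX = {
    "555-0100": [Account("A-1", "Pat Smith"), Account("A-2", "Sam Smith")],
    "555-0199": [Account("A-3", "Lee Jones")],
}

def lookup_accounts(caller_id):
    """Return a list of CANDIDATE accounts; the rep must still verify identity."""
    if not caller_id:                       # no caller-ID delivered with the call
        return []
    return PHONE_INDEX.get(caller_id, [])   # unknown number -> empty list

print(len(lookup_accounts("555-0100")))     # two candidates, not one
print(lookup_accounts(None))                # no caller-ID: fall back to asking
```

The point of the sketch is the return type: a list of candidates rather than a single account, which forces the screen design, the identity-verification step, and the "no caller-ID" path to be discussed with the client up front instead of discovered in production.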

CM is very similar. When I first meet with a client and they make the statement "We want to implement a CM system", I know that this is just the very tip of the iceberg, and almost without exception they don't really know what is involved in what they are asking for. But it never fails that not very long into the conversation they will ask what CM software tools I recommend that they look at.

Now let's think about this for a second. The choice of a tool for any job requires knowledge of how the tool will be used. For example, if you decide to hang a picture on a wall, you need to know how this will be done before selecting a tool. Is the wall wood, sheetrock, or concrete? How heavy is the picture? What height will it be hung at? If you fail to consider these questions, you may end up trying to drive a screw into sheetrock with a hammer or screw a nail into concrete with a screwdriver. Obviously these would be poor tools for the job, but without knowing the answers to the simple questions up front there is no way to know what tools you will use or how you will use them. Besides, trying to nail a screw into sheetrock with a hammer will probably not result in a happy client.

It makes just as much sense to select a configuration management tool before you have gone through the process of determining exactly what the client's goals and requirements are for configuration management.

Perhaps you are not a CM consultant attempting to implement CM for a client though. What if you are just trying to do the right thing on your project? The most natural thing to do is start looking for a software tool that will make implementation easier. After all, if the tool will do it, why bother going through the pain of figuring everything out up front? Unfortunately that logic does not hold up well. When you don't know what you really need the tool to do, they all look equally good. It then comes down to a matter of which one looks the most useful.

I used to have a friend who loved power tools. He would see one on some home improvement show and decide it was just the thing he needed, run out and buy it, and then have no idea what to do with it. I came by one day and he wanted to show me his new compound miter saw. I agreed that it was very cool and potentially useful, but questioned where he planned to use it since he lived in a smallish apartment. Of course this question had never entered his mind. Neither had what he was going to cut with it, or for what purpose. But it was a VERY nice saw...nobody could say any different. He also had several electric guitars although he couldn't so much as play "Wild Thing" or "Foxy Lady" on them. And several high powered rifles and a handgun or two even though he would have to drive 50 miles to be able to actually fire them legally. And a motorcycle that...never mind. You get the picture.

My friend's problem is that he bought the tool and then looked for something to use it on. This is the way most people shop for CM software. They purchase it (usually for an unbelievably high price), install it, and then (and ONLY then) start thinking about how it will be used. Of course software vendors love this approach because it means they don't have to really solve the problems of CM; they just have to offer a longer feature list and cooler screen shots on their web site. And there you sit, staring at a computer screen with software that costs more than your annual salary, wondering what the heck you are supposed to do with it. To rely on prepackaged software is to turn part of the decision-making process over to programmers you have never met and who don't really care whether you have to spend your time trying to make it work right. The goal of most software companies is to sell software, not to make your job easier, despite what the sales guy tells you.

Over the next few months I will be going through the process of preparing a project for true configuration management. You will see that the topic of what software I recommend doesn't come up for quite some time. The truth is, I believe a good CM plan does not assume the use of ANY software.

If you are serious about implementing Configuration Management in your project(s), stay with this blog and I will guide you through it. I find it to be a fascinating topic that actually does make a lot of sense when the details are better understood.

M@
 

The difference between a developer and a programmer.

Why is software development so hard? It could be argued that software development is so hard precisely because programming is so easy. Since that seems contradictory, let me explain what I mean.

Whether or not most people realize it, the titles "developer" and "programmer" are not interchangeable. Nearly all of us (myself included) start out as programmers. We learn the programming languages, the syntax, the data structures, the program flow controls, and how to assemble these elements to produce useful components. A good programmer knows the languages, the object models, and the techniques that are capable of producing stable, working pieces of code. But a good programmer is not a software developer. Some people remain very good programmers without ever moving to software development.

To become a software developer takes a good programmer who breaks out of the "program" mindset. A program does a specific task very well, but most professional development projects must go beyond specific tasks. When a company or government agency starts a project it is very seldom a project to create a program. Instead, it is typically to create a solution. This is a very important distinction. A program is a software solution to a single, focused problem while a system is a collection of programs that satisfy a much broader class of problems.

For example, a project to transform data from a legacy system to XML results in a program while a project to make data across legacy systems available to a web-based front end is a system. The difference is that while the transformation program does a single job very well, the legacy to web system consists of many, many components, any of which at their core are a program.
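To make the "program" half of that contrast concrete, here is a hedged sketch of a single-purpose transformation in the sense used above: it turns legacy delimited records into XML and does nothing else. The field names and input format are invented for illustration:

```python
# A minimal single-purpose "program": transform legacy delimited records
# into XML. The record layout here is invented for illustration.
import csv
import io
from xml.etree import ElementTree as ET

def rows_to_xml(text):
    """Convert CSV text into a flat <records><record>...</record></records> tree."""
    root = ET.Element("records")
    for row in csv.DictReader(io.StringIO(text)):
        rec = ET.SubElement(root, "record")
        for field, value in row.items():
            ET.SubElement(rec, field).text = value
    return ET.tostring(root, encoding="unicode")

legacy = "id,name\n1,Widget\n2,Gadget\n"
print(rows_to_xml(legacy))
```

A system, by contrast, would surround dozens of such focused pieces with scheduling, error handling, security, a front end, and the glue between them, and it is managing that breadth that separates the developer from the programmer.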

From this view of program vs. system, it makes sense that a software developer is not the same as a programmer. While a programmer must be an expert in a very narrow domain of the overall problem (depth), a developer must have a solid understanding of both the depth and breadth of the problem. Not only must the developer know how to create programs, but he must also know how to assemble the many programs that perform the various tasks required to solve the larger problem.

Going back to the original statement, that software development is so difficult because programming is so easy can now be seen in a new light. As many modern programmers come into the trade, they cut their teeth on highly abstracted development environments such as Visual Studio and NetBeans. While this is good for productivity, it is not so good for learning what is going on "behind the curtain" as programs are compiled and linked. With the excellent development front ends available for free today, it is entirely possible for someone to make a living as a programmer without really understanding the nuts and bolts of what they are building. Programming has become much easier.

While the same development IDEs make it easier to build solutions (i.e. systems) as well as programs, the context of what is being done is so hidden that programmers attempt to apply the programming mindset to system development. This results in some pretty convoluted code and produces a situation where the person or people writing the application don't really understand the logical separation of the pieces, how the pieces interact, or how to manage the interaction between them. They tend to flatten everything out and put business logic in the views along with data access. In other words, the tools available make it easy to jump from a program to a system without having the knowledge and experience of a real developer.
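The layering described above can be sketched in a few lines. This is a hedged, illustrative example only (the function and field names are invented): in the "flattened" style the view would query the data store and apply business rules inline, while here each concern is its own small piece that can be changed and tested independently:

```python
# Illustrative layering sketch; all names are invented for this example.

def fetch_orders(customer_id, db):
    """Data access layer: the only code that knows how records are stored."""
    return db.get(customer_id, [])

def overdue_orders(orders, today):
    """Business logic layer: the rule for 'overdue' lives here and only here."""
    return [o for o in orders if o["due"] < today]

def render_view(orders):
    """Presentation layer: formatting only, no queries and no rules."""
    return "\n".join(f"Order {o['id']} is overdue" for o in orders)

# A stand-in for a real data store, keyed by customer id.
fake_db = {42: [{"id": "X1", "due": 5}, {"id": "X2", "due": 20}]}

overdue = overdue_orders(fetch_orders(42, fake_db), today=10)
print(render_view(overdue))  # only X1 is overdue on day 10
```

When the three concerns are flattened into one function, changing the storage format, the overdue rule, or the display each means reopening the same tangled block, which is exactly the convoluted code the paragraph describes.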

This is by far not the only reason that software development is difficult to get right, but it is a large contributing factor. The introduction of the "Software Engineer" title further blurs the line between programmer and developer because it gives no context as to what level the person is working at or is capable of working at. Many programmers graduate with a Computer Science or Software Engineering degree with no domain knowledge of the languages they are hired to work in, much less the entire development environments. They are often told by their professors in college that they are going into the working world perfectly prepared to produce world-class software and should expect salaries to match those skills. This from a guy who has probably never worked on a business development project outside of the University he teaches at. This sets false expectations for the new programmer by giving them the impression that they already know all there is to know about software.

I was in the Army and went to SATCOM School. This is a very intense technical school that lasts for a full year. That is a full year of eight hours per day of classroom instruction, which comes to about 2000 classroom hours used to teach everything from electrical theory to satellite orbits, data encryption, error correction, and troubleshooting techniques. I can say without any doubt that the Army (and I would assume any branch of the military) is excellent at teaching highly technical concepts. You have no option; you learn it through complete immersion or else.

I finally graduated from SATCOM School after exactly one very long year, amazed at what I knew. I had just spent most of my waking hours for an entire year learning about a single subject: Satellite Communications. I was absolutely certain I was ready to jump into the job and perform it perfectly from day one. Imagine my shock upon arriving at my first duty station (Fort Belvoir, Virginia Earth Terminal Complex) when the NCO in charge told me "Forget everything they taught you at that school. We don't really do things that way." The amazing part was that he was telling the truth.

In many ways, that is the way of software development. In college you learn the theory, the science, the techniques, and then you land your first programming job as a Software Engineer. You walk in expecting to be producing world-class applications in a few days, but if you are lucky enough to be assigned to work with a senior developer, you quickly learn that most of what you learned is good information, but not the way things are really done. The real learning starts when the classes end.

It sometimes sounds as though I have a chip on my shoulder for new Software Engineering or Computer Science graduates, but this is not the case. I have a chip on my shoulder for anyone who is unwilling to learn and relearn as the situation requires and instead spends their time arguing the virtues of obscure programming techniques.

To become a good developer, a Software Engineer or Computer Science major (or anyone else) must accept that they know very little about the day to day work that they have been hired to do. They must strive to continue to learn every day and actually listen to the men and women around them who have been doing this work for eight or ten years. Eventually you will become a good programmer, and if you keep learning, you will become a good Software Developer. Only when you arrive at that level of knowledge will you realize how little you knew as a green recruit.

M@