The Mythical Man-Month Revisited

Ideas Time Has Not Been Overly Kind To

Brooks wrote about waterfall lifecycle models:

"Much of present-day software acquisition procedures rests upon the assumption that one can specify a satisfactory system in advance, get bids for its construction, have it built, and install it. I think this assumption is fundamentally wrong, and that many software acquisition problems spring from that fallacy. Hence they can not be fixed without fundamental revision, one that provides for iterative development and specification of prototypes and products."



Here Brooks is clearly decrying the waterfall lifecycle model and is on the verge of embracing true iterative development, seemingly stopping just shy of recommending iterative development of the actual shipping implementation. Elsewhere he notes:

"Lehman and Belady offer evidence that quanta [of updates to software] should be very large and widely spaced or else very small and frequent. The latter strategy is more subject to instability according to their model. My experience confirms it: I would never risk that strategy in practice."

Recent history, at least, favors the opposite position. I think most organizations, likely in many cases without any real decision on the matter, practice something akin to the continuous integration favored by XP and end up with small and frequent quanta. I doubt there's much support for the other position now, but consider this later passage:

"In most projects, the first system built is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done piece-by-piece. But all large-system experience shows that it will be done."

The piece-wise redesign sounds like refactoring, but I don't think it is. I think he's saying you will invariably throw away the whole implementation, either all in one go or a little bit at a time, so it's wise to "plan to throw one away." I still hear people say this sometimes. It would probably not be acceptable now; certainly I'd be embarrassed to have to do it. But this is the world in which Brooks lived when he wrote this book. Even in a lifecycle that tried to reject change after gathering the requirements (or maybe because of that), teams still ended up throwing one away.

It's easy to see why Brooks couldn't fully justify the essential invitation of change through the development cycle that characterizes evolutionary prototyping, XP, and other iterative methodologies, although he could certainly see the possible value in it.

Bear in mind that Boehm hadn't yet finished his work on estimating the costs of change when Brooks wrote this. That work would ultimately lead to the widely held belief that failing to catch defects very close to the point of their introduction imposes costs exponential in the time between introduction and discovery. This idea is still entrenched in our industry despite the fact that, if it were true, practices that allow for change throughout the development lifecycle would be exponentially more expensive than the waterfall model, which they clearly are not. (N.B. They are inarguably more expensive; they are just not exponentially so. A toy calculation below makes the point concrete.) Equally, one still hears variants of this:

"The fundamental problem with software maintenance is that fixing a defect has a substantial (20-50 percent) chance of introducing another."

I do not believe the risks are this high now in any reasonably well-run organization. They may come close to 20 percent but should be nowhere near 50. In short, we can claim to have become better at maintenance over the past 30 years. Brooks, though, had to play the ball where it lay at the time he was writing, and so would not have seen some possibilities we enjoy today as being legitimate or responsible then.
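
Here's that toy calculation. Assume (purely for illustration; the 10x multiplier is mine, not a figure from Boehm or Brooks) that fixing a defect costs ten times more for every lifecycle phase it goes undetected:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double factor = 10.0;  /* assumed per-phase cost multiplier */
        for (int phases_late = 0; phases_late <= 4; phases_late++)
            printf("caught %d phase(s) late: relative cost %6.0f\n",
                   phases_late, pow(factor, (double)phases_late));
        return 0;
    }

Under this model a defect caught four phases late costs 10,000 times as much as one caught immediately. If anything like that held in general, the change-friendly methodologies would be unaffordable, and they plainly are not.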
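
Brooks's maintenance figure is just as easy to make concrete. If each fix independently introduces a new defect with probability p (the independence assumption is mine, for illustration), one reported defect triggers an expected 1 + p + p^2 + ... = 1/(1 - p) fixes in total:

    #include <stdio.h>

    int main(void)
    {
        /* Brooks's quoted range of fix-induced-defect rates */
        const double rates[] = { 0.20, 0.35, 0.50 };
        for (int i = 0; i < 3; i++)
            printf("p = %.2f: expect %.2f fixes per reported defect\n",
                   rates[i], 1.0 / (1.0 - rates[i]));
        return 0;
    }

At the top of his range you perform two fixes for every defect reported; even at 20 percent the tax is a quarter of your maintenance effort.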

On overall system design and requirements:

"Often the fresh concept does come from an implementer or from a user. However, all my own experience convinces me, and I have tried to show, that the conceptual integrity of a system determines its ease of use. Good features and ideas that do not integrate with a system's basic concepts are best left out. If there appear many such important but incompatible ideas, one scraps the whole system and starts again on an integrated system with different basic concepts."

There is a certain smugness at work in the idea that the architect will make better decisions here than the user will. Certainly this view is out of favor now. We normally try to find out what the user wants (somehow) and then find a way to design our software to provide this to them in the most sensible manner we can envision. I can't imagine saying "no" to the user regarding a feature just because it doesn't fit into my current conceptual view of the system, and the notion of throwing out the current system so we can devise a better one that embodies all the features we want is a luxury no one can afford.

Plainly put, our job as software developers is to distill the system's conceptual integrity given the user's requirements. It's not our job to pick over the user's requests, looking for some set of functions that makes sense as a whole to us. It is also our job to take our lemons and make lemonade. We don't have the option to throw out our organization's software inventory when it doesn't match up well enough with new requirements. We must find a way to refactor that inventory toward a design that accepts (however grudgingly) the complete requirements, both old and new.

"A discipline that will open an architect's eyes is to assign each little function a value: capability x is worth not more than m bytes of memory and n microseconds per invocation. These values will guide initial decisions and serve during implementation as a guide and warning to all."

Even in embedded development where I make my living, I rarely see anything like this level of budgeting detail. I'm sure it was an absolute necessity in the hardware-poor past, though it makes me awfully glad to live in the current age of hardware excess.
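
For what it's worth, a modern embedded team could enforce a crude version of this discipline mechanically. Here's a minimal C sketch, assuming C11 and a POSIX clock; the capability, its name, and the budget figures are all invented for illustration. The memory budget ("m bytes") is checked at compile time and the time budget ("n microseconds per invocation") at run time:

    #define _POSIX_C_SOURCE 199309L  /* for clock_gettime */
    #include <assert.h>
    #include <stdio.h>
    #include <time.h>

    /* Invented budgets for an invented capability. */
    #define ROUTE_TABLE_BUDGET_BYTES 4096  /* capability is worth m bytes... */
    #define ROUTE_LOOKUP_BUDGET_US     50  /* ...and n microseconds per call */

    static unsigned char route_table[4000];  /* the capability's footprint */
    static_assert(sizeof route_table <= ROUTE_TABLE_BUDGET_BYTES,
                  "route table exceeds its memory budget");

    static void route_lookup(void)
    {
        route_table[0] ^= 1;  /* stand-in for the real work */
    }

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        route_lookup();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long us = (t1.tv_sec - t0.tv_sec) * 1000000L
                + (t1.tv_nsec - t0.tv_nsec) / 1000L;
        printf("route_lookup: %ld us against a budget of %d us\n",
               us, ROUTE_LOOKUP_BUDGET_US);
        return us <= ROUTE_LOOKUP_BUDGET_US ? 0 : 1;  /* fail the test if over */
    }

Wire a handful of these checks into the build and the "guide and warning to all" becomes automatic rather than a matter of discipline.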

On source code control and configuration management:

"First, each group or programmer had an area where he kept copies of his programs, his test cases, and the scaffolding he needed for component testing. In this playpen area there were no restrictions on what a man could do with his own programs.... When a man had his component ready for integration into a larger piece, he passed a copy over to to the manager of that larger system, who put this copy into a system integration sublibrary..."

I shudder to think how miserable and invariably risky this manual approach must have been, but what alternative was there? Even the advent of RCS was still many years in the future.

Brooks spends a lot of time in the book on the subject of identifying and training architects and designers: "How to grow great designers? ... systematically identify top designers ... assign a career mentor to be responsible for the development of the prospect ... devise and maintain a career development plan for each prospect ..."

The sad thing here is that in most development organizations I know of, design is not a desirable thing in its own right. Outside of the development group itself, no one knows how to design, and even if anyone does, the issues of how to identify, train, use, and retain top design talent never actually come up.

Essential Ideas

Brooks presents several essential ideas to consider.

The Surgical Team

The surgical team is Brooks' proposed development team model. At its head is the chief programmer or surgeon. Everyone else supports him. It defines the following roles:

  • Surgeon/Chief Programmer, who does the actual development, design, testing, and documentation.
  • Copilot, the Surgeon's right hand man.
  • Administrator, who handles money, space, equipment, etc.
  • Editor, a technical writer who finishes the Surgeon's documentation.
  • Secretaries, one each for the Administrator and Editor.
  • Program Clerk, who manages technical records for the team.
  • Toolsmith, who manages custom tools for the team and the development environment.
  • Tester, who defines and executes test cases.
  • Language Lawyer, a programming language expert for some programming language of interest.

Even with the provisos listed in the book (one Language Lawyer can support two or three Surgeons, and the Administrator may be able to look after two teams), this all seems excessive. I'm not at all clear on what the Program Clerk does, even after reading through the description a couple of times. I doubt the two Secretaries are truly necessary.

From what I do understand, one good Secretary or Administrator could likely look after all of the duties assigned to the two Secretaries, the Administrator, and the Program Clerk. Also, the Tester or Toolsmith could handle many of the Program Clerk's duties. I expect the Toolsmith role would be controversial now; it's not clear to me that permitting each development team to vary its tool set and development environment is either desirable or necessary, though people with these skills are obviously needed in the organization. I'd expect teams to share this role in practice.

Paring this back to the Surgeon, Copilot, Tester, Secretary/Administrator, and Editor would likely suffice for the team itself. An organization that supports multiple teams could provide the Language Lawyer and Toolsmith, or the team could subsume the roles itself.

Would such teams use their manpower effectively? Absolutely, if for no other reason than the well-defined roles. Each person knows what he must do and, perhaps more importantly, what's outside his purview. This alone is a welcome change from the relative chaos of role definition in most organizations. Frequently more than one person is doing the same thing while, unnervingly, whole aspects of the work exist that no one is actually doing.

In many organizations, too, technical decision-making has no clear process: people are haphazardly CC'd on emails, threads wander on with no clear direction, and subjects are eventually exhausted with no obvious outcome. In the surgical team proposal, at least, technical decision-making rests first and foremost with the Surgeon, who may delegate at her discretion. Being unambiguous about who is responsible for making decisions is, I expect, the first step in actually making them, and here it is unambiguously the Surgeon's responsibility.

This model also takes advantage of the well-documented disparity in measured productivity between programmers. The Surgeon is your star, the Copilot the understudy. Everyone else must ensure that these two people can maximize their contributions. That's smart.

Brooks puts forward his proposal as much for its scalability as for its optimization of productivity. Larger projects would have a set of these teams. Decisions requiring the input or agreement of more than one team — and ultimately the entirety of the architecture, which presumably the whole set of teams would have to ratify — would require the attention and involvement of the Surgeons, rather than the whole teams. In this, the surgical team proposal is reminiscent of the Scrum of Scrums and similar proposals for scaling XP projects.

The main disadvantage of the proposal is that it may reinforce the tribal boundaries within a larger organization where the characteristics and standards of each team diverge based on the tastes of the Surgeons that lead them. All in all, though, I'd expect the good points here to outweigh the bad. Allowing your A guys to do the A work and being painfully clear on who should make decisions would help many organizations enormously.

No Silver Bullets

If there's one thing that will stay with me from reading The Mythical Man-Month, it's Brooks' discussion of accident and essence in this essay. Its central conjecture is this:

"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability or simplicity."

Most people have likely heard (or misheard) this, as I did, in a degenerate form something like "development productivity cannot increase by an order of magnitude." This is most definitely not what he's saying. He freely admits the possibility that combinations of improvements may yield an order-of-magnitude improvement; he draws the line at single factors. So there is no one, single silver bullet. This is an important distinction because, once understood, it becomes clear that this statement was probably true then and is in all likelihood true today. Knowing this is a tremendous boon in sorting out the nonsense from the truth in development.
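
The arithmetic of the distinction is worth spelling out. Three independent improvements of 2x, 2x, and 2.5x (hypothetical numbers of my choosing) compound to exactly the order of magnitude Brooks says no single factor can deliver:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical independent gains, e.g. from tools, reuse, process */
        const double factors[] = { 2.0, 2.0, 2.5 };
        double combined = 1.0;
        for (int i = 0; i < 3; i++)
            combined *= factors[i];
        printf("no single 10x factor, but combined: %.1fx\n", combined);
        return 0;
    }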

As recently as a few weeks ago, I saw more than one order of magnitude improvement in development productivity attributed to the adoption of a particular set of process improvements. Without even trying to sort out whether manipulating a single factor produced this effect or more than one, I felt confident that either the presenter or Brooks was wrong. My money's on the presenter.

These people are typically not trying to pull the wool over your eyes; they believe what they're saying. Consider the many people still beating the "defect phase containment will save you orders of magnitude in effort" drum. Today, this is probably true only at the extremes: really large projects, projects with really stringent quality requirements, or projects staffed with unusually bad teams. Counterevidence is ubiquitous, but people still tell this story. As professionals, we have a responsibility to sort the wheat from the chaff. Brooks's conjecture is a great tool to bring to bear in this effort.

Brooks supports his conjecture with an inspired discussion that divides the world of software development into accident and essence:

"The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of function. The essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed. ... I believe the hard part of building software to be the specification, design and testing of this conceptual construct, not the labor or representing it and testing the fidelity on the representation."

This is the essence; the accident is everything else involved in software development. The details of the programming language, the configuration management, the modeling language, the packaging, the documentation tools, the libraries, the build tools, and so on are all accidental work in software development. Clearly there's lots of accidental work. Here's why what Brooks is saying makes so much sense: no single area of software development is so badly burdened by accidental work that improving it alone can yield an order-of-magnitude improvement in overall productivity, reliability, or simplicity.

My other realization from this reading is that, while I have never characterized myself (as so many do) as a C guy, a C++ guy, a Java guy, or what have you, I have most definitely prided myself on being a good generalist with a decent background in programming languages, build environments, libraries, installers, and operating systems.

The idea of the essence and accident of software development makes plain where continuing study has the most effect. Improving skills in the accidents of development can only benefit particular software niches, whereas improving skills in the essence of development necessarily benefits all domains.

Ed Willis works in telecommunications.

Producer's Note: Check out this discussion on The Mythical Man-Month that took place on java.net in May. The discussion has since been archived but there are many interesting posts and opinions.

