The Mythical Man-Month Revisited

by Ed Willis
06/17/2004

Surely everyone in development has heard of The Mythical Man-Month if for no other reason than its presentation of Brooks' Law: "Adding manpower to a late project makes it later." Having finally read it in its entirety, though, I can say that it's like a time capsule — simultaneously you can see just how much the field has changed since the original writing and just how much has stayed stubbornly the same.

I have had a comparatively recent introduction to the field of software development, being only eight or so years out of school. Excepting my efforts in shell scripting to get in touch with my ancient Unix ancestors, I really know nothing about the history of my profession and how it was for those who worked through the early days that defined our industry. Brooks' writing style is fairly dry and formal — seemingly the prose comes to the reader from the time of Arthur Conan Doyle rather than from the time of the Rolling Stones — and this serves to intensify the perception of antiquity.

To say that this book has deepened my appreciation of just how much I take for granted in my field and how hard-won all those little gains were would be to understate the matter.

It was hard to keep my thoughts on what I was reading, as my reactions to the material took off in my mind. That said, I might characterize my varied reactions as:

  • Impudent reactions to the modus operandi of the elder days of yore.
  • Recognizing ideas that foreshadowed significant advents in the field later on.
  • Recognizing ideas that time has not been overly kind to.
  • Reacting to the essential ideas Brooks presents.

And so, without further ado ...

Impudent Reactions to the Modus Operandi of the Elder Days of Yore

Ultimately, most of the amusing excerpts I've singled out below trace back to assumptions regarding the existence of a technique, process, or technology that we take for granted now but which had yet to be invented or was in its infancy at the time of the writing. The field of software development, after all, started out not so long ago with only some hardware and some mathematics before it blossomed into a field full of craft and its own science. Brooks wrote from a time far closer to those origins. In a sense, that these things below seem funny now is an indication of how far the field has come.

I'm not likely doing myself any favors etching these in electronic stone. I can already hear the condemnation of the older professors I've worked with ringing in my ears as I write this. But the truth is, this book had me in hysterics so frequently that to avoid this aspect would seem utterly remiss to me. For example:

"Teams building new supervisors or other system-heart software will of course need machines of their own. Such systems will need operators and a system programmer or two who keeps the standard support on the machine current and serviceable."

I just bet this is the root of all my problems — I have not one but two machines all to myself at work. Do I have any systems programmers or operators? Not a one. It's a miracle I can accomplish anything at all, under the circumstances.

Regarding source code documentation:

"The most serious objection is the increase in the size of the source code that must be stored. As the discipline moves more and more toward on-line storage of source code, this has become a growing consideration. I find myself being briefer in comments to an APL program, which will live on disk, then on a PL/I one that I will store as cards."

For who among us is this not true? Honestly, you just can't shut me up on cards.

On estimation:

"My guideline in the morass of estimating complexity is that compilers are three times as bad as normal batch application programs, and operating systems are three times as bad as compilers."

Pretty much everything I've ever worked on would be "batch application programs" then. At least they're only one ninth as bad as operating systems.

"Consider the IBM APL interactive software system. It rents for $400 per month and, when used, takes at least 160K bytes of memory. On a Model 165, memory rents for about $12 per kilobyte per month. If the program is available full-time, one pays $400 software rent and $1920 memory rent for using the program."

I sure hope someone's been paying my software and memory rent. I haven't been, at least the last little while.

"... we went to allocating machine time in substantial blocks. The whole 15-man sort team, for example, would be given a system for a four-to-six-hour block."

A 15-man sort team — a whole baseball team, with pinch-hitters and relievers — for sorting!

"Operating systems, loudly decried in the 1960s for their memory and cycle costs, have proved to be an excellent form in which to use some of the MIPS and cheap memory bytes on the past hardware surge."

Is the jury really in on operating systems? Let's not be too hasty now. The irony of the situation is that, at least in embedded systems, this question is still very much up for debate.

"Many users now operate their own computers day in and day out on varied applications without ever writing a program. Indeed, many of these users cannot write new programs for their machines, but they are nevertheless adept at solving new problems with them."

Strange but true. They're also pretty good at causing new problems with them, in my experience.

At various points through the book Brooks discusses the effective use of the "transient area" and how vital an issue it is to the success of any project. I was, of course, deeply alarmed to discover that I had no idea what he was talking about. At the end of the book, Brooks, writing much closer to the present day, critiques his earlier work and revisits this point:

"The size of the transient area, hence the amount of program per disk fetch, is a crucial decision, since performance is a super-linear function of that size. [This whole decision has been obsoleted, first by virtual memory, then by cheap real memory. Users now typically buy enough real memory to hold all the code of major applications.]"

I feel much better now.

Ideas That Foreshadowed Significant Advents in the Field

The earlier section aside, there is much to recommend in this book. Brooks is a smart guy looking critically at his field and finding much opportunity for improvement. Many later development practices embody his observations.

"The sooner one puts the pieces together, the sooner the system bugs will emerge. Somewhat less sophisticated is the notion that by using the pieces to test each other, one avoids a lot of test scaffolding. Both of these are obviously true, but experience shows that they are not the whole truth — the use of clean, debugged components saves much more time in system testing than that spent on scaffolding and thorough component test."

The key distinction here for me is the notion of the components being tested and debugged in isolation, separated from the rest of the system. This is essentially the same technique as is practiced in Extreme Programming (XP), that is, XUnit-style unit testing.
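
For readers who haven't bumped into the style, here is a minimal sketch of what that kind of isolated component test looks like. It assumes JUnit 4 conventions, and the little Accumulator component is invented purely for illustration; the point is only that the component is exercised entirely on its own, before it ever meets the rest of the system.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // A tiny invented component, debugged in isolation -- the "clean,
    // debugged component" Brooks describes.
    class Accumulator {
        private int total = 0;

        void add(int amount) {
            if (amount < 0) {
                throw new IllegalArgumentException("amount must be non-negative");
            }
            total += amount;
        }

        int total() {
            return total;
        }
    }

    // The scaffolding: each test stands alone, needs no other subsystem,
    // and records one expectation about the component's behavior.
    public class AccumulatorTest {

        @Test
        public void startsAtZero() {
            assertEquals(0, new Accumulator().total());
        }

        @Test
        public void sumsTheAmountsItIsGiven() {
            Accumulator acc = new Accumulator();
            acc.add(3);
            acc.add(4);
            assertEquals(7, acc.total());
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativeAmounts() {
            new Accumulator().add(-1);
        }
    }

By the time integration rolls around, a component like this arrives with its behavior already pinned down, which is exactly the trade Brooks is weighing above.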

In XP, as well, adherents argue that creating these separate test scaffoldings for each component costs less than fixing component-level defects at integration time. In fact, XP's argument is even stronger: in a highly evolutionary and iterative model, these test sets are part of the weight that gets moved around when it comes time to make new changes. Still, it's telling that the value of this approach to the creation of quality was evident to Brooks 15 years or so in advance of XP. In another spot, Brooks mentions:

"... interesting results show that three times as much progress in interactive debugging is made on the first interaction of each session as on subsequent interactions."

The real irony here is that his main point is that debugging sessions should be short, whereas the XUnit take on this would be that the value of debugging drops dramatically as the developer does it, so we should aim to do as little of it as possible. This is certainly one of the motivations of XUnit testing. It's unsurprising that this idea would have eluded Brooks, as the interactivity of the short think-code-test-debug cycle upon which XUnit rests was a long way off in the future when Brooks wrote this.

"For picking milestones there is only one relevant rule. Milestones must be concrete, specific, measurable events, defined with knife-edge sharpness. Coding, for example, is '90 percent finished' for half of the total coding time. Debugging is '99 percent complete' most of the time. 'Planning complete' is an event one can proclaim almost at will."

The first time I encountered similar statements was in Steve McConnell's Rapid Development. As Brooks himself points out, software is ethereal "thought-stuff." Many things come out of this intangibility. William Burroughs talks about the inability to fake certain things (a good meal, for example). The only milestone that is like this in software development is the release itself. All others may be fake, and the further they are from the release, the easier they are to fake. This is all the more reason to be careful in defining and evaluating non-implementation milestones.

Going further, many Agile enthusiasts want to do away with non-implementation phases of development (or at least reduce the effort put into non-implementation work products) and put the emphasis much more squarely on the implementation itself.

Regarding non-implementation work products, especially documentation, Brooks wrote:

"A basic principle of data processing teaches the folly of trying to maintain independent files in synchronization ... Yet our practice in programming documentation violates our own teaching. We typically attempt to maintain a machine-readable form of a program and an independent set of human-readable documentation, consisting of prose and flowcharts ... The solution, I think, is to merge the files, to incorporate the documentation in the source program."

Obviously this foreshadows the movement toward placing ever greater emphasis on internal documentation as is witnessed in the advent of Javadoc, Doxygen, and the remaining host of source code documentation tools. I see these efforts leading to the minimization of external design artifacts as these tools make it tempting to forgo formal design documents altogether in favor of embedding the design documentation in the code itself.
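
As a small, hypothetical illustration of what that merging looks like in practice, consider a class whose design rationale lives in Javadoc comments right beside the code it describes (the Retrier class and its behavior are invented for the example):

    /**
     * Retries a unit of work a bounded number of times.
     *
     * <p>Design note: callers decide what is worth retrying; this class only
     * supplies the retry loop. Keeping that rationale here, beside the code,
     * is precisely the merging of files Brooks argues for -- the prose and
     * the program can no longer drift out of synchronization.</p>
     */
    public final class Retrier {

        private final int maxAttempts;

        /**
         * @param maxAttempts how many times to try before giving up; must be at least 1
         */
        public Retrier(int maxAttempts) {
            if (maxAttempts < 1) {
                throw new IllegalArgumentException("maxAttempts must be at least 1");
            }
            this.maxAttempts = maxAttempts;
        }

        /**
         * Runs the given work, retrying on failure up to the configured limit.
         *
         * @param work the action to attempt
         * @return true if some attempt succeeded, false if every attempt failed
         */
        public boolean run(Runnable work) {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    work.run();
                    return true;
                } catch (RuntimeException e) {
                    // swallow the failure and try the next attempt
                }
            }
            return false;
        }
    }

Running the javadoc tool over a file like this generates the human-readable reference directly from the same artifact the compiler reads, so the prose and the program cannot fall out of step in the way Brooks warns about.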

XUnit-style testing reduces the need for requirements documents in a similar, though less dramatic fashion; unit requirements, at least, exist in an executable and readable form in the unit tests themselves. One motivation for both of these techniques is that, of all the work products we can create, we can't avoid creating and maintaining the implementation itself. Let's load the implementation with as much value as humanly possible. That Brooks saw the value of some of these possibilities so early is encouraging.
