Archive

Posts Tagged ‘schedule’

Seven Unsurprising Findings

In the National Academies Press’s “Summary of a Workshop on Software-Intensive Systems and Uncertainty at Scale”, the Committee on Advancing Software-Intensive Systems Producibility lists seven findings from a review of 40 DoD programs.

  1. Software requirements are not well defined, traceable, and testable.
  2. Immature architectures; integration of commercial-off-the-shelf (COTS) products; interoperability; and obsolescence (the need to refresh electronics and hardware).
  3. Software development processes that are not institutionalized, have missing or incomplete planning documents, and inconsistent reuse strategies.
  4. Software testing and evaluation that lacks rigor and breadth.
  5. Lack of realism in compressed or overlapping schedules.
  6. Lessons learned are not incorporated into successive builds—they are not cumulative.
  7. Software risks and metrics are not well defined or well managed.

Well gee, do ya think they missed anything? What I’d like to know is what, if anything, they found right with those 40 programs. Anything? Maybe that would help more than ragging on the same issues that have been ragged on for 40 years.

My fave is number five (with number one a close second). When schedules concocted by non-technical managers, without any historical backing or input from the people who will be doing the work, are publicly promised to customers, how can anyone in their right mind assert that they’re “realistic”? The funny thing is, it happens all the time with nary a blink – until the fit hits the shan, of course. D’oh!

Meeting a schedule that’s based on historically tracked data and input from team members is challenging enough. Casting an unsubstantiated schedule in stone, without an explicit policy of periodically reassessing it against the knowledge and learning acquired as the project progresses, is pure insanity. Same old, same old.
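To make that periodic reassessment concrete, here’s a minimal sketch (in Python) of re-forecasting a completion date from measured progress. The task counts and elapsed time are invented for illustration; they’re not from any real project.

```python
from datetime import date, timedelta

# Invented numbers for illustration only.
tasks_total = 120      # tasks currently known (popups get added as they surface)
tasks_done = 45        # tasks actually completed so far
elapsed_weeks = 10     # calendar weeks burned so far

# Use measured throughput, not the throughput someone promised a customer.
velocity = tasks_done / elapsed_weeks                    # tasks per week
remaining_weeks = (tasks_total - tasks_done) / velocity

forecast = date.today() + timedelta(weeks=remaining_weeks)
print(f"Measured velocity: {velocity:.1f} tasks/week")
print(f"Re-forecast completion date: {forecast}")
```

Rerun it with fresh numbers at every increment. The arithmetic is trivial; the policy of actually doing it is the hard part.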

I love deadlines. I like the whooshing sound they make as they fly by. – Douglas Adams

Fudge Factors

This graphic from Steve McConnell’s “Software Estimation” shows some of the fudge factors that should be included in project cost estimates. Of course, they never are included, right?
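For flavor, here’s a hypothetical sketch of what applying such adjustments could look like. The factor names and values below are placeholders I made up, not McConnell’s actual list or numbers.

```python
# Hypothetical fudge factors -- placeholders, not McConnell's actual figures.
base_estimate_staff_months = 20.0

fudge_factors = {
    "requirements churn":            1.15,
    "integration and defect repair": 1.20,
    "documentation and reviews":     1.10,
    "meetings, vacation, overhead":  1.20,
}

# Apply each multiplier to the naive base estimate.
adjusted = base_estimate_staff_months
for reason, factor in fudge_factors.items():
    adjusted *= factor

print(f"Base estimate:     {base_estimate_staff_months:.1f} staff-months")
print(f"Adjusted estimate: {adjusted:.1f} staff-months")
```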

Holy cow, what a coincidence! I happened to stumble upon this mangled version of Mr. McConnell’s graphic somewhere online. D’oh!

What Really Happens…..

May 19, 2011

The Boundary

March 29, 2011

Mr. Watts Humphrey’s final book, titled “Leadership, Teamwork, and Trust: Building a Competitive Software Capability”, was recently released and I’ve been reading it online. Since I’m in the front end of the book, before the TSP/PSP crap, I mean “stuff”, is placed into the limelight for sale, I’m enjoying what Watts and co-author James W. Over have written about the 21st-century “management of knowledge workers problem”. Knowledge workers manipulate knowledge in the confines of their heads to create new knowledge. Physical laborers manipulate material objects to create new objects. Since, unlike physical work, knowledge work is invisible, Humphrey and Over (rightly) assert that it can’t be managed by traditional, early-20th-century management methods. In their own words:

Knowledge workers take what is known, and after modifying and extending it, they combine it with other related knowledge to actually create new knowledge. This means they are working at the boundary between what is known and what is unknown. They are extending our total storehouse of knowledge, and in doing so, they are creating economic value. – Watts Humphrey & James W. Over

But Watts and Over seem inconsistent to me (and it’s probably just me). They talk about the boundary ’tween the known and the unknown, yet they advocate heavyweight pre-planning of tasks down to the 10-hour level of granularity. When you know in advance that you’ll be spending a large portion of your time exploring and fumbling around in unknown territory, it’s delusional for others who don’t have to do the work themselves to expect you to chunk and pre-plan your tasks in 10-hour increments, no?

Nothing is impossible for the man who doesn’t have to do it himself. – A. H. Weiler

The Wevo Approach

February 11, 2011

The figure below shows an example of a one-size-fits-all, waterfall schedule template that’s prevalent at many old-school software companies. It sure looks nice, squeaky clean, and controllable, but as everyone knows, it’s always wrong. Out of fear or apathy, almost no one speaks out against this “best practice”, but those who do are quickly slapped down by the anointed controllers and meta-controllers of the project.

A more insidious, micro-grained version of this waterboarding fiasco is shown below. It’s a self-medicating attempt to amplify the illusion of control over the execution of the project. Since schedules are concocted before an architecture or design has been reasonably sketched out, and no one can possibly know up front what all the micro-tasks are, let alone how long they’ll take (unless the project is to dig ditches), it’s monstrously wrong too. But shush, don’t say a word.

Once a monstrosity like this is baked into a huge Microsoft Project file or company-proprietary scheduling document, those who conjured up the camouflage auto-become loath to modify it, even as the situation dynamically changes during the death march. Once the project starts churning, new unforeseen “popup” tasks emerge and some pre-planned micro-tasks become obsolete. These events disconnect the schedule from reality quicker than you can say “WTF?”.

Moving on to a sunnier disposition, the template below shows a more “sane”, but not infallible, method of scheduling. It’s a model of the incremental “evo” strategy that I first stumbled upon from Tom Gilb – a bazillion years before the agile movement rose to prominence. In the evo(lutionary) approach, stable working software becomes visible early with each RDCT cycle, and it grows and matures as the messy (it’s always messy) project lurches forward.

The figure below shows a tweaked version of the evo model. It’s a hybrid concoction of the waterboard and evolutionary development approaches – the “wevo”. Some upfront requirements and architecture exploration/definition/specification is performed by the elected team technical leaders before staffing up for the battle against the possibility of building a BBoM. The purpose of the upfront requirements and architecture efforts is to address major cross-cutting concerns and establish contextual boundaries – before letting the dogs loose.
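As a rough sketch only (assuming RDCT stands for the requirements-design-code-test activities of each cycle, and with the increment count pulled out of thin air), the wevo sequencing might be outlined like this:

```python
# Rough outline of the "wevo" sequencing described above.
# Phase names and the number of increments are illustrative assumptions.

def wevo_outline(num_increments=4):
    steps = ["Upfront: cross-cutting requirements and architecture exploration/definition"]
    for i in range(1, num_increments + 1):
        for phase in ("Requirements", "Design", "Code", "Test"):
            steps.append(f"Increment {i}: {phase}")
        steps.append(f"Increment {i}: stable, working, demonstrable software")
    return steps

for step in wevo_outline():
    print(step)
```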

Of course, the wevo approach is not enough. Another necessary but insufficient requirement is that the team leaders dive into the muck with the “coders” after the cross-cutting requirements and architecture definition activities have produced a stable, understandable blueprint. No jargon-spewing software “rocketects” or “pure” software project leads allowed – everyone gets dirty – and for the duration.

Are You Still Working On That?

December 7, 2010

It’s funny enough when you work for a one-dimensional manager (one dimension = schedule), but it’s even funnier when another 1D manager who has nothing to do with your project stops by to chit-chat and he/she inevitably asks you:

Are you still working on that?

LOL! Even though he/she is 1D and has no idea what it takes (or should take) to finish a project, the question can be interpreted as: since you’re not done, you’re lazy or you’re screwing up.

When the question pops up, try this Judo move:

Should I be done? How long should it have taken?

Or, you can be really nasty and retort with:

Yes I am still working on it. Sorry, but it’s not a shallow and superficial management task like signing off on a document I haven’t read or attending an agenda-less meeting that I could check off on my TODO list.

Come on, I dare you.

Estimation Deflation

October 15, 2010

The best book I’ve read to date on the topic of software effort and schedule estimation is Steve McConnell’s “Software Estimation: Demystifying the Black Art”. According to Mr. McConnell, two large influences on the amount of work required to develop a non-trivial piece of software are “size” and “kind”. Regardless of the units of measure (use cases, user stories, function points, Lines Of Code, etc.), the greater the “size”, the greater the amount of work required to build the thang. Similarly, the harder “kinds” are associated with lower productivity than the simpler “kinds”.

In his book, McConnell provides the following handy, industry-data-backed “kinds” vs. “productivity” table that’s parameterized by “size” (in Lines Of Code (LOC)). Note that the “kinds” are sort of arbitrary and by no means an industry standard.

The Real-Time, 10K-100K LOC entry is circled because that’s the type and typical size of software that I specify/design/write. Note the huge 15-to-1 range of productivity for the type. Also note that the table contains large ranges of productivity for all the kind-size entries. Hint, hint: estimating is hard.

Ideally, for pseudo-accurate planning purposes, a software development org maintains its own table (see the bogus example below) with real, measured numbers for the sizes of the CSCIs (Computer Software Configuration Items) that its DICs have created.
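Here’s a small sketch of what deriving such a table entry from measured history might look like. Every number in it is invented, and the “kind” label is just whatever taxonomy the org settles on.

```python
# Bogus, org-specific history -- every number here is invented for illustration.
# Each entry: (kind, measured size in LOC, measured effort in staff-months).
past_projects = [
    ("real-time", 42_000,  70),
    ("real-time", 18_000,  36),
    ("real-time", 65_000, 130),
]

# Productivity = delivered LOC per staff-month, computed from actuals.
productivities = [loc / months for _, loc, months in past_projects]
low, high = min(productivities), max(productivities)

new_size_loc = 50_000  # size estimate for the next CSCI
print(f"Measured productivity range: {low:.0f} to {high:.0f} LOC/staff-month")
print(f"Effort range for {new_size_loc:,} LOC: "
      f"{new_size_loc / high:.0f} to {new_size_loc / low:.0f} staff-months")
```

Note that even an honest, measured table yields a range, not a single magic number.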

Of course, for a variety of cultural, competence, and social reasons, a lot of orgs don’t measure or maintain a custom productivity table. Thus, estimators are forced to pull numbers out of their arses, and anyone’s productivity estimate is as bad as anyone else’s. Everyone who wasn’t born yesterday knows that the pressure to use ridiculously high productivity numbers in work estimates pervades the ether in most orgs. Even when some FAI bucks the trend and withstands the looks and sound bites of disdain for conjuring up a work estimate that is perceived by the management chain as “too high”, the final estimates that show up on “approved” schedules are magically deflated to what is wanted by some clueless BM, SCOL, or CGH.
