
Posts Tagged ‘Steve McConnell’

Under Or Over?

October 28, 2012

In general, humans suck at estimating. In particular, without historical data, software engineers suck at estimating both the size of a product and the effort required to build it – a double whammy. Thus, the bard is wrong. The real question worth pondering is:

To underestimate or overestimate, that is the question.

In “Software Estimation: Demystifying the Black Art”, Steve McConnell boldly answers the question with this graph:

As you can see, the fiscal penalty for underestimation rockets out of control much faster than the penalty for overestimation. In summary, once a project slips into “late” status, the team gets dragged into numerous activities it wouldn’t need to engage in had the effort been overestimated: more status meetings with execs, apologies, triaging of requirements, fixing bugs from quick-and-dirty workarounds implemented under schedule duress, postponing demos, pulling out of trade shows, more casual overtime, etc.
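The graph itself isn’t reproduced here, but the asymmetry is easy to play with. Here’s a toy sketch – the cost curves below are my made-up stand-ins, not Steve’s data – contrasting a roughly linear overestimation penalty (Parkinson’s Law eats the padding) with a compounding underestimation penalty:

```python
# Toy model of asymmetric estimation-error cost. Both curves are
# hypothetical illustrations, NOT data from McConnell's book.
# "error" is expressed as a fraction of the true effort.

def overestimation_penalty(padding):
    """Overestimate by `padding` and, per Parkinson's Law, roughly that
    much extra effort gets consumed anyway -- about linear growth."""
    return padding

def underestimation_penalty(shortfall):
    """Underestimate by `shortfall` and the late-project churn (status
    meetings, triage, rework) compounds -- assume super-linear growth."""
    return shortfall + 2.0 * shortfall ** 2

for err in (0.1, 0.2, 0.4, 0.8):
    print(f"error {err:>4.0%}:  overestimate ~{overestimation_penalty(err):.0%} extra cost, "
          f"underestimate ~{underestimation_penalty(err):.0%} extra cost")
```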

So, now that the under/over question has been settled, what question should follow? How about this scintillating selection:

Why do so many orgs shoot themselves in the foot by perpetuating a culture where underestimation is the norm and disappointing schedule/cost performance reigns supreme?

Of course, BD00 has an answer for it (cuz he’s got an answer for everything):

Via the x-ray power of POSIWID, it’s simply what hierarchical command and control social orgs do; and you can’t ask such an org to be what it ain’t.

OMITTED ACTIVITIES!

The best book I’ve read (so far) on software estimation is Steve McConnell’s “Software Estimation: Demystifying the Black Art”. Steve is one of the most pragmatic technical authors I know. His whole portfolio of books is worth delving into.

Prior to describing many practical and “doable” estimation practices, Steve presents a dauntingly depressing list of estimation error sources:

  • Unstable requirements
  • Unfounded optimism
  • Subjectivity and bias
  • Unfamiliar application domain area
  • Unfamiliar technology area
  • Incorrect conversion from estimated time to project time (for example, assuming the project team will focus on the project eight hours per day, five days per week)
  • Misunderstanding of statistical concepts (especially adding together a set of “best case” estimates or a set of “worst case” estimates – see the sketch after this list)
  • Budgeting processes that undermine effective estimation (especially those that require final budget approval in the wide part of the Cone of Uncertainty)
  • Having an accurate size estimate, but introducing errors when converting the size estimate to an effort estimate
  • Having accurate size and effort estimates, but introducing errors when converting those to a schedule estimate
  • Overstated savings from new development tools or methods
  • Simplification of the estimate as it’s reported up layers of management, fed into the budgeting process, and so on
  • OMITTED ACTIVITIES!
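A quick aside on that “misunderstanding of statistical concepts” bullet – here’s a little Monte Carlo sketch (the task numbers are hypothetical, not from the book) showing why the sum of a bunch of best-case task estimates is a near-impossible project estimate:

```python
# Monte Carlo sketch: 10 hypothetical tasks, each with a best case of
# 5 days, most likely 10, worst case 25 (triangular distribution). The
# sum of the best cases is 50 days -- see how often the project beats it.
import random

BEST, LIKELY, WORST = 5, 10, 25
N_TASKS, TRIALS = 10, 100_000

totals = [sum(random.triangular(BEST, WORST, LIKELY) for _ in range(N_TASKS))
          for _ in range(TRIALS)]
sum_of_best_cases = BEST * N_TASKS

wins = sum(1 for t in totals if t <= sum_of_best_cases)
print(f"Sum of best cases:        {sum_of_best_cases} days")
print(f"Simulated project mean:   {sum(totals) / TRIALS:.1f} days")
print(f"Trials beating that sum:  {wins} out of {TRIALS}")
```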

But wait! We’re not done. That last screaming bullet, OMITTED ACTIVITIES!, needs some elaboration:

  • Glue code needed to use third-party or open-source software
  • Ramp-up time for new team members
  • Mentoring of new team members
  • Management coordination/manager meetings
  • Requirements clarifications
  • Maintaining the scripts required to run the daily build
  • Participation in technical reviews
  • Integration work
  • Processing change requests
  • Attendance at change-control/triage meetings
  • Maintenance work on previous systems during the project
  • Performance tuning
  • Administrative work related to defect tracking
  • Learning new development tools
  • Answering questions from testers
  • Input to user documentation and review of user documentation
  • Review of technical documentation
  • Reviewing plans, estimates, architecture, detailed designs, stage plans, code, test cases
  • Vacations
  • Company meetings
  • Holidays
  • Sick days
  • Weekends
  • Troubleshooting hardware and software problems

It’s no freakin’ wonder that the vast majority of software-intensive projects are underestimated, no? To add insult to injury, the unspoken pressure from the “upper layers” to underestimate the activities that ARE actually included in a project plan seals the deal for “perceived” future failure, no? It’s also no wonder that after a few years, good technical people who feel that hands-on creative work is their true calling start agonizing over whether to get the hell out of such a failure-inducing system and make the move on up into the world of politics, one-upmanship, feigned collaboration, dubious accomplishment, and strategic self-censorship. Bummer for those people and the orgs they dwell in. Bummer for “the whole”.

Fudge Factors

This graphic from Steve McConnell’s “Software Estimation” shows some of the fudge factors that should be included in project cost estimates. Of course, they never are included, right?
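The graphic isn’t reproduced here, so here’s a toy sketch of the general idea – the factor names and multipliers are my hypothetical stand-ins, not the values from the book – showing how the applicable fudge factors compound on top of a nominal estimate:

```python
# Hypothetical fudge-factor adjustment of a nominal cost estimate.
# Factor names and multipliers are illustrative stand-ins, not the
# published values from "Software Estimation".
from math import prod

nominal_staff_hours = 2_000

fudge_factors = {                 # multipliers > 1.0 inflate the estimate
    "requirements volatility": 1.25,
    "unfamiliar technology":   1.15,
    "multi-site team":         1.10,
}

overall = prod(fudge_factors.values())
adjusted = nominal_staff_hours * overall
print(f"nominal:  {nominal_staff_hours:,} staff-hours")
print(f"adjusted: {adjusted:,.0f} staff-hours (x{overall:.2f} overall)")
```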

Holy cow, what a coincidence! I happened to stumble upon this mangled version of Mr. McConnell’s graphic somewhere online. D’oh!

Fierce Protection

December 6, 2010

Delicious, just delicious. Pitches from Fred Brooks, Scott Berkun, Tom DeMarco, Tim Lister, and Steve McConnell all in one place: the Construx (McConnell’s company) Software Executive Summit. You can download them from here: Summit Materials.

Here’s a snapshot of one of Fred Brooks’s slides that struck me as paradoxical:

So… who’s the “we” that Fred is addressing here, and what’s the paradox? I’m pretty sure that Fred is addressing managers, right? The paradox is that he’s admonishing managers to protect great designers from… managers. WTF?

But wait, I think I get it now. Fred is telling PHOR managers to “fiercely” protect designers from Bozo Managers (but in a non-offensive and politically correct way, of course). Alas, the fact that this slide appears at all in Fred’s deck implies that PHORs are rare and BMs are plentiful, no?

How do you interpret this slide?

An Estimation Example

October 17, 2010

The figure below shows the derivation of an estimate of work in staff-hours to design/develop/test a Computer Software Configuration Item (CSCI) named YYYY. The estimate is based on the size of an existing CSCI named XXXX and the productivity numbers assigned to the “Real Time” category of software from the productivity chart in Steve McConnell’s “Software Estimation: Demystifying the Black Art”.

Of course, the simple equation used to compute the effort and all of the variables in it can be challenged, but would challenging them improve the accuracy of the range of estimates?
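The figure isn’t reproduced here, but the arithmetic behind it is dead simple. Here’s a sketch of the same derivation with placeholder numbers – XXXX’s size, the size ratio, and the “Real Time” productivity bounds below are made up, not the actual values from the chart:

```python
# Sketch of the YYYY effort estimate. Every number here is a placeholder;
# substitute XXXX's measured size and the low/high "Real Time"
# productivity values from McConnell's chart.
SIZE_XXXX_LOC         = 30_000    # measured size of the existing CSCI XXXX
SIZE_RATIO            = 1.2       # judged size of YYYY relative to XXXX
HOURS_PER_STAFF_MONTH = 152       # assumed effective hours per staff-month

# Low/high productivity (LOC per staff-month) for the "Real Time" kind
# at this size range -- hypothetical bounds.
PRODUCTIVITY_LOW, PRODUCTIVITY_HIGH = 100, 1_500

size_yyyy   = SIZE_XXXX_LOC * SIZE_RATIO
effort_high = size_yyyy / PRODUCTIVITY_LOW  * HOURS_PER_STAFF_MONTH
effort_low  = size_yyyy / PRODUCTIVITY_HIGH * HOURS_PER_STAFF_MONTH

print(f"Estimated YYYY size:    {size_yyyy:,.0f} LOC")
print(f"Estimated effort range: {effort_low:,.0f} to {effort_high:,.0f} staff-hours")
```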

Estimation Deflation

October 15, 2010

The best book I’ve read to date on the topic of software effort and schedule estimation is Steve McConnell’s “Software Estimation: Demystifying the Black Art”. According to Mr. McConnell, two large influences on the amount of work required to develop a non-trivial piece of software are “size” and “kind”. Regardless of the units of measure (use cases, user stories, function points, Lines Of Code, etc.), the greater the “size”, the greater the amount of work required to build the thang. Similarly, the harder “kinds” are associated with lower productivity than the simpler “kinds”.

In his book, McConnell provides the following handy, industry-data-backed “kinds” vs. “productivity” table that’s parameterized by “size” (in Lines Of Code (LOC)). Note that the “kinds” are sort of arbitrary and by no means an industry standard.

The Real-Time, 10K-100K LOC entry is circled because that’s the type and typical size of software that I specify/design/write. Note the huge 15-to-1 range of productivity for the type. Also note that the table contains large ranges of productivity for all the kind-size entries. Hint, hint: estimating is hard.

Ideally, for pseudo-accurate planning purposes, a software development org maintains its own table (see the bogus example below) with real, measured numbers for the sizes of the CSCIs (Computer Software Configuration Items) that its DICs have created.
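The bogus example table isn’t shown here, but its shape is simple enough. Below is a hypothetical stand-in – kinds, size bands, and productivity numbers all invented – for what an org-specific lookup might look like:

```python
# Hypothetical org-specific productivity table: (kind, size band) ->
# measured (low, high) productivity in LOC per staff-month.
# Every number below is invented for illustration.
ORG_PRODUCTIVITY = {
    ("real-time",        "10K-100K LOC"): (120,   900),
    ("embedded systems", "10K-100K LOC"): (150, 1_100),
    ("business systems", "10K-100K LOC"): (800, 6_000),
}

def effort_range_staff_months(kind, size_band, est_loc):
    """Convert an estimated size into a low/high effort range using the
    org's own measured productivity numbers."""
    low_prod, high_prod = ORG_PRODUCTIVITY[(kind, size_band)]
    return est_loc / high_prod, est_loc / low_prod

lo, hi = effort_range_staff_months("real-time", "10K-100K LOC", 40_000)
print(f"40 KLOC real-time CSCI: roughly {lo:.0f} to {hi:.0f} staff-months")
```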

Of course, for a variety of cultural, competence, and social reasons, a lot of orgs don’t measure or maintain a custom productivity table. Thus, estimators are forced to pull numbers out of their arses and anyone’s productivity estimate is as bad as anyone else’s. Everyone who wasn’t born yesterday knows that the pressure to use ridiculously high productivity numbers in work estimates pervades the ether in most orgs. Even when some FAI bucks the trend and withstands the looks and sound bites of disdain for conjuring up a work estimate that is perceived by the management chain as “too high”, the final estimates that show up on “approved” schedules are magically deflated to what is wanted by some clueless BM, SCOL, or CGH.

Reuse Based Estimation

October 5, 2010

“It’s called estimation, not exactimation” – Scott Ambler

One of the pragmatically simple, down-to-earth equations in Steve McConnell’s terrific “Software Estimation” defines the schedule for a new software development project in terms of past performance as:

Of course, in order to use the equation to compute a guesstimate, as the table below shows, you must have tracked and recorded past efforts along with the calendar times it took to get those jobs completed.
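Here’s a sketch of the idea with invented project-history rows. The cube-root scaling below is my assumption about the equation’s form (check the book for the exact formula); the structure – deriving a new schedule from a past project’s recorded effort and calendar time – is the point:

```python
# Sketch of schedule guesstimation from past performance. The project
# history rows are invented placeholders, and the cube-root scaling is
# an assumed form of the equation -- check the book for the exact one.
PAST_PROJECTS = [
    # (name, effort in staff-months, schedule in calendar months)
    ("Project A", 60, 11),
    ("Project B", 25,  8),
]

def estimated_schedule(new_effort_sm, past_effort_sm, past_schedule_months):
    """New schedule = past schedule * (new effort / past effort) ** (1/3)."""
    return past_schedule_months * (new_effort_sm / past_effort_sm) ** (1 / 3)

NEW_EFFORT_SM = 40  # guesstimated effort for the new project, staff-months

for name, effort, schedule in PAST_PROJECTS:
    est = estimated_schedule(NEW_EFFORT_SM, effort, schedule)
    print(f"Scaling from {name}: roughly {est:.1f} calendar months")
```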

Of course, not many orgs keep a running tab of past projects in an integrated, simple-to-use, easily accessible form like the above table, or do they? The info may actually be available someplace in the corpo data dungeon, but it’s likely fragmented, scattered, and buried within all kinds of different and incompatible financial forms and Microsoft Project files. Why is this the case? Because it’s a management task and thus no one’s responsible for doing it. In elegant corpo-speak, managers are responsible for “getting work done through others”. The catch phrase used to be “getting work done”, but to remove all ambiguity and increase clarity, the “through others” was cleverly or unconsciously tacked on.

How about you? How do you guesstimate effort and schedule?
