Archive

Posts Tagged ‘schedule’

Zero Time, Zero Cost

August 19, 2010 1 comment

In “The Politics of Projects”, Robert Block states that orgs “don’t want projects, they want products”. Thus, the left side of the graph below shows the ideal project profile: zero cost and zero time. A twitch of Samantha Stephens’s nose and, voila, a marketable product appears out of thin air and the revenue stream starts flowin’ into the corpo coffers.

To a first order (linear) approximation, all earthly product development orgs get one of the performance lines on the right side of the figure. There are so many variables involved in the messy and chaotic process from viable idea to product that it’s often a crapshoot to predict the slope and the time-to-100-percent-complete end point of the performance line:

  • Experience of the project team
  • Cohesiveness of the project team
  • Enthusiasm of the project team
  • Clarity of roles and responsibilities of each team member
  • Expertise in the product application domain
  • Efficacy of the development tool set
  • Quality of information available to, and exchanged between, project members
  • Amount and frequency of meddling from external, non-project groups and individuals
  • <Add your own performance influencing variable here>

To a second order approximation, the S-curve below shows real world project performance as a function of time. Unlike what the previous first order linear model implies, the slope of the performance trajectory (% progress per unit time) is not constant. It starts out small during the chaotic phase, increases during the productive stage, then decreases during the closeout phase. The objective is to minimize the time spent in phases P1, P2, and P3 without sacrificing quality or burning out the project team via overwork.
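For the mathematically inclined, here’s a minimal sketch of that second-order view. I’m assuming a logistic form for the S-curve; the midpoint and steepness knobs are invented stand-ins for how long the P1 chaos drags on and how productive P2 turns out to be.

```python
import math

def s_curve_percent_complete(t, t_mid=12.0, steepness=0.5):
    """Second-order progress model: slow start (P1), productive middle (P2),
    slow closeout (P3). The logistic form is an assumption, not gospel.

    t         -- elapsed time (say, weeks)
    t_mid     -- week at which the team hits ~50% complete (invented value)
    steepness -- how sharply the productive phase ramps up (invented value)
    """
    return 100.0 / (1.0 + math.exp(-steepness * (t - t_mid)))

def linear_percent_complete(t, t_total=20.0):
    """The first-order straight-line model, for comparison."""
    return min(100.0, 100.0 * t / t_total)

if __name__ == "__main__":
    for week in range(0, 25, 4):
        print(f"week {week:2d}: linear {linear_percent_complete(week):5.1f}%  "
              f"s-curve {s_curve_percent_complete(week):5.1f}%")
```

The thing to watch is the slope (% per week): tiny during P1, peaking around the midpoint, tiny again during P3.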

Assume (and it’s a bad assumption) that there’s an objective and accurate way of measuring “% complete” at any given time for a project. Now, assume that you’ve diligently tracked and accumulated a set of performance curves for a variety of large and small projects and a variety of teams over the years. Armed with this data and given a new project with a specific assigned team, do you think you could accurately estimate the time-to-completion of the new project? Why or why not?

Wishful And Realistic

August 11, 2010 2 comments

As software development orgs grow, they necessarily take on larger and larger projects to fill the revenue coffers required to sustain the growth. Naturally, before embarking on a new project, somebody’s gotta estimate how much time it will take and how many people will be needed to get it done in that guesstimated time.

The figure below shows an example of the dumbass linear projection technique of guesstimation. Given a set of past performance time-size data points, a wishful estimate for a new and bigger project is linearly extrapolated forward via a neat and tidy, mechanistic, textbook approach. Of course, BMs, DICs, and customers all know from bitter personal experience that this method is bogus. Everyone knows that software projects don’t scale linearly, but (naturally) no one speaks up out of fear of gettin’ their psychological ass kicked by the pope du jour. Everyone wants to be perceived as a “team” player, so each individual keeps their trap shut to avoid the ostracism, isolation, and pariah-dom that comes with attempting to break from clanthink unanimity. Plus, even though everyone knows that the wishful estimate is a hallucination, no one has a clue what it will really take to get the job done. Hell, no one even knows how to define and articulate what “done” means. D’oh! (Notice the little purple point in the lower right portion of the graph. I won’t even explain its presence because you can easily figure out why it’s there.)
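For the record, here’s roughly what the dumbass technique boils down to in code. This is a toy sketch; the past (size, duration) points are invented for illustration, not lifted from anybody’s real project history.

```python
# The "wishful" technique: fit a straight line through past (size, duration)
# points and extrapolate to the new, bigger project. Data points are invented.
past_sizes = [10, 25, 40, 60]       # project size (pick your favorite unit, e.g. KLOC)
past_durations = [2, 6, 11, 18]     # months to "done" (whatever that meant at the time)

n = len(past_sizes)
mean_x = sum(past_sizes) / n
mean_y = sum(past_durations) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_sizes, past_durations))
         / sum((x - mean_x) ** 2 for x in past_sizes))
intercept = mean_y - slope * mean_x

new_size = 150                      # the shiny new, much bigger hairball
wishful_estimate = intercept + slope * new_size
print(f"wishful linear estimate: {wishful_estimate:.1f} months")
```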

OK, you say, so what works better, Mr. Smarty-Pants? Since no one knows with any degree of certainty what it will take to “just get it done” (<- tough management speak – lol!), nothing really works in the absolute sense, but there are some techniques that work better than the standard wishful/insane projection technique. But of course, deviation from the norm is unacceptable, so you may as well stop reading here and go back about your b’ness.

One such better, but forbidden, way to estimate how much time is needed to complete a large hairball software development project is shown below. A more realistic estimate can be obtained by assuming that complexity, and the associated time-to-complete, grow exponentially with increasing project size. The trick is in conjuring up values for the constant K and exponent M. Hell, it requires trickery to even come up with an accurate estimate of the size of the project, be it function points, lines of code, number of requirements, or any other academically derived metric.
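Here’s a sketch of the forbidden alternative, reusing the same invented data points from the wishful sketch above. I’m reading “constant K and exponent M” COCOMO-style, as TTC = K * size^M; fitting it is just a straight-line fit in log-log space. Whether the resulting K and M mean anything for your shop is, of course, the real trick.

```python
import math

# Same invented (size, duration) points as the wishful sketch above.
past_sizes = [10, 25, 40, 60]
past_durations = [2, 6, 11, 18]

# Fit duration = K * size**M by least squares in log-log space:
#   log(duration) = log(K) + M * log(size)
log_x = [math.log(s) for s in past_sizes]
log_y = [math.log(d) for d in past_durations]
n = len(log_x)
mean_lx = sum(log_x) / n
mean_ly = sum(log_y) / n
M = (sum((lx - mean_lx) * (ly - mean_ly) for lx, ly in zip(log_x, log_y))
     / sum((lx - mean_lx) ** 2 for lx in log_x))
K = math.exp(mean_ly - M * mean_lx)

new_size = 150
power_law_estimate = K * new_size ** M
print(f"K = {K:.2f}, M = {M:.2f}")
print(f"power-law estimate: {power_law_estimate:.1f} months")
```

With these made-up numbers the gap between the two estimates is modest; the point is the shape of the curve, not the specific digits.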

An even more effective way of estimating TTC (time-to-complete) is to leverage the dynamic learning (gasp!) that takes place the minute the project execution clock starts tickin’. Learning? Leverage learning? No mechanistic equations based on unquantifiable variables? WTF is he talkin’ bout? He’s kiddin’, right?
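He’s not kiddin’. A minimal sketch of the idea, with invented numbers: every reporting period, throw out the original guess and re-project from what actually got done. Note that the naive run-rate below still quietly assumes the remaining work proceeds at the observed average pace, so treat it as a direction-finder, not an oracle.

```python
def reestimate(original_estimate_months, samples):
    """Revise the time-to-complete every period from actual progress.

    samples -- list of (elapsed_months, fraction_actually_complete) readings.
    Returns one revised total-duration estimate per reading. Naive run-rate
    projection: assumes the rest of the work goes at the average pace
    observed so far. All numbers here are invented for illustration.
    """
    revised = []
    for elapsed, fraction_done in samples:
        if fraction_done <= 0:
            revised.append(float(original_estimate_months))  # no learning yet
        else:
            revised.append(elapsed / fraction_done)
    return revised

if __name__ == "__main__":
    # Original wishful estimate: 12 months. Reality check every 2 months.
    samples = [(2, 0.10), (4, 0.22), (6, 0.38), (8, 0.55)]
    for (elapsed, done), new_total in zip(samples, reestimate(12, samples)):
        print(f"month {elapsed}: {done:.0%} done -> revised total ~ {new_total:.1f} months")
```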

Push Back

June 29, 2010 4 comments

Besides being volatile, unpredictable, and passionate, I “push back” against ridiculous schedules. While most fellow DICs passively accept hand-me-down schedules like good little children and then miss them by a mile, I rage against them and miss them by a mile. Duh, stupid me.

How about you? What do you do, and why?

A Contrast In Usability

March 15, 2010 Leave a comment

I own two wildly different books dedicated to the topic of software estimation: one by Steve McConnell and one by Richard Stutzke.

I have several of McConnell’s books and I think that he is a brilliant, understandable teacher of all things software. Steve’s concern for, and empathy towards, the layman software engineer shows. Stutzke, on the other hand, is an impressive equation-wielder and master complexity amplifier who seems more concerned with showing off his IQ to fellow elites than transmitting usable information to the dudes in the trenches. It could take more time to apply Stutzke’s methods to estimate the size and effort of a large software-intensive system than to build the actual system itself.

Since McConnell’s book is half the price of Stutzke’s, buy two of them – one for yourself and one for your manager. On second thought, give the second one to a colleague instead, since most managers, BM or otherwise, don’t read technical stuff. They also don’t believe in estimation. They delusionally believe in certainty so they can populate their massive and useless Microsoft Project files with exact numbers and never revise them until the fit hits the shan and it’s time to apportion blame to the DICforce.

Percent Complete

October 22, 2009 Leave a comment

In order to communicate progress to someone who requires a quantitative number attached to it, some sort of consistent metric of accomplishment is needed. The table below lists some of the commonly used size metrics in the software development world.

[Table: Common Metrics]

All of these metrics suffer to some extent from a “consistency” problem. The problem (as exemplified in the figure below) is that, unlike a standard unit such as the “meter”, the size and meaning of each unit varies from unit to unit within an application, and from application to application. Of all the metrics in the list, the definition of what comprises a “Function Point” unit seems to be the most rigorous, but it still suffers from a second, “translation” problem. The translation problem manifests when an analyst attempts to convert messy and ambiguous verbal/written user needs into neat and tidy requirement metrics using one of the units in the list.

[Figure: FP Sizes]

Nevertheless, numerically trained, MBA- and PMI-certified managers and their higher-up executive bosses still obsessively cling to progress reports based on these illusory metrics. These STSJs (Status Takers and Schedule Jockeys) love to waste corpo time passing around status reports built on quicksand, like the “percent done” example below.

[Figure: Percent Done]

The problems with using graphs like this to “direct” a project are legion. First, it is assumed that the TNFP (total number of function points) is known with high accuracy at t=0 and, more erroneously, that its value stays constant throughout the duration. A second problem with this “best practice” is that most, if not all, non-trivial software development projects do not progress linearly with the passage of time. The green trace in the graph is an example of a non-linearly progressing project.

Since most managers are sequential, mechanistic, left-brain-trained thinkers, they falsely conclude that all projects progress linearly. These bozelteens also operate under the meta-assumption that no initial assumptions are violated during project execution (regardless of what items they initially deposited in their “risk register” at t=0). They mistakenly arrive at conclusions like: “If it took you two weeks to get to 50% done, you will be expected to be done in two more weeks”. Bummer.
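To make the bozelteen arithmetic concrete, here’s a toy sketch with an invented, non-linear progress trace. The linear projection from the 50% point says week 4; the trace itself, with its long integration-and-test tail, says otherwise.

```python
# Invented weekly "% complete" readings for a non-linearly progressing project:
# fast early feature work, long slow integration/test tail.
actual_progress = {1: 20, 2: 50, 3: 65, 4: 75, 5: 82, 6: 88, 7: 93, 8: 97, 9: 99, 10: 100}

week, done = 2, actual_progress[2]
linear_projection = week * 100 / done      # "50% in 2 weeks, so done at week 4"
actual_finish = min(w for w, pct in actual_progress.items() if pct >= 100)

print(f"STSJ linear projection: done at week {linear_projection:.0f}")
print(f"actual finish:          week {actual_finish}")
```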

Even after trashing the “percent complete” earned-value-management method in the previous paragraphs, I think there is a chance to acquire a long-term benefit by tracking progress this way. The benefit can accrue IF AND ONLY IF the method is not taken too seriously and it’s not used to impose undue stress upon the software creators and builders who are trying their best to balance time, cost, and quality. Performing the “percent complete” method over a bunch of projects and averaging the results can yield decent, but never 100% accurate, metrics that can be used to more effectively estimate future project performance. What do you think?
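Before you answer, here’s a toy sketch of that averaging idea, with an invented project history: record each finished project’s actual-to-estimated duration ratio, average the ratios, and use the average to deflate the next wishful number. Crude, but it’s learning from data instead of from hope.

```python
# Invented history: (originally estimated months, actual months) per finished project.
history = [(6, 9), (12, 20), (4, 5), (10, 17), (8, 11)]

# Average "reality multiplier" across the past projects.
multipliers = [actual / estimated for estimated, actual in history]
avg_multiplier = sum(multipliers) / len(multipliers)

new_wishful_estimate = 9.0                  # months, straight off the schedule chart
calibrated = new_wishful_estimate * avg_multiplier
print(f"average overrun multiplier: {avg_multiplier:.2f}")
print(f"calibrated estimate: {calibrated:.1f} months (decent, never 100% accurate)")
```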

A Professional Failure

October 5, 2009 Leave a comment

I’m a professional failure. Why? Because I’m pretty sure that I’ve never satisfied any unreasonable schedule that I was ever “given” to meet. Since almost all schedules are unreasonable, then, by definition, I’m a professional failure. Hell, it didn’t even matter if I was the one who created the unreasonable schedule in the first place; I still failed. Bummer.

[Figure: Total Failure]

Looking back, I think that I’ve figured out why I underperformed (<– that’s management-speak for “failed”). It’s simply that the problem-solving projects that I’ve worked on have been grossly underestimated. Why is that? Because every one of them required learning something new in the problem area of pecuniary interest.

So, how can you know if a given schedule is unreasonable, and does it matter if you conclude that meeting the schedule is a lost cause? You most likely can’t, and no, it doesn’t matter. Assume that, based on personal experience and a deep “knowing” of what’s involved in a project, you actually can determine that the schedule is a laughable, but innocent, lie. There’s nothing you can do about it. If you speak up, at best, you’ll be ignored. At worst, you’ll receive multiple peek-a-boo visits from one or more STSJs (Status Takers and Schedule Jockeys) who don’t have to do any of the project work themselves.

How about you, have you been a perpetual failure like me? Of course not. Your resume says here that you have been 100% successful on every project you’ve worked on, and that implies that you’ve met every schedule. But wait, every other resume in my stack says the same thing. Damn! How am I gonna decide which of these perfect people gets the job?

Schedule Policy

[Figure: Sched Allegiance]

Just about every corpo mediocracy in the world has a proverbial “Quality Policy” that it proudly displays all over the place. The inspirational words of wisdom that hierarchs profess staunch adherence to are inscribed on framed posters and/or cute little magnetic sheets. These false idols are distributed far and wide within the cathedral walls for everyone to worship.

However, everyone down in the boiler room knows that the true corpo allegiance is to schedule. How do the serfs know this? It’s easy, and it doesn’t require an Einsteinian intellect to figure it out. Just walk around the cubicle farm and count the number of times you hear managers mention the word “quality”. Then count the number of times that you hear “schedule”. Voila, you then have your answer, and it doesn’t involve rocket science math. Companies with higher schedule-to-quality ratios are much more likely to fill the ranks of the average and boring herd.
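For the spreadsheet-inclined, the serf’s field survey even fits in a few lines of code. The overheard chatter below is invented, obviously; substitute a week’s worth of your own meeting notes.

```python
from collections import Counter

# Invented transcript of a week of cubicle-farm manager chatter.
overheard = """
we need to protect the schedule ... quality gate slipped to protect the schedule
... schedule recovery plan ... quality is job one, but the schedule comes first
"""

counts = Counter(word.strip(".,") for word in overheard.lower().split())
ratio = counts["schedule"] / max(counts["quality"], 1)   # avoid dividing by zero
print(f"'schedule' mentions: {counts['schedule']}, 'quality' mentions: {counts['quality']}")
print(f"schedule-to-quality ratio: {ratio:.1f}")
```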

Schedule worship, at whatever the human and organizational cost, is one of those issues that Chris Argyris calls “undiscussable”. Anyone who points out that the quality policy is actually a lame attempt to camouflage the true and unconditional allegiance to schedule gets beheaded in true shoot-the-messenger form. Nobody in their right mind “discusses” the quality versus schedule irony because, well, it’s “undiscussable”. 🙂

I propose that all companies develop, distribute, and display their very own authentic schedule policy. One could go something like this:

“The Duefiss corporation is proudly dedicated to meeting schedule. Our allegiance is unconditional. At Duefiss Inc., we will aggressively cut every corner and apply any amount of pressure to our human resources to meet any schedule. It doesn’t matter how laughable or unrealistic any given schedule is. We will commit our minions to it, no matter what the consequences to them, their families, our product quality, or our long term credibility and profitability. When we fall behind the hallowed schedule, we guarantee to turn up the heat on those responsible for the slip, and increase the frequency of status meetings to reinforce our commitment.”

What would your schedule policy be?
