Convergence To Zero

March 28, 2015

Maybe it’s just me, but I think that some people, especially managers and executives, auto-equate decreasing “cost to create” (CTC) and decreasing “time to market” (TTM) with increasing “quality”. Actually, since they’re always yapping about CTC and TTM, but rarely about quality, perhaps they think there is no correlation between CTC/TTM and quality.

Increasing Quality

But consider the polar opposite, where decreasing the CTC and decreasing the TTM decreases quality.

Decreased Quality

I think both of the above cases are extreme generalities, but unless I’m missing something big, I do know one thing. In the limit, if you decrease the CTC and TTM to zero, you won’t produce anything; nada. Hence, quality converges to zero – even though the first graph in this post gives the illusion that it doesn’t.
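For the mathematically inclined, here’s one way to write that claim down – a sketch of the intuition, not a proof, with quality (Q) treated as some unknown function of CTC and TTM:

```latex
% Quality (Q) as an unknown function of cost-to-create and time-to-market.
% Whatever shape Q takes elsewhere, at the origin nothing gets built:
\lim_{(CTC,\, TTM) \to (0,\, 0)} Q(CTC,\, TTM) = 0
```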

Zero Quality

Currently, in the real world, the iron triangle still rules:

Time, Cost, Quality: Pick any two at the expense of the third.

Note: If you’re a hip, new age person who likes to substitute the vague word “value” for the just-as-vague word “quality”, then by all means do so.


Convex, Not Linear

March 25, 2015

 

SysSubUnit

For a large, complex software system that can be represented via an instantiation of the above template, three levels of testing must be performed prior to fielding the system: unit, integration, and system. Unit testing shakes out some (hopefully most) of the defects in the local, concretely visible micro-behavior of each of possibly thousands of units. Integration testing unearths the emergent behaviors, both intended and unintended, of each individual subsystem assembly. System-level, end-to-end testing is designed to do the same for the intended/unintended behaviors of the whole shebang.

UIS
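To make the unit-vs-integration distinction concrete, here’s a tiny, utterly hypothetical Python sketch (the function names and pytest-style tests are mine, invented purely for illustration):

```python
# Unit level: exercise one unit in isolation.
def parse_price(text: str) -> float:
    """Convert a raw price string like ' $19.99 ' into a float."""
    return float(text.strip().lstrip("$"))

def test_parse_price_unit():
    assert parse_price(" $19.99 ") == 19.99

# Integration level: exercise an assembly of units working together,
# probing for emergent behavior that no single-unit test can see.
def order_total(price_texts):
    """Sum a batch of raw price strings."""
    return sum(parse_price(t) for t in price_texts)

def test_order_total_integration():
    assert order_total(["$1.50", "$2.50"]) == 4.0
```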

Now that the context has been set for this post, let’s put our myopic glasses on and zero in on the activity of unit testing. Unit testing is unarguably a “best practice”. However, just because it’s a best practice, does that mean we should, as one famous software character has hinted, strive to “turn the dials up to 10” on unit testing (or any other best practice)?

Check out these utterly subjective, unbacked-by-data, convex, ROI graphs:

Unit Test ROI

If you believe these graphs hold true, then you would think that at some point during unit testing, you’d get more bang for the buck by investing your time in some other value-added task – such as writing another software unit or defining and running “higher level” integration and/or system tests on your existing bag of units.
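Just to make the shape of that curve concrete, here’s a toy model in Python – numbers invented, not measured – where the fraction of unit-level defects found saturates as coverage effort grows:

```python
import math

def defects_found(coverage: float, k: float = 4.0) -> float:
    """Toy payoff curve: fraction of unit-level defects found at a given
    coverage effort (0..1). The 1 - e^(-k*c) shape bakes in diminishing
    returns; k is an invented steepness constant, not data."""
    return 1.0 - math.exp(-k * coverage)

prev = 0.0
for c in (0.2, 0.4, 0.6, 0.8, 1.0):
    total = defects_found(c)
    print(f"coverage {c:.0%}: total {total:.2f}, marginal {total - prev:.2f}")
    prev = total
```

Each additional slice of coverage buys a smaller marginal payoff than the last – which is exactly the “invest your time elsewhere” argument above.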

Now, check out these utterly subjective, unbacked-by-data, linear, “turn the dials up to 10“, ROI graphs:

Linear UT ROI

People who have drunk the whole pitcher of unit-testing koolaid tend to believe that the linear model holds true. These miss-the-forest-for-the-trees people have no qualms about requiring their developers to achieve arbitrarily high unit test coverage percentages (80%, 85%, 90%, 100%) whilst simultaneously espousing the conflicting need to reduce cost and delivery time.

Given a fixed amount of time for unit + integration + system testing in a finite resource environment, how should that time be allocated amongst the three levels of testing? I don’t have any definitive answer, but when schedules get tight, as so often happens in the real world, something has gotta give. If you believe the convex unit testing model, then lowering unit test coverage mandates should be higher on your list of potential sacrificial lambs than cutting integration or system test time – where the emergent intended, and more importantly, unintended, behaviors are made visible.
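If you buy the diminishing-returns model, the allocation question almost answers itself: spend each next slice of the test budget wherever the marginal payoff is currently highest. Here’s a greedy sketch in Python (all constants invented for illustration):

```python
import math

# Invented steepness constants: unit-level payoff saturates fastest,
# system-level slowest, per the convex model sketched earlier.
K = {"unit": 5.0, "integration": 3.0, "system": 2.0}

def marginal(effort: float, k: float) -> float:
    """Derivative of 1 - e^(-k*e): the payoff of the next sliver of effort."""
    return k * math.exp(-k * effort)

effort = {name: 0.0 for name in K}
SLICES, SLICE_SIZE = 20, 0.05  # a fixed test budget, cut into 20 slices

for _ in range(SLICES):
    best = max(K, key=lambda name: marginal(effort[name], K[name]))
    effort[best] += SLICE_SIZE  # spend each slice where it pays the most

print(effort)  # spending drifts toward integration/system as unit ROI flattens
```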

Like Big Requirements Up Front (BRUF) and its dear sibling, Big Design Up Front (BDUF), perhaps we should add “Big Unit Testing Tragedy” (BUTT) to our bag of dysfunctional practices. But hell, it won’t be done. The linear thinkers seem to be in charge.

Note: Jim Coplien has been eloquently articulating the wastefulness of too much unit testing for years. This post is simply a similar take on his position:

Why-Most-Unit-Testing-is-Waste

Coplien Segue


Toxic, Typical, Supportive

March 22, 2015

Native, Managed, Interpreted

March 19, 2015

Someone asked on Twitter what a “native” app was. I answered with:

“There is no virtual machine or interpreter that a native app runs on. The code compiles directly into machine code.”

I subsequently drew up this little reference diagram that I’d like to share:

Native Managed

Given the source code for an application written in a native programming language, before you can run it, you must first compile the text into an executable image of raw machine instructions that your “native” device processor(s) know how to execute. After the compilation is complete, you command the operating system to load and run the application program image.

Given the source code for an application written in a “managed” programming language, you must also run the text through a compiler before launching it. However, instead of producing raw machine instructions, the compiler produces an intermediate image composed of byte code instructions targeted for an abstract, hardware-independent, “virtual” machine. To launch the program, you must command the operating system to load/run the virtual machine image, and you must also command the virtual machine to load/run the byte code image that embodies the application functionality. During runtime, the virtual machine translates the byte code into instructions native to your particular device’s processor(s).

Given the source code for an application written in an “interpreted” programming language, you don’t need to pass it through a compiler before running it. Instead, you must command the operating system to load/run the interpreter, and you must also command the interpreter to load/run the text files that represent the application.
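As an aside, CPython itself straddles the “managed” and “interpreted” columns: it quietly compiles source text to byte code on the fly, and a virtual machine loop then executes that byte code. A minimal sketch using the standard dis module makes the normally invisible byte code step visible:

```python
import dis

def add(a, b):
    return a + b

# Prints the byte code instructions (e.g. LOAD_FAST and, depending on the
# CPython version, BINARY_ADD or BINARY_OP) that the virtual machine
# executes for this function.
dis.dis(add)
```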

In general, moving from left to right in the above diagram, program execution speed decreases and program memory footprint increases for the different types of applications. A native program does not require a virtual machine or interpreter to be running and “managing” it while it is performing its duty. The tradeoff is that development convenience/productivity generally increases as you move from left to right. For interpreted programs, there is no need for any pre-execution compile step at all. For virtual machine based programs, even though a compile step is required prior to running the program, compilation times for large programs are usually much faster than for equivalent native programs.

Based on what I know, I think that pretty much sums up the differences between native, managed, and interpreted programs. Did I get it right?

 

Planet Agile

March 16, 2015

Because methodologists need an “enemy” to make their pet process look good, Agilistas use Traditional methods as their whipping boy. In this post, I’m gonna turn the tables by arguing as a Traditionalista (yet again) and using the exalted Agile methodology as my whipping boy. First, let’s start with two legends:

Legends

Requirements And User Stories

As you can see, the Agile legend is much simpler than the Traditional legend. On planet Agile, there aren’t any formal requirements artifacts that specify system capabilities, application functions, subsystems, inter-subsystem interfaces/interactions, components, or inter-component interfaces/interactions. There are only lightweight, independent, orthogonal, bite-sized “user stories“. Conformant Agile citizens either pooh-pooh away any inter-feature couplings or they simply ignore them, assuming they will resolve themselves during construction via the magical process of “emergence“.

Infrastructure Code And Feature Code

Unlike in the traditional world, in the Agile world there is no distinction between application-specific Infrastructure Code and Feature Code. Hell, it’s all feature code on planet Agile. Infrastructure Code is assumed as a given. However, since developers (and not external product users) write and use infrastructure code, utilizing “User Stories” to specify infrastructure code doesn’t cut it. Perhaps the Agilencia should rethink how they recommend capturing requirements and define two types of “stories“:  “End User Stories” and “Infrastructure User Stories“.

Product Models

 

Non-Existent Design

Regarding the process of “Design“, on planet Agile, thanks to TDD, the code is the design and the design is the code. There is no need to conceptually partition the code (which is THE one and only important artifact on planet Agile) beforehand into easily digestible, visually inspect-able, critique-able, levels of abstraction. To do so would be to steal precious coding time and introduce “waste” into the process. With the exception of the short, bookend events in a sprint (the planning & review/retrospective events), all non-coding activities are “valueless” in the mind of citizen Agile.

Traditional-Agile Map

No Front End

When asked about “What happens before sprint 0?”, one agile expert told me on Twitter that “agile only covers software development activities“.

Sprint-1

As the process timeline template below shows, there is no Sprint -1, otherwise known as “the Front End phase”, on planet Agile. Since the Agile leadership doesn’t recognize infrastructure code or the separation of design from code, and since no feature code is produced during its execution, there is no need for any investment in any Front End work. But hey, as you can plainly see, deliveries start popping out of an Agile process way before a Traditional process. In the specific example below, the nimble Agile team has popped out 4 deliveries of working software before the sluggish Traditional team has even hatched its first iteration. It’s just like planet Agile’s supreme leader asserts in his latest book – a 4X productivity improvement (twice the work in half the time).

Trad Agile Timelines

Process Scalability

The flexible, dare I say “agile“, aspect of the Traditional development template is that it scales down. If the software system to be developed is small enough, or it’s an existing system that simply needs new features added to it, the “Front End” phase can indeed be entirely skipped. If so, then voila, the traditional development template reduces to a parallel series of incremental, evolutionary, sprints – just like the planet Agile template – except for the fact that infrastructure code development and integration testing are shown as first class citizens in the Traditional template.

Scaled Down Traditional

On the other hand, the planet Agile template does not scale up. Since there is no such concept as a “Front End” phase on planet Agile, as a bona fide Agilista, you wouldn’t dare to plan and execute that phase even if you intuited that it would reduce long term development and maintenance costs for: you, your current and future fellow developers, and your company. To hell with the future. Damn it, your place on planet Agile is to get working software out the door as soon as possible. To do otherwise would be to put a target on your back and invite the wrath of the planet Agile politburo.

The Big Distortion

When comparing Agile with Traditional methods, the leadership on planet Agile always simplifies and distorts the Traditional development process. It is presented as a rigid, inflexible monster to be slain:

Big Bang

In the mind of citizen Agile, simply mentioning the word “Traditional” conjures up scary images of Niagara Falls, endless BRUF (Big Requirements Up Front), BDUF (Big Design Up Front), Big Bang Integration (BBI), and late, or non-existent, deliveries of working software. Of course, as the citizenry on planet Agile (and planet Traditional) knows, many Traditional endeavors have indeed resulted in failed outcomes. But for an Agile citizen to acknowledge Agile failures, let alone attribute some of those failures to the lack of performing any Front End due diligence, is to violate the Agile constitution and again place herself under the watchful eye of the Agile certification bureaucracy.

The Most Important Question

You may be wondering, since I’ve taken on the role of an unapologetic Traditionalista in this post, if I am an “Agile-hater” intent on eradicating planet Agile from the universe. No, I’m not. I try my best to not be an Absolutist about anything. Both planet Agile and planet Traditional deserve their places in the universe.

Perhaps the first, and most important, question to ask on each new software development endeavor is: “Do we need any Front End work, and if so, how much do we need?”

 

Strip The Stewards

March 12, 2015

To absolutely no one’s surprise, the “takers” on Wall St. seem to be at it again. It’s funny how capitalism abhors “takers“, yet the biggest schleppers of your money dwell at the celebrated heart of unbridled capitalism – Wall St.

Capitalism is the worst “ism”, except for all the other “isms” – Unknown?

While government gets reviled for catering to average Joe Schmoe welfare “takers”, the hulking buttheads on Wall St. not only get off with a token monetary fine and a promise to “behave honorably” in the future, they get to set their own definition of “honorably”. As long as they’re too big to fail or jail, these psycho banks and the “innocents” who run them will continue to do as they please, knowing full well that small potatoes “takers” like you and me will bail them out when their reckless, self-serving behavior triggers the next global crisis.

In “Prosecutors Suspect Repeat Offenses on Wall Street”, the NY Times states:

Just two years after avoiding prosecution for a variety of crimes, some of the world’s biggest banks are suspected of having broken their promises to behave. Typically, when banks have repeatedly run afoul of the law, they have returned to business as usual with little or no additional penalty — a stark contrast to how prosecutors mete out justice for the average criminal.

So, now that prosecutors are hot on the tail of some repeat big bank offenders, what do you suppose will happen to the guilty? The same old, same old:

Even now that prosecutors are examining repeat offenses on Wall Street, they are likely to seek punishments more symbolic than sweeping. Top executives are not expected to land in prison, nor are any problem banks in jeopardy of shutting down.

I think that the only way to medicate the psycho-orgs in our midst is to hit their stewards, the real people hiding behind the abstract corpo Hannibal Lecters among us, where it hurts – right in the pocketbook. Make it unambiguously transparent that the top tier(s) of executive management will be stripped of all their personal wealth if the narcissist monsters they run are found guilty of wiping out the IRAs of thousands of the little people.

We don’t even have to jail them or demand their resignations. Simply treat them as big time drug dealers by confiscating all their property and making them start over again at step one. No more unconditional yearly bonuses and “but, I didn’t know” defense strategies. Put some teeth into “the buck stops here” and force them to, as Nassim Taleb suggests, place some “skin in the game“.

As the figure below illustrates, a “strip the stewards” punishment policy may not inhibit future normal-to-criminal behavior transitions after a reset to normal behavior, but it has a better chance of doing so than the current slap-on-the-wrist policy. What do you think would work?

Strip The Stewards

Ill Served
