First, assume that the figure below faithfully represents two platform infrastructures developed by two different teams for the same application domain. Second, assume that both the JAD and UAS designs provide the exact same functionality to their developer users. Third, assume that the JAD design was more expensive to develop (relative depth) and is more frustrating for developers to use (relative jaggy-ness) than the UAS design.
Fourth, assume that you know that an agile team created one of the platforms and a traditional team produced the other – but you don’t know which team created which platform.
Now that our four assumptions have been laid out, can you confidently state, and make a compelling case for, which team hatched the JAD monstrosity and which team produced the elegant UAS foundation? I can’t.
Agile, Traditional, Lean, Burndown Charts, Kanban Boards, Earned Value Management Metrics, Code Coverage, Static Code Analysis, Coaches, Consultants, Daily Standup Meetings, Weekly Sit Down Meetings, Periodic Program/Project Reviews. All the shit managers obsess over doesn’t matter. It’s all about that Jell, ’bout that jell, ’bout that jell.
First, we have VCID:
In VCID mode, we iteratively define, at a coarse level of granularity, what the Domain-Specific Architecture (DSA) is and what the revenue-generating portfolio of Apps we’ll be developing is.
Next up, we have ACID:
In ACID mode, we’ll iteratively define, at a finer level of detail, what each of our Apps will do for our customers and the components that will comprise each App.
Then, we have SCID, where we iteratively cut real App & DSA code and implement per-App stories/use cases/functions:
But STOP! Despite what the previous paragraphs imply, the “CID”s shouldn’t be managed as a sequential, three-step, waterfall execution from the abstract world of concepts to the real world of concrete code. If they are, your work is probably doomed. The CIDs should inform each other. When work in one CID exposes an error in another CID, a transition into the flawed CID state should be executed to repair the error.
Managed correctly, your product development system becomes a dynamically executing, inter-coupled set of operating states with error-correcting feedback loops that steer the system toward its goal of providing value to your customers and profits to your coffers.
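If it helps to see that state-machine framing in code, here’s a minimal Python sketch. It’s entirely illustrative: the enum values mirror the three CIDs described above, but the `next_state` helper and the example transition are my own invention, not part of any CID tooling.

```python
# Illustrative sketch only: names and the transition helper are assumptions.
from enum import Enum, auto
from typing import Optional

class CID(Enum):
    """The three operating states of the product development system."""
    VCID = auto()  # coarse-grained: define the DSA and the App portfolio
    ACID = auto()  # finer-grained: define each App's behavior and components
    SCID = auto()  # concrete: cut real App & DSA code, implement stories

def next_state(current: CID, flawed: Optional[CID]) -> CID:
    """Error-correcting feedback loop: when work in the current CID exposes
    an error in another CID, transition into the flawed CID to repair it;
    otherwise, keep iterating where you are."""
    return flawed if flawed is not None else current

# Example: while cutting code in SCID we expose an architectural flaw,
# so the system steers back into ACID instead of marching on waterfall-style.
state = next_state(CID.SCID, flawed=CID.ACID)
assert state is CID.ACID
```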
To promote their infallible expertise, fathead consultants love to present apples-to-oranges comparisons as though they were apples-to-apples comparisons. One such prominent consultant (hint: a famous Forbes columnist) recently gushed over the success of music provider Spotify.com, contrasting its success with that of the team that built the buggy, slow initial version of the healthcare.gov web site.
As is almost always the case, these clever dudes leave out the oft-hidden, process-independent, contextual forces that relentlessly work against success:
To imply that the Healthcare.gov team would have been successful if they had simply employed agile practices (like Spotify.com) is to be naive, disingenuous, or both.
Have you noticed that the press isn’t clamoring over the inadequacy of the healthcare.gov site anymore? Obviously, the development team must have righted the ship by morphing into a high-performing juggernaut under the tutelage of a cadre of agile process consultants – regardless of whether they did or they didn’t.
Let’s start this post off by setting some context. What I’m about to spout concerns the development of large, complex, software systems – not mobile apps or personal web sites. So, let’s rock!
The UML class diagram below depicts a taxonomy of methods for representing and communicating system requirements.
On the left side of the diagram, we have the traditional methods: expressing requirements as system capabilities/functions/”shalls”. On the right side of the diagram, we have the relatively new artifacts: use cases and user stories.
When recording requirements for a system you’re going to attempt to build, you can choose a combination of methods as you (or your process police) see fit. In the agile world, the preferred method (as evidenced by 100% of the literature) is to exclusively employ fine-grained user stories – classifying all the other, more abstract, overarching methods as YAGNI or BRUF.
As the following enhanced diagram shows, whichever method you predominantly choose for recording and communicating requirements to yourself and others, at least some of the artifacts will be inter-coupled. For example, if you choose to start specifying your system as a set of logically cohesive capabilities, then those capabilities will be coupled to some extent – regardless of whether you consciously try to discover and expose those dependencies or not. After all, an operational system is a collection of interacting parts – not a bag of independent parts.
Let’s further enhance our class diagram to progressively connect the levels of granularity as follows:
If you start specifying your system as a set of coarse-grained, interacting capabilities, it may be difficult to translate those capabilities directly into code components, packages, and/or classes. Thus, you may want to close the requirements-to-code intellectual gap by thoughtfully decomposing each capability into a set of logically cohesive, but loosely coupled, functions. If that doesn’t bridge the gap to your liking, then you may choose to decompose each function further into a finer-grained set of logically cohesive, but loosely coupled, “shall” statements. The tradeoff is time upfront for time out back (see the sketch after this list):
- Capabilities -> Source Code
- Use Cases -> Source Code
- Capabilities -> Functions -> Source Code
- Use Cases -> User Stories -> Source Code
- Capabilities -> Functions -> “Shalls” -> Source Code
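To make the heaviest path concrete, here’s a minimal Python sketch of a capabilities-to-functions-to-“shalls” decomposition, with the inter-couplings from the earlier diagram modeled explicitly. The `Requirement` class, its fields, and the example texts are hypothetical, purely for illustration – not a standard artifact from any methodology.

```python
# Hypothetical model; the Requirement class is my own, not a standard artifact.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    """One node in the decomposition: a Capability, Function, or "shall"."""
    text: str
    kind: str  # "capability" | "function" | "shall"
    children: List["Requirement"] = field(default_factory=list)
    depends_on: List["Requirement"] = field(default_factory=list)  # inter-couplings

    def decompose(self, kind: str, *texts: str) -> List["Requirement"]:
        """Refine this requirement into finer-grained child requirements."""
        kids = [Requirement(t, kind) for t in texts]
        self.children.extend(kids)
        return kids

# Capabilities -> Functions -> "Shalls" -> Source Code:
accounts = Requirement("Manage user accounts", "capability")
auth, = accounts.decompose("function", "Authenticate users")
auth.decompose("shall", "The system shall lock an account after 5 failed logins.")

# Couplings exist whether or not you record them; here we choose to expose one.
billing = Requirement("Bill customers", "capability", depends_on=[accounts])
```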
Note that, taken literally, the last bullet implies that you don’t start writing ANY code until you’ve completed the full, two-step, capabilities-to-“shalls” decomposition. Well, that’s a crock o’ crap. You can, and should, start writing code as soon as you understand a capability and/or function well enough to confidently cut at least some skeletal code. Any process that prohibits writing a single line of code until all the i’s are dotted, all the t’s are crossed, and five “approval” signatures are secured is, as everyone (not just the agile community) knows, both inane and insane.
Of course, simple projects don’t need no stinkin’ multi-step progression toward source code. They can bypass the Capability, Function, and Use Case levels of abstraction entirely and employ only fine-grained “shalls” or User Stories as the method of specification.
On the simplest of projects, you can even go directly from thoughts in your head to code:
The purpose of this post is to assert that there is no one and only “right” path in moving from requirements to code. The “heaviness” of the path you decide to take should match the size, criticality, and complexity of the system you’ll be building. The more the mismatch, the more the waste of time and effort.
Because methodologists need an “enemy” to make their pet process look good, Agilistas use Traditional methods as their whipping boy. In this post, I’m gonna turn the tables by arguing as a Traditionalista (yet again) and using the exalted Agile methodology as my whipping boy. First, let’s start with two legends:
Requirements And User Stories
As you can see, the Agile legend is much simpler than the Traditional legend. On planet Agile, there aren’t any formal requirements artifacts that specify system capabilities, application functions, subsystems, inter-subsystem interfaces/interactions, components, or inter-component interfaces/interactions. There are only lightweight, independent, orthogonal, bite-sized “user stories”. Conformant Agile citizens either pooh-pooh away any inter-feature couplings or they simply ignore them, assuming they will resolve themselves during construction via the magical process of “emergence”.
Infrastructure Code And Feature Code
Unlike in the traditional world, in the Agile world there is no distinction between application-specific Infrastructure Code and Feature Code. Hell, it’s all feature code on planet Agile. Infrastructure Code is taken as a given. However, since developers (and not external product users) write and use infrastructure code, utilizing “User Stories” to specify infrastructure code doesn’t cut it. Perhaps the Agilencia should rethink how they recommend capturing requirements and define two types of “stories”: “End User Stories” and “Infrastructure User Stories”.
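If the Agilencia ever took that advice, the two story types might look something like this minimal Python sketch. The field names and the example story are my guesses at a plausible shape – there is no established template for “Infrastructure User Stories”.

```python
# Speculative sketch; field names are assumptions, not an established template.
from dataclasses import dataclass

@dataclass
class EndUserStory:
    """The classic card: 'As a <role>, I want <feature>, so that <benefit>.'"""
    role: str
    feature: str
    benefit: str

@dataclass
class InfrastructureUserStory:
    """Same shape, but the 'user' is a fellow developer and the 'feature'
    is a capability of the application-specific infrastructure code."""
    developer_role: str
    capability: str
    benefit: str

story = InfrastructureUserStory(
    developer_role="App developer",
    capability="a publish/subscribe message bus for inter-component messaging",
    benefit="each App doesn't have to hand-roll its own socket plumbing",
)
```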
Regarding the process of “Design”, on planet Agile, thanks to TDD, the code is the design and the design is the code. There is no need to conceptually partition the code (which is THE one and only important artifact on planet Agile) beforehand into easily digestible, visually inspect-able, critique-able levels of abstraction. To do so would be to steal precious coding time and introduce “waste” into the process. With the exception of the short, bookend events in a sprint (the planning & review/retrospective events), all non-coding activities are “valueless” in the mind of citizen Agile.
No Front End
When asked about “What happens before sprint 0?”, one agile expert told me on Twitter that “agile only covers software development activities”.
As the process timeline template below shows, there is no Sprint -1, otherwise known as “the Front End phase”, on planet Agile. Since the Agile leadership doesn’t recognize infrastructure code or the separation of design from code, and since no feature code is produced during a Front End phase, there is no perceived need to invest in one. But hey, as you can plainly see, deliveries start popping out of an Agile process way before they do out of a Traditional process. In the specific example below, the nimble Agile team has popped out 4 deliveries of working software before the sluggish Traditional team has even hatched its first iteration. It’s just like planet Agile’s supreme leader asserts in his latest book – a 4X productivity improvement (twice the work in half the time).
The flexible, dare I say “agile”, aspect of the Traditional development template is that it scales down. If the software system to be developed is small enough, or it’s an existing system that simply needs new features added to it, the “Front End” phase can indeed be skipped entirely. If so, then voila, the traditional development template reduces to a parallel series of incremental, evolutionary sprints – just like the planet Agile template – except for the fact that infrastructure code development and integration testing are shown as first-class citizens in the Traditional template.
On the other hand, the planet Agile template does not scale up. Since there is no such concept as a “Front End” phase on planet Agile, as a bona fide Agilista, you wouldn’t dare plan and execute that phase even if you intuited that it would reduce long-term development and maintenance costs for you, your current and future fellow developers, and your company. To hell with the future. Damn it, your place on planet Agile is to get working software out the door as soon as possible. To do otherwise would be to put a target on your back and invite the wrath of the planet Agile politburo.
The Big Distortion
When comparing Agile with Traditional methods, the leadership on planet Agile always simplifies and distorts the Traditional development process. It is presented as a rigid, inflexible monster to be slain:
In the mind of citizen Agile, simply mentioning the word “Traditional” conjures up scary images of Niagara Falls, endless BRUF (Big Requirements Up Front), BDUF (Big Design Up Front), Big Bang Integration (BBI), and late, or non-existent, deliveries of working software. Of course, as the citizenry on planet Agile (and planet Traditional) knows, many Traditional endeavors have indeed resulted in failed outcomes. But for an Agile citizen to acknowledge Agile failures, let alone attribute some of those failures to the lack of performing any Front End due diligence, is to violate the Agile constitution and again place herself under the watchful eye of the Agile certification bureaucracy.
The Most Important Question
You may be wondering, since I’ve taken on the role of an unapologetic Traditionalista in this post, if I am an “Agile-hater” intent on eradicating planet Agile from the universe. No, I’m not. I try my best to not be an Absolutist about anything. Both planet Agile and planet Traditional deserve their places in the universe.
Perhaps the first, and most important, question to ask on each new software development endeavor is: “Do we need any Front End work, and if so, how much do we need?”
Take a minute to look around yourself and imagine how many interdependent chains of specialists and handoffs were required to produce the materials that make your life quite a bit more comfortable than that of our cave-dwelling ancestors.
Total self-sufficiency is a great concept, but if you had to do all the work to produce all of the things you take for granted, you wouldn’t be able to. You simply wouldn’t have enough time.
In software development, above a certain (unknown) level of complexity and size, specialists, dependencies, and handoffs are required to have any chance of getting the work done and the product integrated. Cross-functional teams, where every single person on the team knows how to design and code every functional area of a product, are an unrealistic pipe dream for anything but trivial products.