The word “hierarchy” gets no respect. Aside from popes, generals, executives, and managers, who tend to thrive exquisitely in command-and-control hierarchies, many people associate hierarchical social structures with ineffectual bureaucracy, back-stabbing politics, patronization, unfair distribution of status and rewards, and suppression of individual initiative.
Despite all the bad press, hierarchically structured social systems do have benefits, even for those residing in the lowest tiers of the pyramid. One benefit that hierarchy serves up is… orderly execution of operations:
Imagine if students argued with their teachers, workers challenged their bosses, and drivers ignored traffic cops anytime they asked them to do something they didn’t like. The world would descend into chaos in about five minutes. – Duncan J. Watts
In “Influence” Robert Cialdini writes:
A multi-layered and widely accepted system of authority confers an immense advantage upon a society. It allows the development of sophisticated structures for resource production, trade, defense, expansion, and social control that would otherwise be impossible. The other alternative, anarchy, is a state that is hardly known for its beneficial effects on cultural groups and one that the social philosopher Thomas Hobbes assures us would render life “solitary, poor, nasty, brutish, and short.”
I don’t agree with Mr. Cialdini that the alternative to hierarchy is pure anarchy, but his point, like Mr. Watts’s, is a good one.
Management “guru” Tom Peters (to whom I used to listen closely before reading Matthew Stewart’s brilliant “The Management Myth”) sums it up nicely with:
Hierarchy will never go away. Never!
Check out this slide I plucked from a pitch that will remain unnamed:
Notice the note under the waterfall diagram. Now, let’s look at the original, “unadapted” version and accompanying quote from Winston W. Royce’s classic 1970 paper:
Notice that Mr. Royce clearly noted in his paper that the sequential, never-look-back, waterfall process is a stone-cold loser. Next, let’s look at another diagram from Mr. Royce’s paper, one that no fragilista ever mentions or shows:
OMG! An iterative waterfall with feedback loops? WTF!
Finally, let’s look at BD00’s syntegrated version of the agile, lower half of our consultant’s diagram and the iterative waterfall diagram from Mr. Royce’s paper:
Comparing the agile and “chunked”, iterative, waterfall models shows that, taken in the right context, they’re not that different… no?
BD00 is still down in NOLA for Mardi Gras 2014, but he needs your help! The “donate” button will be up shortly.
It took me about fifteen minutes to conjure up the inane picture below. It took me another frustrating fifteen minutes trying to come up with something that rhymes with “team-work” for the lower half of it. As you can see, I failed miserably. Do you, dear reader, have a better label for the “crappy” half? (@riczwest, my money is on you!)
Rewritten in “old school” terminology, the five Scrum process events can be expressed as follows:
- Sprint Planning = Requirements definition and capture
- Sprint = Requirements analysis, design, coding, unit testing, integration testing, code review
- Daily Stand Up = Daily status meeting
- Sprint Review = Post-mortem
- Sprint Retrospective = Continuous process improvement
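For the literal-minded, the mapping above can be restated as a simple lookup table. This is just a toy sketch; the labels are the post’s own terms and nothing more:

```python
# Toy lookup table restating the post's Scrum-to-"old school" mapping.
SCRUM_TO_OLD_SCHOOL = {
    "Sprint Planning": "Requirements definition and capture",
    "Sprint": ("Requirements analysis, design, coding, unit testing, "
               "integration testing, code review"),
    "Daily Stand Up": "Daily status meeting",
    "Sprint Review": "Post-mortem",
    "Sprint Retrospective": "Continuous process improvement",
}

def translate(event: str) -> str:
    """Return the old-school term for a Scrum event, or the event unchanged."""
    return SCRUM_TO_OLD_SCHOOL.get(event, event)

print(translate("Daily Stand Up"))  # Daily status meeting
```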
So, someone with an intentionally warped mind like BD00 may interpret a series of Scrum sprints as nothing more than a series of camouflaged Mini-Waterfalls (MW).
But ya know what? Executing a project as a series of MWs may be a good thing – as long as an arbitrary, fixed-size, time-box is not imposed on the team. After all, since everything else is allowed to dynamically change during a Scrum project, why not the size of the Sprint too?
Instead of estimating what features can be done in the next 30 days, why not simply estimate how many days will be needed to complete the set of features that marks the transition into the next MW? If, during the MW, it is learned that the goal won’t be achieved, then in addition to cancelling the MW outright, two other options can be contemplated:
- Extend the length of the MW
- Postpone the completion of one or more of the features currently being worked on
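To make the idea concrete, here is a toy sketch of a variable-size mini-waterfall with those replanning options. All names, thresholds, and numbers are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class MiniWaterfall:
    """Toy model of one variable-length mini-waterfall (all names invented)."""
    features: list
    estimated_days: int
    elapsed_days: int = 0
    cancelled: bool = False

    def replan(self, days_still_needed: int, max_extension: int = 10) -> str:
        """If the goal won't be met: extend the MW, postpone a feature, or cancel."""
        remaining = self.estimated_days - self.elapsed_days
        overrun = days_still_needed - remaining
        if overrun <= 0:
            return "on track"
        if overrun <= max_extension:
            self.estimated_days += overrun   # option 1: extend the MW's length
            return "extended"
        if len(self.features) > 1:
            self.features.pop()              # option 2: postpone a feature
            return "feature postponed"
        self.cancelled = True                # last resort: cancel the MW outright
        return "cancelled"

mw = MiniWaterfall(features=["login", "search"], estimated_days=20, elapsed_days=15)
print(mw.replan(days_still_needed=8))  # extended
```

The point of the sketch is that the timebox is an output of replanning, not a fixed 30-day input.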
I’m not a fan of “emergent global architecture”, but I AM a fan of “emergent local design”. To mitigate downstream technical and financial risk, I believe that one has to generate and formally document an architecture at a high level of abstraction before starting to write code. To do otherwise would be irresponsible.
The figure below shows a portion of an initial “local” design that I plucked out of a more “global” architectural design. When I started coding and unit testing the cluster of classes in the snippet, I “discovered” that the structure wasn’t going to work out. The API of the architectural framework within which the class cluster runs wouldn’t allow it to work without some major internal restructuring and retesting of the framework itself.
After wrestling with the dilemma for a bit, the following workable local design emerged out of the learning acquired via several wretched attempts to make the original design work. Of course, I had to throw away a bunch of previously written skeletal product and test code, but that’s life. Now I’m back on track and moving forward again. W00t!
Assume we have a valuable, revenue-critical software system in operation. The figure below shows one nice and tidy, powerpoint-worthy way to model the system: as a static, enumerated set of executables and libraries.
Given the model above, we can express the size of the system as:
Now, say we run a tool on the code base and it spits out a system size of 200K “somethings” (lines of code, function points, loops, branches, etc).
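For illustration only, here is a sketch of that size computation using a made-up part inventory that happens to total 200K “somethings” (part names and numbers are invented):

```python
# Hypothetical inventory: each executable/library mapped to a size measured
# in "somethings" (lines of code, function points, etc.). Numbers are made up.
parts = {
    "Exe-1": 60_000,
    "Exe-2": 40_000,
    "Lib-1": 55_000,
    "Lib-2": 45_000,
}

# System size under the enumerations-only model: just the sum of the parts.
system_size = sum(parts.values())
print(system_size)  # 200000
```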
What does this 200K “somethings” number tell us about the non-functional qualities of the system? Absolutely nothing. All we know at the moment is that the system is operating and supporting the critical, revenue-generating processes of our borg. Even relatively speaking, when we compare our 200K “somethings” system against a 100K “somethings” system, it still doesn’t tell us squat about the qualities of our system.
So, what’s missing here? One missing link is that our nice and tidy enumerations view and equation don’t tell us nuttin’ about what Russ Ackoff calls “the product of the interactions of the parts” (e.g., Lib-to-Lib, Exe-to-Exe). To remedy the situation, let’s update our nice and tidy model with the part-to-part associations that enable our heap of individual parts to behave as a system:
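A minimal sketch of the updated model: the same four parts, now joined by an edge set standing in for the part-to-part associations. The edges are invented for illustration:

```python
# The enumeration alone: a bag of parts (names invented).
parts = ["Exe-1", "Exe-2", "Lib-1", "Lib-2"]

# The part-to-part associations that make the heap behave as a system.
associations = {
    ("Exe-1", "Lib-1"), ("Exe-1", "Lib-2"),   # Exe-to-Lib couplings
    ("Exe-2", "Lib-2"),
    ("Lib-1", "Lib-2"),                       # a Lib-to-Lib coupling
}

def neighbors(part):
    """Parts directly coupled to `part` via some association."""
    return sorted({b for a, b in associations if a == part} |
                  {a for a, b in associations if b == part})

# The part list says nothing about coupling; the edge set does.
print(neighbors("Lib-2"))  # ['Exe-1', 'Exe-2', 'Lib-1']
```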
Our updated model is still nice and tidy, just not as nice and tidy as before. But wait! We are still missing something important. We’re missing a visual cue of our system’s interactions with “other” systems external to us; you know, “them”. The “them” we blame when something goes wrong during operation of the supra-system containing both us and them.
Our updated model is once again still nice and tidy, but just not as nice and tidy as before. Next, let’s take a single snapshot of the flow of (red) “blood” in our system at a given point of time:
Finally, if we superimpose the astronomic number of all possible blood-flow snapshots onto one diagram, we get:
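A back-of-the-envelope way to see why the snapshot count blows up: if each of E associations can independently be carrying “blood” or sitting idle in a given snapshot, there are 2^E possible snapshots. The edge counts below are arbitrary:

```python
# Count of possible flow snapshots when each of E edges is independently
# active or idle: 2**E. Edge counts chosen arbitrarily for illustration.
for edges in (4, 10, 50):
    print(edges, 2 ** edges)
# 4 edges -> 16 snapshots; 50 edges -> over a quadrillion.
```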
D’oh! We’re not so nice and tidy anymore. Time for some heroic debugging on the bleeding mess. Is there a doctor in da house?