The Last Retrospective

April 20, 2014

In an ongoing agile endeavor, the practice for eliciting and applying freshly learned knowledge going forward is the periodic “retrospective” (aka the periodic post-mortem to the “traditional” old fogeys). Theoretically, retrospectives are temporary waypoints where individuals stop, step back, cerebrally inspect what they’ve accomplished and how they’ve accomplished it, share their learning experiences, suggest new process/product improvements, and evaluate previously implemented improvements.

As the figure below depicts, the fraction of newly acquired knowledge applied going forward is a function of group culture. In macho, command and control hierarchies like culture B, the application of lessons learned is suppressed relative to more flexible cultures like A due to the hierarchical importance of opinions.

Gained Vs Applied

Regardless of culture type, during schedule-challenged projects with a fixed, do-or-don’t-get-paid deadline (yes, those projects do indeed still exist), this may happen:

Last Retrospective

When the elites upstairs magically determine that a panic point has appeared (sometimes seemingly out of nowhere), retrospectives get jettisoned, the daily standup morphs into the daily inquisition, corner-cutting “practices” become best practices, and the application of newly acquired knowledge stops cold. Humans being humans, learning still naturally occurs and new knowledge is accrued. However, it is not likely the type of knowledge that will help on future projects.

Importance Of Opinion

April 18, 2014

Regardless of whether a project is managed as an agile or traditional endeavor, it is well known that the execution team learns and acquires new knowledge as the project lurches forward. It is also well known that individual team members learn and formulate opinions that may be at odds with each other.

In spite of the “we’re all equal” scrum mantra, some individual opinions will always be “more important” than others in a hierarchy… because that’s what a social hierarchy does (POSIWID: the Purpose Of a System Is What It Does). The taller the hierarchy, the larger the gap of importance between opinions. And the larger the gap of importance between opinions, the smaller the chance that a diverse subset of newly acquired knowledge will be applied to future project activities.

The figure below shows two concepts of “Importance Of Opinion”. On the left, we have the Scrum ideal – we’re all one and all opinions carry the same weight. On the right, we have the reality. The opinions within the pyramid of elite titles strongly influence/skew/suppress the product owner’s (PO’s) opinions, the PO does the same to the scrum master (SM), and the SM does the same to the group of developers (DEVs). Even within the so-called flat structure of the DEVs, all opinions are not created equal.

Theory And Reality


Context Sensitive Keywords

April 17, 2014

If maintaining backward compatibility is important, then introducing a new keyword into a programming language is always a serious affair. Introducing a new keyword that is already used as an identifier (class name, namespace name, variable name, etc.) in millions of lines of existing code can break a lot of product code and discourage people from upgrading their compilers and using the new language/library features.

Unlike in many other languages, backward compatibility is a “feature” (not a “meh”) in C++. Thus, any proposed introduction of a new keyword is highly scrutinized by the standards committee when core language improvements are considered. This is the main reason why the verbosity-reducing “auto” keyword wasn’t added until the C++11 standard was hatched.

Even though Bjarne Stroustrup designed/implemented “auto” in C++ over 30 years ago to provide for convenient compiler type deduction, the priority for backward compatibility with C caused it to be sh*t-canned for decades. However, since (hopefully) nobody uses it in C code anymore (“auto” is the default storage class for all C local variables – so why be redundant?), the time was deemed right to include it in the C++11 release. Of course, some really, really old C/C++ code will get broken when compiled under a new C++11 compiler, but the breakage won’t be as dire as if “auto” had made it into the earlier C++98 or C++03 standards.
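For the flavor of it, here is a minimal sketch (the variable and container names are made up) contrasting the old, redundant C meaning of “auto” with its type-deducing C++11 meaning:

#include <map>
#include <string>
#include <vector>

int main()
{
    // Old C meaning: "auto" is merely a (redundant) storage-class specifier.
    // auto int counter = 0;   // legal C89, but rejected by a C++11 compiler

    // New C++11 meaning: let the compiler deduce the type from the initializer.
    auto counter = 0;     // int
    auto ratio = 3.14;    // double

    std::map<std::string, std::vector<int>> table;
    auto it = table.begin();  // beats spelling out the full iterator type

    (void)counter; (void)ratio; (void)it;  // suppress unused-variable warnings
    return 0;
}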

With the seriousness of keyword introduction in mind, one might wonder why the “override” and “final” keywords were added to C++11. Surely, they’re so common that millions of lines of C/C++ legacy code will get broken. D’oh!

But wait! To severely limit code breakage, the “override” and “final” keywords are defined to be context sensitive. They carry their special meaning only when they appear in a purposefully narrow, position-specific context; everywhere else, C++11 compilers treat them as plain, ordinary identifiers.

The “final” keyword (used to explicitly prevent further inheritance and/or virtual function overriding) can only be used to qualify a class definition or a virtual function declaration. The closely related “override” keyword (used to prevent careless mistakes when declaring a virtual function) can only be used to qualify a virtual function. The figures below show how “final” and “override” clarify programmer intention and prevent accidental mistakes.

final keyword

override keyword

override final
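For the flavor of what those figures show, here is a minimal sketch (the class and function names are made up) of the two keywords used in their intended, position-specific roles:

struct Base
{
    virtual void render() {}
    virtual void resize(int, int) {}
    virtual ~Base() = default;
};

struct Widget final : Base   // "final": nothing may derive from Widget
{
    void render() override {}          // "override": compiler verifies that this
                                       // really overrides a Base virtual function
    void resize(int, int) override {}  // ditto
    // void resize(long, long) override {}  // error: matches no Base virtual,
                                            // so the compiler catches the typo
};

// struct FancyWidget : Widget {};  // error: Widget is declared "final"

int main()
{
    Widget w;
    Base& b = w;
    b.render();         // dispatches to Widget::render
    b.resize(640, 480);
    return 0;
}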

Because of the context-sensitive constraints imposed on the “final” and “override” keywords, this (crappy) code compiles fine:

final and override
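A minimal sketch of that sort of (crappy, but legal) code: since neither word ever appears in a keyword position, a C++11 compiler happily treats both as ordinary identifiers.

#include <iostream>

// "final" and "override" are keywords only in specific positions,
// so they remain perfectly legal (if ill-advised) names elsewhere.
struct final            // a class named "final"
{
    int override;       // a data member named "override"
};

int main()
{
    final f;            // an object of type "final"
    f.override = 42;

    int final = 7;      // a local variable named "final", shadowing the type

    std::cout << f.override + final << '\n';  // prints 49
    return 0;
}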

The point of this last code fragment is to show that the introduction of context-sensitive keywords is highly unlikely to break existing code. And that’s a good thing.

Gilbitecture

April 15, 2014

Plucked from his deliciously titled “Real Architecture: Engineering or Pompous Bullshit?” slide deck, I give you Tom Gilb‘s personal principles of software architecture engineering:

Gilbertecture

Tom’s proactive approach seems like a far cry from the reactive approaches of the “emergent architecture” and TDA (Test Driven Architecture) communities, doesn’t it?

OMG! Tom’s list actually uses the words “engineering” and “the architect“. Maybe that’s why I have always appreciated his work so much. :)

Daunting Challenges

April 13, 2014

Fresh from Tom Gilb’s “Advanced Agile Practices” presentation, I give you Dave Rico’s 14 pitfalls of agile methods:

Agile Pitfalls

If you look closely at the list, the entries don’t just apply to attempts at agilization. They are daunting challenges for any aspiring corporate change agent who wishes to make a sweeping change to “the way we develop products“.

Daunting Challenges

Don’t Be Fooled

April 11, 2014

Check out the hypothetical agile burndown and EVM (Earned Value Management) charts below. Like in the “real” world, the example project (or sprint, if you prefer) ended up being underestimated. The shortfall is indicated by the dotted line on the right.

uninverted

When we literally flip the agile burndown chart vertically, we get this:

inverted
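To see that with numbers (the figures below are made up), here is a tiny sketch: the “remaining work” series that a burndown chart plots and the cumulative “work completed” series that an EVM chart plots are mirror images around the same planned total.

#include <cstdio>

int main()
{
    const double total = 100.0;  // total planned work (story points, hours, whatever)
    const double completed[] = { 0, 15, 28, 40, 55, 70, 82 };  // cumulative work done

    std::printf("period  remaining (burndown)  completed (EVM-style)\n");
    for (int t = 0; t < 7; ++t)
    {
        const double remaining = total - completed[t];  // the vertical "flip"
        std::printf("%6d  %20.1f  %21.1f\n", t, remaining, completed[t]);
    }
    // Plot "remaining" and you get a burndown; plot "completed" on the same axes
    // and you get the earned-value curve. One is the other, flipped.
    return 0;
}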

The moral of the story is: “Don’t be fooled by the agilista herd; an agile burndown chart is nothing but an inverted version of the despised EVM chart.”

Regardless Of Agile Or Waterfall


The figure below depicts an architectural view of a real-time embedded sub-system that I and a team of 8 others built and delivered 10 (freakin’!) years ago. At revision number 9, the diagram ended up being the final “as-built” model of the 20,000+ lines-of-code system. Since the software was written in C and, thus, was not object-oriented, I chose not to use UML to capture the design at the time. Doing so would have introduced an impedance mismatch and a large intellectual gap of misunderstanding between the procedural C code base and the OO design artifacts. Instead, I used structured analysis and functional decomposition to concoct the design, and I employed “pseudo” Data Flow Diagrams (DFDs) to record it.

At the beginning of this “waterfall” project, I created revision 0 of the diagram as the first “build-to” snapshot. Of course, as learning accrued and the system evolved throughout development, I diligently kept the diagram updated and synchronized with the code base in true PAYGO (pay-as-you-go) fashion.

MP_Top_Level_SW_Design

As you can see from the picture, the system of 30+ asynchronous application tasks ran under the tutelage of the industrial-strength VxWorks Real Time Operating System (RTOS). Asynchronous inter-task communication was performed via message passing through a series of lock-protected queues. The embedded physical board was powered by a Motorola PowerPC CPU (remember those dinosaurs?). The board housed a myriad of serial and Ethernet interface ports for communication with other external sub-systems.
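The original code was C under VxWorks, but the inter-task pattern is simple enough to sketch. Here is a rough, purely illustrative modern C++ rendition of a lock-protected message queue of the kind those tasks communicated through (the message type and all names are invented):

#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Invented message type; the real system passed C structs around.
struct Message
{
    std::uint32_t id;
    std::string payload;
};

// A lock-protected queue: producer tasks post messages, a consumer task
// blocks until work arrives, in the spirit of an RTOS message-queue primitive.
class MessageQueue
{
public:
    void send(Message msg)
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        not_empty_.notify_one();
    }

    Message receive()  // blocks until a message is available
    {
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        Message msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable not_empty_;
    std::queue<Message> queue_;
};

int main()
{
    MessageQueue q;
    std::thread producer([&q] {
        for (std::uint32_t i = 0; i < 3; ++i)
            q.send(Message{ i, "telemetry frame" });
    });
    for (int i = 0; i < 3; ++i)
    {
        Message m = q.receive();
        std::cout << "got message " << m.id << ": " << m.payload << '\n';
    }
    producer.join();
    return 0;
}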

The above diagram was not the sole artifact that I used to record the design. It was simply the highest level, catch-all, overview of the system. I also developed a complementary set of lower level functional diagrams; each of which captured a sliced view of an end-to-end strand of critical functionality. One of these diagrams, the “Uplink/Downlink Processing View“, is shown below. Note that the final “as-built” diagram settled out as revision number 5.

UpDownLink

The purpose of this post was simply to give you a taste of how I typically design and evolve a non-trivial software-intensive system that I can’t entirely keep in my head. I use the same PAYGO process for all of my efforts regardless of whether the project is being managed as an agile or waterfall endeavor. To me, project process is way over-emphasized and overblown. “Business Value” creation ultimately distills down to architecture, design, coding, and testing at all levels of abstraction.
