Author Archive

Importance Of Opinion

April 18, 2014

Regardless of whether a project is managed as an agile or a traditional endeavor, it is well known that the execution team acquires new knowledge as the project lurches forward. It is equally well known that individual team members learn and formulate opinions that may be at odds with each other.

In spite of the "we're all equal" scrum mantra, some individual opinions will always be "more important" than others in a hierarchy… because that's what a social hierarchy does (POSIWID: the Purpose Of a System Is What It Does). The taller the hierarchy, the larger the gap of importance between opinions. And the larger that gap, the smaller the chance that a diverse subset of newly acquired knowledge will be applied to future project activities.

The figure below shows two concepts of "Importance Of Opinion". On the left, we have the Scrum ideal: we're all one, and all opinions carry the same weight. On the right, we have the reality. The opinions within the pyramid of elite titles strongly influence/skew/suppress the PO's opinions, the PO does the same to the SM, and the SM does the same to the group of DEVs. Even within the so-called flat structure of the DEVs, all opinions are not created equal.

[Figure: Theory And Reality]

Categories: management

Context Sensitive Keywords

April 17, 2014

If maintaining backward compatibility is important, then introducing a new keyword into a programming language is always a serious affair. A new keyword that is already in use in millions of lines of existing code as an identifier (class name, namespace name, variable name, etc.) can break a lot of product code and discourage people from upgrading their compilers and using the new language/library features.

Unlike in other languages, backward compatibility is a "feature" (not a "meh") in C++. Thus, any proposed introduction of a new keyword is highly scrutinized by the standards committee when core language improvements are considered. This is the main reason why the verbosity-reducing "auto" keyword wasn't added until the C++11 standard was hatched.

Even though Bjarne Stroustrup designed/implemented "auto" in C++ over 30 years ago to provide convenient compiler type deduction, the priority of backward compatibility with C caused it to be sh*t-canned for decades. However, since (hopefully) nobody uses it in C code anymore ("auto" is the default storage class for all C local variables – so why be redundant?), the time was deemed right to include it in the C++11 release. Of course, some really, really old C/C++ code will get broken when compiled under a new C++11 compiler, but the breakage won't be as dire as it would have been if "auto" had made it into the earlier C++98 or C++03 standards.
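To see why the change was relatively safe to make, here is a minimal sketch (my own example, not from the standard) contrasting the old meaning of "auto" with the new one:

#include <map>
#include <string>

int main() {
  // C89/C++03: "auto" was a (redundant) storage-class specifier.
  // auto int count = 0;   // legal pre-C++11, ill-formed in C++11

  // C++11: "auto" now means compiler type deduction.
  std::map<std::string, int> ages;
  ages["alice"] = 42;

  std::map<std::string, int>::iterator it1 = ages.begin(); // C++03 verbosity
  auto it2 = ages.begin();                                 // C++11: same type, deduced

  return 0;
}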

With the seriousness of keyword introduction in mind, one might wonder why the “override” and “final” keywords were added to C++11. Surely, they’re so common that millions of lines of C/C++ legacy code will get broken. D’oh!

But wait! To severely limit code breakage, the "override" and "final" keywords are defined to be context sensitive. They are treated as keywords only when they appear in purposefully narrow, position-specific spots; everywhere else, C++11 compilers treat them as ordinary identifiers.

The "final" keyword (used to explicitly prevent further inheritance and/or further overriding of a virtual function) can only qualify a class definition or a virtual function declaration. The closely related "override" keyword (used to catch careless mistakes when overriding a virtual function) can only qualify a virtual function declaration. The figures below show how "final" and "override" clarify programmer intent and prevent accidental mistakes.

[Figure: final keyword]

[Figure: override keyword]

[Figure: override and final used together]
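In case the figures don't render, here is a rough textual sketch of the usage they depict (the class and function names are my own):

struct Base {
  virtual void process() {}
  virtual void render() {}
};

struct Derived final : Base {    // "final": nothing may inherit from Derived
  void process() override {}     // "override": compiler verifies this really overrides
  // void proccess() override {} // error: typo overrides nothing in Base -- caught!
};

// struct MoreDerived : Derived {}; // error: Derived is marked "final"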

Because of the context-sensitive constraints imposed on the "final" and "override" keywords, this (crappy) code compiles fine:

[Figure: final and override used as ordinary identifiers]
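A minimal stand-in for that kind of code (my reconstruction, not the figure's exact contents):

// Perfectly legal C++11, because "final" and "override" are treated as
// keywords only in class-head and virtual-function positions:
int main() {
  int final = 0;             // "final" as a plain variable name
  int override = final + 1;  // "override" as a plain variable name
  return override;
}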

The point of this last code fragment is to show that the introduction of context-sensitive keywords is highly unlikely to break existing code. And that's a good thing.

Gilbitecture

April 15, 2014

Plucked from his deliciously titled "Real Architecture: Engineering or Pompous Bullshit?" slide deck, here are Tom Gilb's personal principles of software architecture engineering:

[Figure: Gilbitecture (Tom Gilb's principles of software architecture engineering)]

Tom’s proactive approach seems like a far cry from the reactive approaches of the “emergent architecture” and TDA (Test Driven Architecture) communities, doesn’t it?

OMG! Tom’s list actually uses the words “engineering” and “the architect“. Maybe that’s why I have always appreciated his work so much. :)

Daunting Challenges

April 13, 2014

Fresh from Tom Gilb’s “Advanced Agile Practices” presentation, I give you Dave Rico’s 14 pitfalls of agile methods:

[Figure: Agile Pitfalls (Dave Rico's 14 pitfalls of agile methods)]

If you look closely at the list, the entries don't just apply to attempts at agilization. They are daunting challenges for any aspiring corpo change agent who wishes to make a sweeping change to "the way we develop products".

[Figure: Daunting Challenges]

Don’t Be Fooled

April 11, 2014

Check out the hypothetical agile burndown and EVM (Earned Value Management) charts below. As in the "real" world, the example project (or sprint, if you prefer) ended up being underestimated. The shortfall is indicated by the dotted line on the right.

[Figure: the burndown and EVM charts, uninverted]

When we flip the agile burndown chart vertically, we get this:

[Figure: the burndown chart, inverted]

The moral of the story is: "Don't be fooled by the agilista herd; an agile burndown chart is nothing but an inverted version of the despised EVM chart."
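In symbols (my notation, not the charts'): if W is the total planned work and R(t) is the remaining work that a burndown chart plots over time, then the earned-value curve is just the vertical flip of the burndown curve:

EV(t) = W - R(t)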

Regardless Of Agile Or Waterfall


The figure below depicts an architectural view of a real-time embedded sub-system that I, along with a team of 8 others, built and delivered 10 (freakin'!) years ago. At revision number 9, the diagram ended up being the final "as-built" model of the 20,000+ lines-of-code system. Since the software was written in C and, thus, not object-oriented, I chose not to use UML to capture the design at the time. Doing so would have introduced an impedance mismatch and a large intellectual gap between the procedural C code base and the OO design artifacts. Instead, I used structured analysis and functional decomposition to concoct the design, and I employed "pseudo" Data Flow Diagrams (DFDs).

At the beginning of this "waterfall" project, I created revision 0 of the diagram as the first "build-to" snapshot. Of course, as learning accrued and the system evolved throughout development, I diligently kept the diagram updated and synchronized with the code base in true PAYGO (pay-as-you-go) fashion.

[Figure: MP_Top_Level_SW_Design (the final "as-built" top-level design diagram)]

As you can see from the picture, the system of 30+ asynchronous application tasks ran under the tutelage of the industrial-strength VxWorks Real-Time Operating System (RTOS). Asynchronous inter-task communication was performed via message passing through a series of lock-protected queues. The embedded physical board was powered by a Motorola PowerPC CPU (remember those dinosaurs?). The board housed a myriad of serial and Ethernet interface ports for communication with other external sub-systems.
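For flavor, here is a rough sketch of that messaging style (not the actual project code; the task names and message layout are invented, and the pattern assumes VxWorks' standard msgQLib and taskLib APIs):

#include <msgQLib.h>   /* VxWorks message queues */
#include <taskLib.h>   /* VxWorks task spawning  */

typedef struct { int sensorId; float value; } SensorMsg;

static MSG_Q_ID sensorQ;

/* Producer task: posts readings onto the queue. */
static void producerTask(void) {
  SensorMsg msg = { 1, 3.14f };
  msgQSend(sensorQ, (char *)&msg, sizeof(msg), WAIT_FOREVER, MSG_PRI_NORMAL);
}

/* Consumer task: blocks until a message arrives, then processes it. */
static void consumerTask(void) {
  SensorMsg msg;
  while (msgQReceive(sensorQ, (char *)&msg, sizeof(msg), WAIT_FOREVER) != ERROR) {
    /* ... process the message ... */
  }
}

void initMessaging(void) {
  /* 64 messages deep; the queue itself serializes concurrent access. */
  sensorQ = msgQCreate(64, sizeof(SensorMsg), MSG_Q_FIFO);
  taskSpawn("tProducer", 100, 0, 4096, (FUNCPTR)producerTask,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
  taskSpawn("tConsumer", 100, 0, 4096, (FUNCPTR)consumerTask,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}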

The above diagram was not the sole artifact that I used to record the design. It was simply the highest-level, catch-all overview of the system. I also developed a complementary set of lower-level functional diagrams, each of which captured a sliced view of an end-to-end strand of critical functionality. One of these diagrams, the "Uplink/Downlink Processing View", is shown below. Note that the final "as-built" diagram settled out at revision number 5.

[Figure: UpDownLink (the "Uplink/Downlink Processing View")]

The purpose of this post was simply to give you a taste of how I typically design and evolve a non-trivial software-intensive system that I can't entirely keep in my head. I use the same PAYGO process for all of my efforts, regardless of whether the project is being managed as an agile or waterfall endeavor. To me, project process is way over-emphasized and overblown. "Business value" creation ultimately distills down to architecture, design, coding, and testing at all levels of abstraction.

Where To Start?

April 6, 2014

The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. — Edsger Dijkstra

With Edsger's delicious quote in mind, let's explore seven levels of abstraction that can be used to reason about big, distributed systems:

[Figure: the seven-level abstraction stack]

At level zero, we have the finest-grained, most concrete unit of design: a single puny line of "source code". At level seven, we have the coarsest-grained, most abstract unit of design: the mysterious and scary "system" level. A line of code is simple to reason about, but a "system" is not. Just when you think you understand what a system does, BAM! It exhibits some weird, perhaps dangerous, behavior that is counter-intuitive and totally unexpected – especially when humans are the key processing "nodes" in the beast.

Here are some questions to ponder regarding the seven-level stack: Given that you're hired to build a big, distributed system, at what level would you start your development effort? Would you immediately start coding up classes using the much-revered TDD "best practice" and let all the upper levels of abstraction serendipitously "emerge"? Relatively speaking, how much time "up front" should you spend specifying, designing, recording, and communicating the structures and behaviors of the top three levels of the stack? Again, relatively speaking, how much time should be allocated to the unit, integration, functional, and system levels of testing?

Game Day!


GO HUSKIES!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

[Figure: game day]

Categories: miscellaneous

Looks Weird, But Don’t Be Afraid To Do It

April 3, 2014

Modern C++ (C++11 and onward) eliminates many temporaries outright by supporting move semantics, which allows transferring the innards of one object directly to another object without actually performing a deep copy at all. Even better, move semantics are turned on automatically in common cases like pass-by-value and return-by-value, without the code having to do anything special at all. This gives you the convenience and code clarity of using value types, with the performance of reference types. – C++ FAQ

Before C++11, you’d have to be tripping on acid to allow a C++03 function definition like this to survive a code review:


#include <vector>

// Return a big vector by value. In C++03 this risked an expensive deep
// copy; in C++11 the local vector is moved out (or the copy is elided
// entirely), so returning by value is cheap.
std::vector<int> moveBigThingsOut() {

  const int BIGNUM(1000000);

  std::vector<int> vints;
  vints.reserve(BIGNUM); // avoid repeated reallocations

  for (int i = 0; i < BIGNUM; ++i) {
    vints.push_back(i);
  }

  return vints;
}

Because of: 1) the FAQ excerpt quoted at the top of the page, 2) the fact that all C++11 STL containers are move-enabled (each has a move constructor and a move assignment member function implementation), and 3) the fact that the compiler knows the local vints object is about to be destructed when the function returns, don't be afraid to write code like that in C++11. Actually, please do so. It will stun reviewers who don't yet know C++11 and may trigger some fun drama while you try to explain how it works. However, unless you move-enable them, don't you dare write code like that for your own handle classes. Stick with the old C++03 pass-by-reference idiom:


#include <vector>

// The C++03-era idiom: have the caller pass the container in by
// reference and fill it in place, avoiding any copy on the way out.
void passBigThingsOut(std::vector<int>& vints) {

  const int BIGNUM(1000000);

  vints.clear();
  vints.reserve(BIGNUM); // avoid repeated reallocations

  for (int i = 0; i < BIGNUM; ++i) {
    vints.push_back(i);
  }
}

Which C++11 user code below do you think is cleaner?


// This one-liner?
auto vints = moveBigThingsOut();

// Or this two-liner?
std::vector<int> vints{};
passBigThingsOut(vints);
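And if you do want one of your own handle classes to be safely returnable by value, "move-enabling" it boils down to writing a move constructor and a move assignment operator. Here's a minimal sketch (the Buffer class is invented purely for illustration):

#include <cstddef>

// A hypothetical handle class that owns a raw resource.
class Buffer {
public:
  explicit Buffer(std::size_t n) : size_(n), data_(new int[n]) {}
  ~Buffer() { delete[] data_; }

  // Move ctor: steal the innards, leave the source empty.
  Buffer(Buffer&& other) noexcept : size_(other.size_), data_(other.data_) {
    other.size_ = 0;
    other.data_ = nullptr;
  }

  // Move assignment: release our resource, then steal the source's.
  Buffer& operator=(Buffer&& other) noexcept {
    if (this != &other) {
      delete[] data_;
      size_ = other.size_;
      data_ = other.data_;
      other.size_ = 0;
      other.data_ = nullptr;
    }
    return *this;
  }

  // Non-copyable for brevity; a real handle class might deep-copy instead.
  Buffer(const Buffer&) = delete;
  Buffer& operator=(const Buffer&) = delete;

private:
  std::size_t size_;
  int* data_;
};

// Now returning by value is cheap: the compiler moves (or elides) it.
Buffer makeBigBuffer() {
  Buffer b(1000000);
  return b;
}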

Formal Waterfall Events

March 31, 2014

The customers of all the big government-financed sensor system programs I've ever worked on have required the aforementioned waterfall dog-and-pony shows as part of their well-entrenched acquisition process. Even prior to commencing a waterfall death march, as part of the pre-win bidding process, customers also (still) require contractors to provide detailed schedule and cost commitments in their proposal submissions – right down to the CSCI (Computer Software Configuration Item) level of granularity.

If you think it's tough to get your internal executive customers to wholeheartedly embrace an "agile adoption" or "no estimates" initiative, try to wrap your mind around the cosmic difficulty of doing the same to a large, fragmented, distributed-authority external acquisition machine whose cogs are fine-tuned to cover their asses, defend their turf, and doggedly fight to keep in place the extant process that justifies their worth. Good luck with that.

[Figure: Go Agile]
