
Archive for the ‘technical’ Category

Gilbitecture

April 15, 2014

Plucked from his deliciously titled “Real Architecture: Engineering or Pompous Bullshit?” slide deck, I give you Tom Gilb’s personal principles of software architecture engineering:

Gilbertecture

Tom’s proactive approach seems like a far cry from the reactive approaches of the “emergent architecture” and TDA (Test Driven Architecture) communities, doesn’t it?

OMG! Tom’s list actually uses the words “engineering” and “the architect”. Maybe that’s why I have always appreciated his work so much. :)

Daunting Challenges

April 13, 2014

Fresh from Tom Gilb’s “Advanced Agile Practices” presentation, I give you Dave Rico’s 14 pitfalls of agile methods:

Agile Pitfalls

If you look closely at the list, the entries don’t just apply to attempts at agilization. They are daunting challenges for any aspiring corpo change agent who wishes to make a sweeping change to “the way we develop products”.

Daunting Challenges

Don’t Be Fooled

April 11, 2014

Check out the hypothetical agile burndown and EVM (Earned Value Management) charts below. Like in the “real” world, the example project (or sprint, if you prefer) ended up being underestimated. The shortfall is indicated by the dotted line on the right.

uninverted

When we literally flip the agile burndown chart vertically, we get this:

inverted

The moral of the story is: “Don’t be fooled by the agilista herd; an agile burndown chart is nothing but an inverted version of the despised EVM chart.”
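If the equivalence isn’t obvious, here is a minimal sketch of the arithmetic behind it (in C, with made-up numbers): at every point in time, earned value is simply the planned total minus the remaining work, so the EVM curve is just the burndown curve flipped upside down.

    /* burndown_vs_evm.c -- hypothetical numbers, not from any real project */
    #include <stdio.h>

    #define PERIODS 6

    int main(void)
    {
        double total_work = 100.0;  /* planned scope, in story points or hours */
        double remaining[PERIODS] = { 100.0, 85.0, 70.0, 55.0, 45.0, 35.0 };

        printf("period  remaining  earned_value\n");
        for (int i = 0; i < PERIODS; ++i) {
            /* earned value is the burndown sample mirrored about the total */
            double earned = total_work - remaining[i];
            printf("%6d  %9.1f  %12.1f\n", i, remaining[i], earned);
        }
        return 0;
    }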

Regardless Of Agile Or Waterfall


The figure below depicts an architectural view of a real-time embedded sub-system that a team of eight others and I built and delivered 10 (freakin!) years ago. At revision number 9, the diagram ended up being the final “as-built” model of the 20,000+ lines-of-code system. Since the software was written in C and, thus, not object-oriented, I chose not to use UML to capture the design at the time. Doing so would have introduced an impedance mismatch, a large intellectual gap, between the procedural C code base and the OO design artifacts. I used structured analysis and functional decomposition to concoct the design, and I employed “pseudo” Data Flow Diagrams (DFDs) instead.

At the beginning of this “waterfall” project, I created revision 0 of the diagram as the first “build-to” snapshot. Of course, as learning accrued and the system evolved throughout development, I diligently kept the diagram updated and synchronized with the code base in true PAYGO (pay-as-you-go) fashion.

MP_Top_Level_SW_Design

As you can see from the picture, the system of 30+ asynchronous application tasks ran under the tutelage of the industrial-strength VxWorks Real Time Operating System (RTOS). Asynchronous inter-task communication was performed via message passing through a series of lock-protected queues. The embedded physical board was powered by a Motorola PowerPC CPU (remember those dinosaurs?). The board housed a myriad of serial and Ethernet interface ports for communication with other external sub-systems.
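To make the “message passing through lock-protected queues” idea concrete, here is a minimal sketch of a producer task and a consumer task exchanging messages via the stock VxWorks msgQLib/taskLib calls. It is an assumption-laden illustration, not a snippet from the actual system: the task names, message layout, queue depth, priorities, and stack sizes are all invented, and the real sub-system may well have used home-grown lock-protected queues rather than msgQLib.

    /* Hypothetical VxWorks inter-task messaging sketch. Names, sizes, and      */
    /* priorities are invented for illustration; not code from the real system. */
    #include <vxWorks.h>
    #include <msgQLib.h>
    #include <taskLib.h>
    #include <sysLib.h>
    #include <stdio.h>

    typedef struct
    {
        int sourceId;   /* which task sent the message */
        int payload;    /* application-specific data   */
    } AppMsg;

    static MSG_Q_ID appMsgQ;  /* queue shared by the two tasks */

    static void producerTask(void)
    {
        AppMsg msg = { 1, 42 };

        for (;;) {
            /* Block if the queue is full; send at normal priority */
            msgQSend(appMsgQ, (char *)&msg, sizeof(msg), WAIT_FOREVER, MSG_PRI_NORMAL);
            taskDelay(sysClkRateGet());  /* roughly one message per second */
        }
    }

    static void consumerTask(void)
    {
        AppMsg msg;

        for (;;) {
            /* Block until a message arrives, then process it */
            if (msgQReceive(appMsgQ, (char *)&msg, sizeof(msg), WAIT_FOREVER) != ERROR) {
                printf("got payload %d from source %d\n", msg.payload, msg.sourceId);
            }
        }
    }

    void appStart(void)
    {
        /* FIFO queue holding up to 32 messages */
        appMsgQ = msgQCreate(32, sizeof(AppMsg), MSG_Q_FIFO);

        taskSpawn("tProducer", 100, 0, 8192, (FUNCPTR)producerTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        taskSpawn("tConsumer", 100, 0, 8192, (FUNCPTR)consumerTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }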

The above diagram was not the sole artifact that I used to record the design. It was simply the highest-level, catch-all overview of the system. I also developed a complementary set of lower-level functional diagrams, each of which captured a sliced view of an end-to-end strand of critical functionality. One of these diagrams, the “Uplink/Downlink Processing View”, is shown below. Note that the final “as-built” diagram settled out as revision number 5.

UpDownLink

The purpose of this post was simply to give you a taste of how I typically design and evolve a non-trivial software-intensive system that I can’t entirely keep in my head. I use the same PAYGO process for all of my efforts regardless of whether the project is being managed as an agile or waterfall endeavor. To me, project process is way over-emphasized and overblown. “Business Value” creation ultimately distills down to architecture, design, coding, and testing at all levels of abstraction.

Where To Start?

April 6, 2014

The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. — Edsger Dijkstra

With Edsger’s delicious quote in mind, let’s explore seven levels of abstraction that can be used to reason about big, distributed systems:

seven-levels

At level zero, we have the finest grained, most concrete unit of design, a single puny line of “source code“. At level seven, we have the coarsest grained, most abstract unit of design, the mysterious and scary “system” level. A line of code is simple to reason about, but a “system” is not. Just when you think you understand what a system does, BAM! It exhibits some weird, perhaps dangerous, behavior that is counter-intuitive and totally unexpected – especially when humans are the key processing “nodes” in the beast.

Here are some questions to ponder regarding the seven-level stack: Given that you’re hired to build a big, distributed system, at what level would you start your development effort? Would you start immediately coding up classes using the much revered TDD “best practice” and let all the upper levels of abstraction serendipitously “emerge”? Relatively speaking, how much time “up front” should you spend specifying, designing, recording, and communicating the structures and behaviors of the top 3 levels of the stack? Again, relatively speaking, how much time should be allocated to the unit, integration, functional, and system levels of testing?

Trivial Trivia

March 26, 2014

I was going through some old project stuff and stumbled upon the chart below. I developed it back when I was the software lead of a nine-person sub-team on an embedded system product development effort:

GBT MP Builds

Putting all those indecipherable acronyms adorning the chart aside, note that the project was performed in 2004 – a mere 3 years after the famous “Agile Manifesto” was hatched. I can’t remember if I knew about (or read) the manifesto at the time, but I do know that Tom Gilb’s “Evo” and Barry Boehm’s “Spiral” processes had radically changed my worldview of software development. Specifically, the (now-obvious) concept of incremental development and delivery rang my bell as the best way to mitigate risk on challenging, software-intensive projects.

As the chart illustrates, the actual hand-off of each of the seven builds (to the system integration test team) was pretty much dead nuts right on target. Despite the fact that the project front end (requirements definition and software design) was managed as a “waterfall” endeavor, the targets were met. Thus, I’m led to believe the following trivial trivia:

Not all agile projects succeed and not all waterfall projects fail.

Dueling Quagmires

March 21, 2014

To BD00, the agile movement, even though it is a refreshing backlash against the “Process Models And Standards Quagmire” (PMASQ) perpetrated by a well-meaning but clueless mix of government and academic borgs who don’t have to create and build anything, has spawned its own “Agile Process Frameworks And Practices Quagmire” (APFAPQ). Just as the PMASQ community has ignited a cottage industry of expensive consultants, certifiers, assessors, trainers, and auditors, the APFAPQ movement has jump-started an equivalent community of expensive consultants, coaches, trainers, and certifiers.

Dueling Quags

Government Governance

March 19, 2014

The figure below highlights one problem with government “governance” of big software systems development. Sure, it’s dated, but it drives home the point that there’s a standards quagmire out there, no?

Stds Quagmire

Imagine that you’re a government contractor and, for every system development project you “win”, you’re required to secure “approval” from a different subset of authorities in a quagmire standards “system” like the one above. Just think of the overhead cost needed to keep abreast of the applicable standards, figure out which ones apply to your product, and comply with them. Also think of the cost to periodically get your company and/or its products assessed and/or certified. If you ever wondered why the government pays $1000 for a toilet seat, look no further.

I look at this random, fragmented standards diagram as a paranoid, cover-your-ass strategy that government agencies can (and do) whip out when big systems programs go awry: “The reason this program is in trouble is because standards XXX and YYY were not followed“. As if meeting a set of standards guarantees robust, reliable, high-performing systems. What a waste. But hey, it’s other people’s money (yours and mine), so no problemo.

Not That Different, No?

March 7, 2014

Check out this slide I plucked from a pitch that will remain unnamed:

Agile Vs WF

Notice the note under the waterfall diagram. Now, let’s look at the original, “unadapted” version and accompanying quote from Winston W. Royce’s classic 1970 paper:

Seq WF

Notice that Mr. Royce clearly noted in his paper that the sequential, never-look-back, waterfall process is a stone cold loser. Next, let’s look at another diagram from Mr. Royce’s paper; one that no fragilista ever mentions or shows:

Iterative WF

OMG! An iterative waterfall with feedback loops? WTF!

Finally, let’s look at BD00’s syntegrated version of the agile (lower) half of our consultant’s diagram and the iterative waterfall diagram from Mr. Royce’s paper:

Agile WF

Comparing the agile and “chunked”, iterative waterfall models shows that, taken in the right context, they’re not that different… no?

Variable Sized MWs

February 25, 2014

Rewritten in “old school” terminology, the five Scrum process events can be expressed as follows:

  1. Sprint Planning = Requirements definition and capture
  2. Sprint = Requirements analysis, design, coding, unit testing, integration testing, code review
  3. Daily Stand Up = Daily status meeting
  4. Sprint Review = Post-mortem
  5. Sprint Retrospective = Continuous process improvement

So, someone with an intentionally warped mind like BD00 may interpret a series of Scrum sprints as nothing more than a series of camouflaged Mini-Waterfalls (MW).

Sprint MiniW

But ya know what? Executing a project as a series of MWs may be a good thing – as long as an arbitrary, fixed-size, time-box is not imposed on the team. After all, since everything else is allowed to dynamically change during a Scrum project, why not the size of the Sprint too?

Var Size MiniW

Instead of estimating what features can be done in the next 30 days, why not simply estimate how many days will be needed to complete the set of features that marks the transition into the next MW? (A toy sizing sketch follows the list below.) If, during the MW, it is learned that the goal won’t be achieved, then in addition to cancelling the MW outright, two other options can be contemplated:

  1. Extend the length of the MW
  2. Postpone the completion of one or more of the features currently being worked on
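
As a toy illustration of sizing the time-box to the features rather than the features to the time-box, here is a minimal sketch; the feature day-estimates and the padding factor are invented, and real estimation would obviously be messier than summing a few numbers:

    /* mw_sizing.c -- toy mini-waterfall sizing; every number here is invented */
    #include <stdio.h>

    int main(void)
    {
        /* Estimated effort, in team-days, for the features that define the next MW */
        double featureDays[] = { 6.0, 3.5, 9.0, 4.0 };
        int numFeatures = sizeof(featureDays) / sizeof(featureDays[0]);
        double padding = 1.25;  /* schedule reserve for the inevitable surprises */
        double total = 0.0;

        for (int i = 0; i < numFeatures; ++i) {
            total += featureDays[i];
        }

        printf("Raw estimate   : %.1f team-days\n", total);
        printf("Padded estimate: %.1f team-days (this becomes the MW length)\n",
               total * padding);
        return 0;
    }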