Archive

Posts Tagged ‘software architecture’

Induce, Deduce, Reflect

August 20, 2014

Induction is the process of synthesizing a generalization from a set of particulars; a mental step up in abstraction from many to one.

[Figure: Induction]


Deduction is the process of decomposing one generalization into a set of particulars; a mental step down in abstraction from one to many.

[Figure: Deduction]

A good personal software design process requires iterative execution of both types of sub-processes, with liberal doses of random reflection thrown into the timeline just to muck things up enough so that you can never fully retrace your steps. It’s pure alchemy!

[Figure: Induce, Deduce, Reflect]
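
In code, induction often shows up as extracting an abstraction from a pile of concrete cases, and deduction as deriving new concrete cases from an abstraction. Here’s a minimal C++ sketch of both moves (the parser classes are invented purely for illustration):

```cpp
#include <iostream>
#include <string>

// Induction: after writing several one-off parsers, we notice the
// commonality and step UP to a single generalization.
class Parser {                          // the synthesized "one"
public:
    virtual ~Parser() = default;
    virtual bool parse(const std::string& input) = 0;
};

// Deduction: given the generalization, we step DOWN and decompose it
// back into particulars that satisfy its contract.
class JsonParser : public Parser {      // one of the "many"
public:
    bool parse(const std::string& input) override {
        return !input.empty() && input.front() == '{';  // toy check only
    }
};

class CsvParser : public Parser {       // another of the "many"
public:
    bool parse(const std::string& input) override {
        return input.find(',') != std::string::npos;    // toy check only
    }
};

int main() {
    JsonParser json;
    CsvParser csv;
    std::cout << json.parse("{ }") << ' ' << csv.parse("a,b") << '\n';  // prints: 1 1
    return 0;
}
```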

Old School, But Still Relevant

August 14, 2014

“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” – Leslie Lamport

I’ve always loved that quote. But that’s only one reason why I was overjoyed when I stumbled upon this article written by Turing Award winner Leslie Lamport: “Why We Should Build Software Like We Build Houses”. The other reason is that what he wrote is old school, but still relevant in many contexts:

Most programmers regard anything that doesn’t generate code to be a waste of time. Thinking doesn’t generate code, and writing code without thinking is a recipe for bad code. Before we start to write any piece of code, we should understand what that code is supposed to do. Understanding requires thinking, and thinking is hard. – Leslie Lamport

I recently modified some code I hadn’t written to add one tiny feature to a program. Doing that required understanding an interface. It took me over a day with a debugger to find out what I needed to know about the interface — something that would have taken five minutes with a spec. To avoid introducing bugs, I had to understand the consequences of every change I made. The absence of specs made that extremely difficult. Not wanting to find and read thousands of lines of code that might be affected, I spent days figuring out how to change as little of the existing code as possible. In the end, it took me over a week to add or modify 180 lines of code. And this was for a very minor change to the program. – Leslie Lamport
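
For flavor, here’s a sketch of the kind of five-minute spec Lamport is lamenting the absence of: a few lines of contract on an interface that would otherwise take a day with a debugger to reverse-engineer. (The interface below is invented for illustration; it’s not from Lamport’s program.)

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical interface spec. The comments ARE the spec: they state the
// preconditions, postconditions, and failure behavior a maintainer needs.
class FrameCodec {
public:
    virtual ~FrameCodec() = default;

    static constexpr std::size_t kMaxPayload = 1024;

    // Encode 'len' bytes from 'src' into an internal frame buffer.
    //   Precondition : src != nullptr and 0 < len <= kMaxPayload
    //   Postcondition: frameReady() returns true on success
    //   On bad input : returns false and leaves all state unchanged
    virtual bool encode(const std::uint8_t* src, std::size_t len) = 0;

    // True iff a complete frame is buffered and ready to transmit.
    virtual bool frameReady() const = 0;
};
```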

New-age software gurus and hard-core agilistas have always condescendingly trashed the “building a house” and “building a bridge” metaphors for software development. The reasoning is that houses and bridges are made of hard-to-reconfigure atoms, whilst software is forged from simple-to-reconfigure bits. Well, yeah, that’s true, but… size matters.

In small systems, if you discover you made a big mistake three quarters of the way through the project, you can rewrite the whole shebang in short order without having to bend metal or cut wood. But as software systems get larger, at some point the “rewrite-from-scratch” strategy breaks down – and often spectacularly. Without house-like blueprints or bridge-like schema to consult, finding and reasoning about and fixing mistakes can be close to impossible – regardless of which state-of-the-art process you’re using.

[Figure: rebuild]

Myopia And Hyperopia


Assume that we’ve just finished designing, testing, and integrating the system below:

[Figure: SoS]

Now let’s zoom in on the “as-built”, four-class design of SS2 (SubSystem 2). Assume its physical source tree is laid out as follows:

[Figure: SS2 source tree]

Given this design data after the fact, some questions may come to mind: How did the four-class design cluster come into being? Did the design emerge first, the production code second, and the unit tests come third in a neat and orderly fashion? Did the tests come first and the design emerge second? Who gives a sh-t what the order and linearity of creation was, and perhaps more importantly, why would someone give a sh-t?

It seems that the TDD community thinks the way a design manifests is of supreme concern. You see, some hard-core TDD zealots think that designing and writing the test code first à la a strict “red-green-refactor” personal process guarantees a “better” final design than any other way. And damn it, if you don’t do TDD, you’re a second-class citizen.

BD00 thinks that as long as refactoring feedback loops exist between the designing-coding-testing efforts, it really doesn’t freakin’ matter which is the cart and which is the horse, nor even which comes first. TDD starts with a local, myopic view and iteratively moves upward towards global abstraction. DDT (Design Driven Test) starts with a global, hyperopic view and iteratively moves downward towards local implementation. A chaotic, hybrid, myopia-hyperopia approach starts anywhere and jumps back and forth as the developer sees fit. It’s all about the freedom to choose what’s best in the moment for you.

[Figure: DDT vs. TDD]
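
For the unfamiliar, here’s the TDD end of that spectrum in miniature: write a failing test first, then write just enough code to make it pass, then refactor. (A toy sketch using bare asserts; a real project would use a test framework, and the Stack class is invented for illustration.)

```cpp
#include <cassert>

// Step 1 (red): the test in main() was written BEFORE Stack existed, so it
// wouldn't even compile. Step 2 (green): write the minimal Stack below to
// make it pass. Step 3 (refactor): clean up with the test as a safety net.
class Stack {
public:
    void push(int v) { data_[size_++] = v; }
    int pop() { return data_[--size_]; }
    bool empty() const { return size_ == 0; }
private:
    int data_[16] = {};  // minimal fixed capacity; just enough to go green
    int size_ = 0;
};

int main() {
    Stack s;
    assert(s.empty());
    s.push(42);
    assert(!s.empty());
    assert(s.pop() == 42);
    assert(s.empty());
    return 0;
}
```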

Notice that TDD says nothing about how the purely abstract, higher-level, three-subsystem cluster (especially the inter-subsystem interfaces) that constitutes the “finished” system should come into being. Perhaps the TDD community can (should?) concoct and mandate a new and hip personal process to cover software system-level design?

Gilbitecture

April 15, 2014

Plucked from his deliciously titled “Real Architecture: Engineering or Pompous Bullshit?” slide deck, I give you Tom Gilb‘s personal principles of software architecture engineering:

[Figure: Gilbitecture – Tom Gilb’s principles of software architecture engineering]

Tom’s proactive approach seems like a far cry from the reactive approaches of the “emergent architecture” and TDA (Test Driven Architecture) communities, doesn’t it?

OMG! Tom’s list actually uses the words “engineering” and “the architect”. Maybe that’s why I have always appreciated his work so much. :)

Regardless Of Agile Or Waterfall

April 9, 2014

The figure below depicts an architectural view of a real-time embedded sub-system that I and a team of 8 others built and delivered 10 (freakin!) years ago. At revision number 9, the diagram ended up being the final “as-built” model of the 20,000+ lines-of-code system. Since the software was written in C and, thus, not object-oriented, I chose not to use UML to capture the design at the time. Doing so would have introduced an impedance mismatch and a large intellectual gap of misunderstanding between the procedural C code base and the OO design artifacts. I used structured analysis and functional decomposition to concoct the design, and I employed “pseudo” Data Flow Diagrams (DFDs) instead.

At the beginning of this “waterfall” project, I created revision 0 of the diagram as the first “build-to” snapshot. Of course, as learning accrued and the system evolved throughout development, I diligently kept the diagram updated and synchronized with the code base in true PAYGO (pay-as-you-go) fashion.

[Figure: MP_Top_Level_SW_Design – top-level software design, revision 9]

As you can see from the picture, the system of 30+ asynchronous application tasks ran under the tutelage of the industrial-strength VxWorks Real Time Operating System (RTOS). Asynchronous inter-task communication was performed via message passing through a series of lock-protected queues. The embedded physical board was powered by a Motorola PowerPC CPU (remember those dinosaurs?). The board housed a myriad of serial and Ethernet interface ports for communication with other external sub-systems.
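
The real system used VxWorks message queue primitives, but as a rough, portable approximation of the pattern, here’s what a lock-protected inter-task queue looks like in modern C++ (a sketch, not the production code):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Sketch of the lock-protected message queue pattern: producer tasks push
// messages, consumer tasks block until one arrives. The real system used
// VxWorks msgQLib primitives rather than this code.
template <typename Msg>
class MsgQueue {
public:
    void send(Msg m) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(m));
        }
        notEmpty_.notify_one();  // wake one blocked receiver
    }

    Msg receive() {  // blocks until a message is available
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [this] { return !queue_.empty(); });
        Msg m = std::move(queue_.front());
        queue_.pop();
        return m;
    }

private:
    std::mutex mutex_;
    std::condition_variable notEmpty_;
    std::queue<Msg> queue_;
};
```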

The above diagram was not the sole artifact that I used to record the design. It was simply the highest-level, catch-all overview of the system. I also developed a complementary set of lower-level functional diagrams, each of which captured a sliced view of an end-to-end strand of critical functionality. One of these diagrams, the “Uplink/Downlink Processing View”, is shown below. Note that the final “as-built” diagram settled out at revision number 5.

[Figure: Uplink/Downlink Processing View]

The purpose of this post was simply to give you a taste of how I typically design and evolve a non-trivial software-intensive system that I can’t entirely keep in my head. I use the same PAYGO process for all of my efforts regardless of whether the project is being managed as an agile or waterfall endeavor. To me, project process is way over-emphasized and overblown. “Business Value” creation ultimately distills down to architecture, design, coding, and testing at all levels of abstraction.

Where To Start?

April 6, 2014

The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. — Edsger Dijkstra

With Edsger’s delicious quote in mind, let’s explore seven levels of abstraction that can be used to reason about big, distributed systems:

[Figure: seven levels of abstraction]

At level zero, we have the finest-grained, most concrete unit of design: a single puny line of “source code”. At the top of the stack, we have the coarsest-grained, most abstract unit of design: the mysterious and scary “system” level. A line of code is simple to reason about, but a “system” is not. Just when you think you understand what a system does, BAM! It exhibits some weird, perhaps dangerous, behavior that is counter-intuitive and totally unexpected – especially when humans are the key processing “nodes” in the beast.

Here are some questions to ponder regarding the seven-level stack: Given that you’re hired to build a big, distributed system, at what level would you start your development effort? Would you start immediately coding up classes using the much-revered TDD “best practice” and let all the upper levels of abstraction serendipitously “emerge”? Relatively speaking, how much time “up front” should you spend specifying, designing, recording, and communicating the structures and behaviors of the top 3 levels of the stack? Again, relatively speaking, how much time should be allocated to the unit, integration, functional, and system levels of testing?

Let the Design Emerge, But Not The Architecture

February 21, 2014

I’m not a fan of “emergent global architecture”, but I AM a fan of “emergent local design”. To mitigate downstream technical and financial risk, I believe that one has to generate and formally document an architecture at a high level of abstraction before starting to write code. To do otherwise would be irresponsible.

The figure below shows a portion of an initial “local” design that I plucked out of a more “global” architectural design. When I started coding and unit testing the cluster of classes in the snippet, I “discovered” that the structure wasn’t going to work out. The API of the architectural framework within which the class cluster runs wouldn’t allow it to work without some major internal restructuring and retesting of the framework itself.

[Figure: Original Design]

After wrestling with the dilemma for a bit, the following workable local design emerged out of the learning acquired via several wretched attempts to make the original design work. Of course, I had to throw away a bunch of previously written skeletal product and test code, but that’s life. Now I’m back on track and moving forward again. W00t!

[Figure: Updated Design]
