The following C++14 code fragment represents a general message layout along with a specific instantiation of that message:
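(The original fragment was an image that didn't survive into this copy of the post, so here is a minimal, hypothetical sketch of the idea; the names and fields are illustrative, not the originals. Note the in-class default member initializers, which bear directly on the side note that follows.)

```cpp
#include <cstdint>

// A general message layout: a header field plus a message-specific payload.
template <typename PayloadT>
struct Message {
    std::uint32_t id = 0;   // message type identifier (an NSDMI: C++11 and later)
    PayloadT      payload;  // message-specific content
};

// A specific instantiation of that message: an X-Y position payload.
struct XYPosition {
    double x = 0.0;  // X coordinate
    double y = 0.0;  // Y coordinate
};

// Aggregate initialization of a class with default member initializers
// became legal in C++14 (it was forbidden even in C++11).
Message<XYPosition> posMsg{42, {1.5, -2.5}};
```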
Side note: Why won’t a C++98/03 compiler accept the above code?
Assume that we are “required” to send thousands of these X-Y position messages per second between two computers over a finite bandwidth communication link:
There are many ways we can convert the representation of the message in memory into a serial stream of bytes for transmittal over the communication link, but let’s compare a simple binary representation against an XML equivalent:
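(The post's original side-by-side figure isn't reproduced here, so the sketch below is a hypothetical stand-in: the same X-Y position message marshaled two ways. The sizes in the comments are approximate and assume a typical 64-bit ABI.)

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

// Fixed-size binary wire format: roughly 24 bytes on a typical 64-bit ABI
// (20 bytes of fields plus alignment padding).
struct WirePosition {
    std::uint32_t id;
    double        x;
    double        y;
};

// Binary marshaling: copy the raw bytes into the outgoing buffer.
// Assumes both endpoints agree on endianness, padding, and
// floating-point representation.
std::size_t marshalBinary(const WirePosition& m, char* buf) {
    std::memcpy(buf, &m, sizeof m);
    return sizeof m;
}

// XML marshaling: self-describing and human-readable, but several times
// larger on the wire and costlier to produce and parse.
std::string marshalXml(const WirePosition& m) {
    char buf[256];
    std::snprintf(buf, sizeof buf,
                  "<?xml version=\"1.0\"?>\n"
                  "<PositionMsg>\n"
                  "  <Id>%u</Id>\n"
                  "  <X>%f</X>\n"
                  "  <Y>%f</Y>\n"
                  "</PositionMsg>\n",
                  static_cast<unsigned>(m.id), m.x, m.y);
    return buf;
}
```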
The tradeoff is simple: human readability for performance. Even though the XML version is self-describing and readable to a human being, it is 6.5 times larger than the tight, fixed-size, binary format. In addition, the source code required to serialize/deserialize (i.e. marshal/unmarshal) the XML version is more complex and computationally expensive than the code that implements the same functionality for the fixed-size, binary representation. In the software industry, this tradeoff is affectionately known as “the angle bracket tax” that must be paid for using XML in the critical paths of your system.
If your system requires high rates of throughput and low end-to-end latency for streaming data over a network, you may have no choice but to use a binary format to send/receive messages. After all, what good is it to have human-readable messages if the system doesn’t work due to overflowing queues and lost messages?
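To put hypothetical numbers on it: at 10,000 messages per second, a 24-byte binary encoding consumes roughly 240 KB/s of link capacity, while a 156-byte XML equivalent (6.5x) needs over 1.5 MB/s. On a constrained link, that multiplier alone can be the difference between keeping up and overflowing queues.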
In his terrific “Effective architecture sketches” slide deck, Simon Brown rightly states that you don’t need UML to sketch up your software architecture. However, if you don’t, you need to consider documenting the documentation:
The utility of using a standard like UML is that you don’t have to spend any time on all the arcane subtleties of meta-documentation. And if you’re choosing to bypass the UML, you’re probably not going to spend much time, if any, doing meta-documentation to clarify your architecture decisions. After all, doing less documentation, let alone writing documentation about the documentation, is why you eschewed UML in the first place.
So, good luck in unambiguously communicating the software architecture to your stakeholders, especially those poor souls who will be trying to build the beast with you.
Induction is the process of synthesizing a generalization from a set of particulars; a mental step up in abstraction from many-to-one.
Deduction is the process of decomposing one generalization into a set of particulars; a mental step down in abstraction from one-to-many.
A good personal software design process requires iterative execution of both types of sub-processes; with liberal doses of random reflection thrown into the timeline just to muck things up enough so that you can never fully retrace your steps. It’s pure alchemy!
“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” – Leslie Lamport
I’ve always loved that quote. But that’s only one reason why I was overjoyed when I stumbled upon this article written by Turing Award winner Leslie Lamport: “Why We Should Build Software Like We Build Houses”. The other reason is that what he wrote is old school, but still relevant in many contexts:
Most programmers regard anything that doesn’t generate code to be a waste of time. Thinking doesn’t generate code, and writing code without thinking is a recipe for bad code. Before we start to write any piece of code, we should understand what that code is supposed to do. Understanding requires thinking, and thinking is hard. – Leslie Lamport
I recently modified some code I hadn’t written to add one tiny feature to a program. Doing that required understanding an interface. It took me over a day with a debugger to find out what I needed to know about the interface — something that would have taken five minutes with a spec. To avoid introducing bugs, I had to understand the consequences of every change I made. The absence of specs made that extremely difficult. Not wanting to find and read thousands of lines of code that might be affected, I spent days figuring out how to change as little of the existing code as possible. In the end, it took me over a week to add or modify 180 lines of code. And this was for a very minor change to the program. – Leslie Lamport
New age software gurus and hard-core agilistas have always condescendingly trashed the “building a house” and “building a bridge” metaphors for software development. The reasoning is that houses and bridges are made of hard-to-reconfigure atoms, whilst software is forged from simple-to-reconfigure bits. Well, yeah, that’s true, but… size matters.
In small systems, if you discover you made a big mistake three quarters of the way through the project, you can rewrite the whole shebang in short order without having to bend metal or cut wood. But as software systems get larger, at some point the “rewrite-from-scratch” strategy breaks down – and often spectacularly. Without house-like blueprints or bridge-like schema to consult, finding and reasoning about and fixing mistakes can be close to impossible – regardless of which state-of-the-art process you’re using.
Assume that we’ve just finished designing, testing, and integrating the system below:
Now let’s zoom in on the “as-built”, four-class design of SS2 (SubSystem 2). Assume its physical source tree is laid out as follows:
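(The original layout figure isn't reproduced here; a representative layout, with hypothetical file and directory names, might look like this:)

```
SS2/
    include/            <- public interfaces of the four classes
        Class1.h    Class2.h    Class3.h    Class4.h
    src/                <- production code
        Class1.cpp  Class2.cpp  Class3.cpp  Class4.cpp
    test/               <- unit tests, one per class
        Class1Test.cpp  Class2Test.cpp  Class3Test.cpp  Class4Test.cpp
```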
Given this design data after the fact, some questions may come to mind: How did the four-class design cluster come into being? Did the design emerge first, the production code second, and the unit tests third, in a neat and orderly fashion? Did the tests come first and the design emerge second? Who gives a sh-t what the order and linearity of creation was, and perhaps more importantly, why would someone give a sh-t?
It seems that the TDD community thinks the way a design manifests is of supreme concern. You see, some hard-core TDD zealots think that designing and writing the test code first, à la a strict “red-green-refactor” personal process, guarantees a “better” final design than any other way. And damn it, if you don’t do TDD, you’re a second-class citizen.
BD00 thinks that as long as refactoring feedback loops exist between the designing-coding-testing efforts, it really doesn’t freakin’ matter which is the cart and which is the horse, nor even which comes first. TDD starts with a local, myopic view and iteratively moves upward towards global abstraction. DDT (Design Driven Test) starts with a global, hyperopic view and iteratively moves downward towards local implementation. A chaotic, hybrid, myopia-hyperopia approach starts anywhere and jumps back and forth as the developer sees fit. It’s all about the freedom to choose what’s best in the moment for you.
Notice that TDD says nothing about how the purely abstract, higher-level, three-subsystem cluster (especially the inter-subsystem interfaces) that comprises the “finished” system should come into being. Perhaps the TDD community can (should?) concoct and mandate a new and hip personal process to cover software system-level design?
Plucked from his deliciously titled “Real Architecture: Engineering or Pompous Bullshit?” slide deck, I give you Tom Gilb’s personal principles of software architecture engineering:
Tom’s proactive approach seems like a far cry from the reactive approaches of the “emergent architecture” and TDA (Test Driven Architecture) communities, doesn’t it?
:) Tom’s list actually uses the words “engineering” and “the architect”. Maybe that’s why I have always appreciated his work so much.
The figure below depicts an architectural view of a real-time embedded sub-system that a team of 8 others and I built and delivered 10 (freakin!) years ago. At revision number 9, the diagram ended up being the final “as-built” model of the 20,000+ lines-of-code system. Since the software was written in C and, thus, not object-oriented, I chose not to use UML to capture the design at the time. Doing so would have introduced an impedance mismatch, a large intellectual gap, between the procedural C code base and the OO design artifacts. Instead, I used structured analysis and functional decomposition to concoct the design, employing “pseudo” Data Flow Diagrams (DFDs).
At the beginning of this “waterfall” project, I created revision 0 of the diagram as the first “build-to” snapshot. Of course, as learning accrued and the system evolved throughout development, I diligently kept the diagram updated and synchronized with the code base in true PAYGO (pay-as-you-go) fashion.
As you can see from the picture, the system of 30+ asynchronous application tasks ran under the tutelage of the industrial-strength VxWorks Real Time Operating System (RTOS). Asynchronous inter-task communication was performed via message passing through a series of lock-protected queues. The embedded physical board was powered by a Motorola PowerPC CPU (remember those dinosaurs?). The board housed a myriad of serial and Ethernet interface ports for communication with other external sub-systems.
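(The original code is long gone, and was C, not C++, but the lock-protected queue pattern is easy to sketch. Here's a minimal, portable C++ illustration of the idea, not the VxWorks implementation itself:)

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// A minimal sketch of the lock-protected message queue pattern described
// above (an illustration of the idea, not the original VxWorks C code).
template <typename Msg>
class MsgQueue {
public:
    // Producer task: enqueue a message, then wake one waiting consumer.
    void send(Msg m) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            q_.push(std::move(m));
        }
        cv_.notify_one();
    }

    // Consumer task: block until a message is available, then dequeue it.
    Msg receive() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Msg m = std::move(q_.front());
        q_.pop();
        return m;
    }

private:
    std::mutex              mtx_;
    std::condition_variable cv_;
    std::queue<Msg>         q_;
};
```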
The above diagram was not the sole artifact that I used to record the design. It was simply the highest level, catch-all, overview of the system. I also developed a complementary set of lower level functional diagrams; each of which captured a sliced view of an end-to-end strand of critical functionality. One of these diagrams, the “Uplink/Downlink Processing View”, is shown below. Note that the final “as-built” diagram settled out as revision number 5.
The purpose of this post was simply to give you a taste of how I typically design and evolve a non-trivial software-intensive system that I can’t entirely keep in my head. I use the same PAYGO process for all of my efforts regardless of whether the project is being managed as an agile or waterfall endeavor. To me, project process is way over-emphasized and overblown. “Business Value” creation ultimately distills down to architecture, design, coding, and testing at all levels of abstraction.