Induction is the process of synthesizing a generalization from a set of particulars; a mental step up from many to one.
Deduction is the process of decomposing one generalization into a set of particulars; a mental step down from one to many.
A good personal software design process requires iterative execution of both types of sub-processes, with liberal doses of random reflection thrown into the timeline just to muck things up enough so that you can never fully retrace your steps. It’s pure alchemy!
When all is said and done, more is said than done. – Unknown
Savvy politicians and bureaucrats always seem to say the right thing, but they rarely back up their proclamations with effective action. In “Military’s focus on big systems is now killing us”, DARPA Director Arati Prabhakar states the patently obvious:
The Pentagon must break this monolithic, high-cost, slow-moving, inflexible approach that we have.
Well, duh! I’ve been hearing this rallying cry from incoming and outgoing appointees for decades.
Yet another insightful DARPA director states:
The services have largely failed to take advantage of an emerging “software-defined world.” The result has been skyrocketing weapons costs.
Say what? “Software-Defined World”? I must have missed the debut of this newly minted jargon. The “Internet of Things” and its pithy acronym, IoT, must be so yesterday. The “Software-Defined World”, SDW, must be so today. W00t!!
If you read the article carefully, you’ll see that the interviewees have no clue how to solve the grandiose cost/schedule/quality problems posed by the currently entrenched, docu-centric, waterfall (SRR, PDR, CDR, Fab, Test, Deploy) acquisition process and, especially, by the hordes of civil servants whose livelihoods depend on the acquisition system remaining “as is”. But BD00 does know how to solve it: Certified Scrum Master and Certified Product Owner training for all!
“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” – Leslie Lamport
I’ve always loved that quote. But that’s only one reason why I was overjoyed when I stumbled upon this article written by Turing Award winner Leslie Lamport: “Why We Should Build Software Like We Build Houses”. The other reason is that what he wrote is old school, but still relevant in many contexts:
Most programmers regard anything that doesn’t generate code to be a waste of time. Thinking doesn’t generate code, and writing code without thinking is a recipe for bad code. Before we start to write any piece of code, we should understand what that code is supposed to do. Understanding requires thinking, and thinking is hard. – Leslie Lamport
I recently modified some code I hadn’t written to add one tiny feature to a program. Doing that required understanding an interface. It took me over a day with a debugger to find out what I needed to know about the interface — something that would have taken five minutes with a spec. To avoid introducing bugs, I had to understand the consequences of every change I made. The absence of specs made that extremely difficult. Not wanting to find and read thousands of lines of code that might be affected, I spent days figuring out how to change as little of the existing code as possible. In the end, it took me over a week to add or modify 180 lines of code. And this was for a very minor change to the program. – Leslie Lamport
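To make Lamport’s point concrete, here’s a hypothetical C++ sketch of the kind of lightweight “spec” whose absence he’s lamenting: a handful of contract comments on an interface, readable in the five minutes he mentions. The class and member names are mine, invented purely for illustration.

```cpp
#include <stdexcept>
#include <vector>

// TrackQueue: holds pending radar track reports, oldest first.
//
// The "five minute" spec a maintainer would otherwise have to
// reverse-engineer with a debugger:
//   - push() appends a report; it never drops or reorders anything.
//   - front() returns (but does not remove) the oldest report; it throws
//     std::out_of_range if the queue is empty. No other member throws.
//   - pop() removes the oldest report; calling it on an empty queue is
//     undefined behavior, so check empty() first.
class TrackQueue {
public:
    void push(int report) { reports_.push_back(report); }

    int front() const {
        if (reports_.empty())
            throw std::out_of_range("TrackQueue::front() on empty queue");
        return reports_.front();
    }

    void pop() { reports_.erase(reports_.begin()); }

    bool empty() const { return reports_.empty(); }

private:
    std::vector<int> reports_;
};

int main() {
    TrackQueue q;
    q.push(7);
    return q.front() == 7 ? 0 : 1;  // trivial sanity check
}
```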
New age software gurus and hard-core agilistas have always condescendingly trashed the “building a house” and “building a bridge” metaphors for software development. The reasoning is that houses and bridges are made of hard-to-reconfigure atoms, whilst software is forged from simple-to-reconfigure bits. Well, yeah, that’s true, but… size matters.
In small systems, if you discover you made a big mistake three quarters of the way through the project, you can rewrite the whole shebang in short order without having to bend metal or cut wood. But as software systems get larger, at some point the “rewrite-from-scratch” strategy breaks down – and often spectacularly. Without house-like blueprints or bridge-like schematics to consult, finding, reasoning about, and fixing mistakes can be close to impossible – regardless of which state-of-the-art process you’re using.
Since Nassim Taleb trashes Nobel Prize-winning economists Robert Merton and Myron Scholes so often in his books, I decided to look deeper into the disdain he harbors for these two men by reading Roger Lowenstein’s “When Genius Failed: The Rise and Fall of Long-Term Capital Management”.
In case you didn’t know, LTCM was a high-falutin’ hedge fund that racked up huge investment gains for four years before exploding in 1998 with a bang that almost shook the financial system to its core. As the following graph shows, if you were privileged enough to have invested $1 in LTCM in March 1994, you would have quadrupled your money by March 1998. Quadrupled, in four years. W00t!
But wait! Looky at how LTCM’s vaunted fund performed between March and October 1998. From $4 to $0.25 in eight months. D’oh!
Messrs. Merton and Scholes were a pair of highly regarded academicians hired by LTCM as the brains behind the mathematically elegant economic models that eventually drove the firm into the gutter. In addition to this dynamic duo, LTCM’s head honcho, John Meriwether, hired several MIT PhDs and a former Federal Reserve banker to round out his superstar roster. Even before LTCM’s performance began its meteoric ascent, all the big Wall St. banks were tripping over themselves to loan money to, and do business with, the wizards at LTCM.
As you might expect, the LTCM partnership thought quite highly of themselves. Thus, they treated everyone else like shit – because they could. They were loaned money at rock-bottom rates while charging astronomical fees for their money management “expertise”. Even though LTCM kept their numerous, obscure, huge, derivative-laden trade positions and their precious models secret, the greed of their investors and lenders allowed the elites to do as they pleased.
“There is no way you can make that kind of money in Treasury markets.” Scholes angled forward in his leather-backed chair and said, “You’re the reason—because of fools like you we can.” – Lowenstein, Roger (2001-01-18). When Genius Failed: The Rise and Fall of Long-Term Capital Management (Kindle Locations 693-694). Random House Publishing Group. Kindle Edition.
The LTCM success story began to unravel when Russia defaulted on its national debt in 1998. Since it was “unthinkable” that a nuclear power could ever default on its obligations, the panic quickly spread to other “supposedly uncorrelated” markets around the globe. Of course, the rational Merton/Scholes equations didn’t account for this irrational event. Because of their mammoth size, LTCM couldn’t dump any of their assets like the hysterical herd was doing. Everyone was selling and no one was buying – the market for LTCM’s holdings simply disappeared. LTCM stood by helplessly as their equity tanked faster than you can say WTF! Their fund lost an unprecedented $500M in one day – and they did that twice. Now THAT takes genius.
Since LTCM was so intertwined with virtually every big player on Wall St., the US Federal Reserve feared that if LTCM went bankrupt, the financial system itself could collapse. Thus, the public Fed ended up orchestrating a 3.6 BILLION dollar bailout of the private LTCM by a consortium of Wall St. banks (better them than us taxpayers; but we would come to the rescue in the next panic, 10 years later in 2008). The same people whom LTCM had treated like inferior beings begrudgingly came to the rescue. Even though the LTCM partners were (thankfully) wiped out, the bailers ended up recouping their $3.6B over the next few years. But don’t think of them as heroes. They only signed up for the bailout because they were terrified of sinking too, and their brazen disregard for how LTCM spent their money helped precipitate the meltdown in the first place.
Incredibly, after the bailout, the LTCM fatheads, who were superficially contrite in public, claimed that the panic was a fluke “10 sigma” event. They were (and perhaps still are?) convinced they could mathematically model the human race as a rational-thinking aggregate mass of matter whose “parameters” are dictated by the Gaussian probability density function. A subset of LTCM partners, spearheaded yet again by perpetual loser John Meriwether, started another hedge fund with more complicated models accounting for “fat tail” rare events, but, unbelievably, still anchored on the utterly wrong Gaussian distribution. Somehow, they raised $250M from a fresh set of rich idiots.
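For perspective (my back-of-the-envelope arithmetic, not Lowenstein’s): under the Gaussian model the partners swore by, a “10 sigma” daily move has a tail probability of roughly

\[
P(Z > 10) \;=\; 1 - \Phi(10) \;\approx\; \frac{\varphi(10)}{10} \;=\; \frac{e^{-50}}{10\sqrt{2\pi}} \;\approx\; 7.6 \times 10^{-24},
\]

i.e., an event you shouldn’t expect to witness once in the entire history of financial markets, never mind twice in a few months. When the “impossible” keeps happening, it’s the distribution that’s wrong, not the luck.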
People caught in such financial cataclysms typically feel singularly unlucky, but financial history is replete with examples of “fat tails” — unusual and extreme price swings that, based on a reading of previous prices, would have seemed implausible. – Lowenstein, Roger (2001-01-18). When Genius Failed: The Rise and Fall of Long-Term Capital Management (Kindle Locations 4176-4178). Random House Publishing Group. Kindle Edition.
As of today, neither LTCM nor the eggheads’ later fund, JWM Partners, exists any longer – poof!
Make a measurement, one measurement, of any personal metric you might fancy… right now. Next, plot your sample point on a graph where time is the independent variable on the x-axis:
Next, even though you most likely have no prior measurements to plot, reflect on the path that got you to “now”. You’re likely to concoct a smooth, logical, linear narrative like this:
However, because of our propensity to be, as Nassim Taleb says, easily “fooled by randomness”, you’re most likely to have traveled one of the ragged, noisy paths plotted on this graph:
Because of the malady of linear-think, you’re most likely to envision the future as a continued journey on the smooth, forward projection of your made-up narrative. Good luck with that.
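If you want to convince yourself how different the noisy reality looks from the tidy narrative, a throwaway simulation is enough. Here’s a minimal C++11 sketch; the drift and volatility numbers are made up, chosen only to exaggerate the effect:

```cpp
#include <iostream>
#include <random>

// Print a smooth "narrative" trend next to a noisy random walk that has the
// same average drift. The noise term is what the tidy story leaves out.
int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> noise(0.0, 3.0);  // made-up volatility

    const double drift = 1.0;  // made-up average improvement per time step
    double noisy = 0.0;

    for (int t = 0; t <= 20; ++t) {
        double narrative = drift * t;  // the story we tell ourselves
        std::cout << t << '\t' << narrative << '\t' << noisy << '\n';
        noisy += drift + noise(gen);   // what actually happens
    }
}
```

Plot the second and third columns against the first and you get exactly the smooth-line-versus-ragged-path contrast described above.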
I’m currently designing/writing the software component of a new air surveillance radar data processor that interfaces the radar to an existing, legacy combat control system. In order to test the software, I have to interact with the combat system’s GUI to issue radar commands and ensure that detected targets are received and displayed properly on the combat system display.
As the figure above shows, the acronym “SS2000” appears prominently on the GUI display. When I saw it for the first time, a sense of déjà vu came over me, but I couldn’t figure out why. After a few days of testing, I experienced an AHA! epiphany. Out of the blue, I suddenly remembered where I had seen “SS2000” before. Ding!
Ya see, back in the early 2000s, I read a Software Engineering Institute (SEI) case study on the concept of “software product lines”. It featured a Swedish company called “Celsius Tech”. The report meticulously described how Celsius Tech painfully transformed itself in the ’90s from an expensive developer of one-off naval combat systems into an efficient, low-cost, high-quality producer of systems. Instead of starting from scratch on each new system development, Celsius Tech “instantiated” each new system from an in-place, reusable set of product line assets (code and requirements/design/test documentation) that they diligently built up front.
I was so enamored with Celsius Tech’s technical and financial success with software product lines that I concocted an executive summary of the report and aggressively pitched it internally to anyone and everyone who would listen. But alas, I utterly failed to jumpstart an internal effort to transform my employer at the time, Sensis Corp., into a software product line enterprise.
The name of Celsius Tech’s software product line infrastructure was… SS2000 = Ship System 2000! But wait, the story gets eerily better. Celsius Tech was subsequently purchased by Swedish defense company Saab AB (yes, they used to make cars, but they sold off that business a long time ago) – the same company that bought my employer, Sensis Corp., in 2011. As a result of the buyout, I currently work for Saab Sensor Systems. Quite the coincidence, no?
When a control system is humming along, the gap between the desired and current states is so small that the frequency of command issuance by the Decision Maker component is essentially zero; all is well and goal attainment is on track. However, with the universe being as messy as it is, unseen and unpredictable “disturbances” can, and do, enter the system at any point of access to the structure.
If the sensors and/or actuators can’t filter out the disturbances, or are malfunctioning themselves, then true control of the production system may be lost. Perceptions and commands get distorted, and the distance between goal attainment and “reality” will be perceived as shorter or longer than it actually is. D’oh! I hate when that happens.
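As a concrete (and heavily simplified) illustration of the loop described above, here’s a sketch with proportional control only and a crude disturbance injected at the sensor; the gains, signal names, and noise level are mine, not from any real system:

```cpp
#include <iostream>
#include <random>

// Minimal closed-loop sketch: the Decision Maker issues a command
// proportional to the gap between the desired state and the *perceived*
// state, while a noisy sensor distorts its view of the true plant state.
int main() {
    const double desired = 100.0;  // goal state
    const double kp      = 0.5;    // decision-maker "aggressiveness"
    double actual        = 0.0;    // true state of the production system

    std::mt19937 gen(1);
    std::normal_distribution<double> disturbance(0.0, 5.0);  // sensor noise

    for (int step = 0; step < 25; ++step) {
        double perceived = actual + disturbance(gen);   // sensor (maybe lying)
        double command   = kp * (desired - perceived);  // decision maker
        actual += command;                              // actuator moves plant
        std::cout << step << '\t' << perceived << '\t' << actual << '\n';
    }
    // When the gap is small, the commands shrink toward zero; when the
    // sensor distorts the picture, the commands chase the wrong target.
}
```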
I can’t rave enough about how great a safaribooksonline.com subscription is for writing code. Take a look at this screenshot:
As you can hopefully make out, I have five Firefox tabs open to the following C++11 programming books:
- The C++ Programming Language, 4th Edition – Bjarne Stroustrup
- The C++ Standard Library: A Tutorial and Reference (2nd Edition) – Nicolai M. Josuttis
- C++ Primer Plus (6th Edition) – Stephen Prata
- C++ Primer (5th Edition) – Stanley B. Lippman, Josée Lajoie, Barbara E. Moo
- C++ Concurrency in Action: Practical Multithreading – Anthony Williams
A subscription is a bit pricey, but if you can afford it, I highly recommend buying one. You’ll not only write better code faster, you’ll amplify your learning experience by an order of magnitude by being able to effortlessly switch between multiple, independent sources of information on the same topic. W00t!
Assume that we’ve just finished designing, testing, and integrating the system below:
Now let’s zoom in on the “as-built”, four-class design of SS2 (SubSystem 2). Assume its physical source tree is laid out as follows:
Given this design data after the fact, some questions may come to mind: How did the four-class design cluster come into being? Did the design emerge first, the production code second, and the unit tests third, in a neat and orderly fashion? Or did the tests come first and the design emerge second? Who gives a sh-t what the order and linearity of creation was, and, perhaps more importantly, why would someone give a sh-t?
It seems that the TDD community thinks the way a design manifests is of supreme concern. You see, some hard-core TDD zealots think that designing and writing the test code first, à la a strict “red-green-refactor” personal process, guarantees a “better” final design than any other way. And damn it, if you don’t do TDD, you’re a second-class citizen.
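For the uninitiated, a single red-green-refactor cycle boils down to something like this hypothetical, assert-based example (a real TDD shop would use a test framework, and the Stack class here is deliberately minimal):

```cpp
#include <cassert>

// RED:      write the test (main, below) first; it won't even compile until
//           Stack exists.
// GREEN:    write just enough Stack to make the asserts pass.
// REFACTOR: clean up the implementation while the tests keep you honest.

class Stack {
public:
    void push(int v) { data_[size_++] = v; }  // no bounds check yet: "just enough"
    int  pop()       { return data_[--size_]; }
    bool empty() const { return size_ == 0; }

private:
    int data_[16];
    int size_ = 0;
};

int main() {
    Stack s;
    assert(s.empty());
    s.push(42);
    assert(!s.empty());
    assert(s.pop() == 42);
    assert(s.empty());
}
```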
BD00 thinks that as long as refactoring feedback loops exist between the designing-coding-testing efforts, it really doesn’t freakin’ matter which is the cart and which is the horse, nor even which comes first. TDD starts with a local, myopic view and iteratively moves upward towards global abstraction. DDT (Design Driven Test) starts with a global, hyperopic view and iteratively moves downward towards local implementation. A chaotic, hybrid, myopia-hyperopia approach starts anywhere and jumps back and forth as the developer sees fit. It’s all about the freedom to choose what’s best in the moment for you.
Notice that TDD says nothing about how the purely abstract, higher-level, three-subsystem cluster (especially the inter-subsystem interfaces) that comprises the “finished” system should come into being. Perhaps the TDD community can (should?) concoct and mandate a new and hip personal process to cover software system-level design?
In the beginning of Robert Virding’s brilliant InfoQ talk on Erlang, he distinguishes between parallelism and concurrency. Parallelism is “physical“, having to do with the static number of cores and processors in a system. Concurrency is “abstract“, having to do with the number of dynamic application processes and threads running in the system. To relate the physical with the abstract, I felt compelled to draw this physical-multi-core, physical-multi-node, abstract-multi-process, abstract-multi-thread diagram:
It’s not much different from the pic in this four-year-old post: PTCPN. It’s simply a less detailed, alternative point of view.
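In C++11 terms, the physical/abstract split boils down to something like this toy sketch (the thread count is arbitrary, and the worker bodies are intentionally empty):

```cpp
#include <iostream>
#include <thread>
#include <vector>

// Physical parallelism: how many hardware cores the machine exposes.
// Abstract concurrency: how many threads the application chooses to spawn,
// which can be (and usually is) a different number.
int main() {
    unsigned cores = std::thread::hardware_concurrency();  // physical
    std::cout << "physical cores reported: " << cores << '\n';

    const unsigned num_threads = 10;  // abstract, chosen by the application
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < num_threads; ++i) {
        workers.emplace_back([] {
            // each thread is an abstract unit of concurrency, scheduled
            // onto whatever physical cores happen to be free
        });
    }
    for (auto& t : workers) {
        t.join();
    }
    std::cout << "abstract threads spawned: " << num_threads << '\n';
}
```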