Induction is the process of synthesizing a generalization from a set of particulars; a mental step up in abstraction, from many to one.
Deduction is the process of decomposing one generalization into a set of particulars; a mental step down in abstraction, from one to many.
A good personal software design process requires iterative execution of both types of sub-processes, with liberal doses of random reflection thrown into the timeline just to muck things up enough so that you can never fully retrace your steps. It’s pure alchemy!
When all is said and done, more is said than done. – Unknown
Savvy politicians and bureaucrats seem to always say the right thing, but they rarely back up their proclamations with effective action. In “Military’s focus on big systems is now killing us”, DARPA Director Arati Prabhakar states the patently obvious:
The Pentagon must break this monolithic, high-cost, slow-moving, inflexible approach that we have.
Well, duh! I’ve been hearing this rallying cry from incoming and outgoing appointees for decades.
Yet another insightful DARPA director states:
The services have largely failed to take advantage of an emerging “software-defined world.” The result has been skyrocketing weapons costs.
Say what? “Software-Defined World”? I must have missed the debut of this newly minted jargon. The “Internet Of Things” and its pithy acronym, IoT, must be so yesterday. The “Software-Defined World”, SDW, must be so today. W00t!!
If you read the article carefully, you’ll see that the interviewees have no clue how to solve the grandiose cost/schedule/quality problems posed by the currently entrenched, docu-centric, waterfall (SRR, PDR, CDR, Fab, Test, Deploy) acquisition process and, especially, the hordes of civil servants whose livelihoods depend on the acquisition system remaining “as is”. But BD00 does know how to solve it: Certified Scrum Master and Certified Product Owner training for all!
“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” – Leslie Lamport
I’ve always loved that quote. But that’s only one reason why I was overjoyed when I stumbled upon this article written by Turing award winner Leslie Lamport: “Why We Should Build Software Like We Build Houses”. The other reason is that what he wrote is old school, but still relevant in many contexts:
Most programmers regard anything that doesn’t generate code to be a waste of time. Thinking doesn’t generate code, and writing code without thinking is a recipe for bad code. Before we start to write any piece of code, we should understand what that code is supposed to do. Understanding requires thinking, and thinking is hard. – Leslie Lamport
I recently modified some code I hadn’t written to add one tiny feature to a program. Doing that required understanding an interface. It took me over a day with a debugger to find out what I needed to know about the interface — something that would have taken five minutes with a spec. To avoid introducing bugs, I had to understand the consequences of every change I made. The absence of specs made that extremely difficult. Not wanting to find and read thousands of lines of code that might be affected, I spent days figuring out how to change as little of the existing code as possible. In the end, it took me over a week to add or modify 180 lines of code. And this was for a very minor change to the program. – Leslie Lamport
New age software gurus and hard-core agilistas have always condescendingly trashed the “building a house” and “building a bridge” metaphors for software development. The reasoning is that houses and bridges are made of hard-to-reconfigure atoms, whilst software is forged from simple-to-reconfigure bits. Well, yeah, that’s true, but… size matters.
In small systems, if you discover you made a big mistake three quarters of the way through the project, you can rewrite the whole shebang in short order without having to bend metal or cut wood. But as software systems get larger, at some point the “rewrite-from-scratch” strategy breaks down – and often spectacularly. Without house-like blueprints or bridge-like schema to consult, finding and reasoning about and fixing mistakes can be close to impossible – regardless of which state-of-the-art process you’re using.
Since Nassim Taleb trashes Nobel prize winning economists Robert Merton and Myron Scholes so often in his books, I decided to look deeper into the disdain he harbors for these two men by reading Roger Lowenstein’s “When Genius Failed: The Rise And Fall Of Long Term Capital Management“.
In case you didn’t know, LTCM was a high-falutin’ hedge fund that racked up huge investment gains for four years before exploding in 1998 with a bang that almost shook the financial system to its core. As the following graph shows, if you were privileged enough to have invested $1 in LTCM in March 1994, you would have quadrupled your money by March 1998. A 300% return in four years. W00t!
But wait! Looky at how LTCM’s vaunted fund performed between March and October 1998. From $4 to $0.25 in eight months. D’oh!
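To put those two numbers side by side, here’s a quick back-of-the-envelope check in Python (the $1, $4, and $0.25 figures are the ones quoted above; everything else is just arithmetic):

```python
# Sanity-check the LTCM numbers quoted above.
start, peak, bottom = 1.00, 4.00, 0.25  # dollars, per the graph described

years_up = 4
cagr = (peak / start) ** (1 / years_up) - 1    # compound annual growth rate
cumulative_gain = (peak - start) / start       # total return over the run-up
drawdown = (peak - bottom) / peak              # loss from peak to trough

print(f"CAGR during the run-up: {cagr:.1%}")             # ~41.4% per year
print(f"Cumulative gain:        {cumulative_gain:.0%}")  # 300%, i.e. money quadrupled
print(f"Peak-to-trough loss:    {drawdown:.2%}")         # 93.75% in eight months
```

Note that quadrupling your money is a 300% gain, not 400% – the kind of detail that gets glossed over when everyone is busy W00t-ing.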
Mssrs. Merton and Scholes were a pair of highly regarded academicians hired by LTCM as the brains behind the mathematically elegant economic models that eventually drove the firm into the gutter. In addition to this dynamic duo, LTCM’s head honcho, John Meriwether, hired several MIT PhDs and a former Federal Reserve banker to round out his superstar roster. Even before LTCM’s performance began its meteoric ascent, all the big Wall St. banks were tripping over themselves to loan money to, and do business with, the wizards at LTCM.
As you might expect, the LTCM partnership thought quite highly of themselves. Thus, they treated everyone else like shit – because they could. They were loaned money at rock-bottom rates while charging astronomical fees for their money management “expertise”. Even though LTCM kept their numerous, obscure, huge, derivative-laden trade positions and their precious models secret, the greed of their investors and lenders allowed the elites to do as they pleased.
“There is no way you can make that kind of money in Treasury markets.” Scholes angled forward in his leather-backed chair and said, “You’re the reason—because of fools like you we can.” – Lowenstein, Roger (2001-01-18). When Genius Failed: The Rise and Fall of Long-Term Capital Management (Kindle Locations 693-694). Random House Publishing Group. Kindle Edition.
The LTCM success story began to unravel when Russia defaulted on its national debt in 1998. Since it was “unthinkable” that a nuclear power could ever default on its obligations, the panic quickly spread to other “supposedly uncorrelated” markets around the globe. Of course, the rational Merton/Scholes equations didn’t account for this irrational event. Because of their mammoth size, LTCM couldn’t dump any of their assets like the hysterical herd was doing. Everyone was selling and no one was buying – the market for LTCM’s holdings simply disappeared. LTCM stood by helplessly as their equity tanked faster than you can say WTF! Their fund lost an unprecedented $500M in one day – and they did that twice. Now THAT takes genius.
Since LTCM was so intertwined with virtually every big player on Wall St., the US Federal Reserve feared that if LTCM went bankrupt the financial system itself could collapse. Thus, the public Fed ended up orchestrating a 3.6 BILLION dollar bailout of the private LTCM by a consortium of Wall St. banks (better them than us taxpayers; but we would come to the rescue in the next panic, 10 years later in 2008). The same people whom LTCM treated like inferior beings had begrudgingly come to the rescue. Even though the LTCM partners were (thankfully) wiped out, the bailers ended up recouping their $3.6B over the next few years. But don’t think of them as heroes. They only signed up for the bailout because they were terrified of sinking too; and their brazen disregard for how LTCM spent their money helped precipitate the meltdown in the first place.
Incredibly, after the bailout, the LTCM fatheads, who were superficially contrite in public, claimed that the panic was a fluke “10 sigma” event. They were (and perhaps still are?) convinced they could mathematically model the human race as a rational-thinking aggregate mass of matter whose “parameters” are dictated by the Gaussian probability density function. A subset of LTCM partners, spearheaded yet again by perpetual loser John Meriwether, started another hedge fund with more complicated models accounting for “fat tail” rare events, but, astonishingly, still anchored on the utterly wrong Gaussian distribution. Somehow, they raised $250M from a fresh set of rich idiots.
People caught in such financial cataclysms typically feel singularly unlucky, but financial history is replete with examples of “fat tails” – unusual and extreme price swings that, based on a reading of previous prices, would have seemed implausible. – Lowenstein, Roger (2001-01-18). When Genius Failed: The Rise and Fall of Long-Term Capital Management (Kindle Locations 4176-4178). Random House Publishing Group. Kindle Edition.
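Just how lame is the “10 sigma” excuse? A back-of-the-envelope sketch under the very Gaussian assumption the partners leaned on (the 252 trading days per year is a standard market convention, not a figure from the book):

```python
import math

# One-sided tail probability of a 10-sigma event under a Gaussian distribution.
sigma = 10
p = 0.5 * math.erfc(sigma / math.sqrt(2))

# If daily returns were truly Gaussian, the expected wait for a single
# 10-sigma day, in years of 252 trading days:
wait_years = 1 / (p * 252)

print(f"P(10-sigma day) = {p:.2e}")                 # ~7.6e-24
print(f"Expected wait   = {wait_years:.2e} years")  # dwarfs the age of the universe
```

In other words, if the Gaussian model were right, you’d wait many orders of magnitude longer than the age of the universe to see one such day. LTCM saw several in one summer – which tells you the model, not the market, was the fluke.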
As of today, both LTCM and the eggheads’ later fund, JWM Partners, are gone – poof!
Make a measurement, one measurement, of any personal metric you might fancy… right now. Next, plot your sample point on a graph where time is the independent variable on the x-axis:
Next, even though you most likely have no prior measurements to plot, reflect on the path that got you to “now“. You’re likely to concoct a smooth, logical, linear narrative like this:
However, because of our propensity to be, as Nassim Taleb says, easily “fooled by randomness“, you’re most likely to have traveled one of the ragged, noisy paths plotted on this graph:
Because of the malady of linear-think, you’re most likely to envision the future as a continued journey on the smooth, forward projection of your made-up narrative. Good luck with that.
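The gap between the tidy narrative and the ragged path is easy to simulate. A toy sketch, with all the noise parameters invented purely for illustration:

```python
import random

random.seed(7)  # reproducible illustration
steps, start, end = 100, 10.0, 50.0

# The smooth, linear narrative we retroactively tell ourselves:
narrative = [start + (end - start) * i / steps for i in range(steps + 1)]

# One of the ragged paths we actually travel: a noisy walk pinned to the
# same start and end points as the narrative.
noise = [0.0]
for _ in range(steps):
    noise.append(noise[-1] + random.gauss(0, 3))
actual = [n - noise[-1] * i / steps + narrative[i] for i, n in enumerate(noise)]

max_gap = max(abs(a - b) for a, b in zip(actual, narrative))
print(f"Start: {actual[0]}, end: {actual[-1]}")  # same endpoints as the narrative
print(f"Largest deviation along the way: {max_gap:.1f}")
```

Same starting point, same ending point, wildly different journey – and it’s only the endpoints you can actually measure right now.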
Did you ever wonder why companies are obsessed with growth? Check out the growth trajectories of four hypothetical companies below. Although company stewards may not explicitly state it, the goal of every for-profit enterprise is to become Too Big To Fail (TBTF). At the moment the TBTF threshold is crossed, the risk of going bust disappears. Well, it doesn’t disappear, it simply gets transferred from the TBTF menace itself to the public via taxpayer bailouts by politicians who love to play with other people’s money – yours and mine.
In nature, nothing organic is Too Big To Fail. I wish the same could be said for man-made systems. Timber!!!!!
Somehow, I stumbled upon an academic paper that compares programming language performance in the context of computing the results for a well-known, computationally dense, macro-economics problem: “the stochastic neoclassical growth model”. Since the results put C++ on top of the other languages, I felt the need to publish the researchers’ findings in this blog post :). As with all benchmarks, take it with a grain of salt because… context is everything.
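For context, the computational kernel such benchmarks exercise is value function iteration on a Bellman equation. What follows is NOT the paper’s code – it’s a stripped-down, deterministic Python sketch with made-up parameter values, just to show the shape of the algorithm (the paper’s versions are stochastic and carefully optimized):

```python
import math

# Deterministic, stripped-down neoclassical growth model solved by value
# function iteration. Bellman equation (with full depreciation):
#   V(k) = max_{k'} [ log(k^alpha - k') + beta * V(k') ]
alpha, beta = 0.33, 0.95                       # illustrative parameters
grid = [0.04 + 0.004 * i for i in range(100)]  # increasing capital grid
V = [0.0] * len(grid)

for iteration in range(1000):
    V_new = []
    for k in grid:
        output = k ** alpha
        best = -math.inf
        for j, k_next in enumerate(grid):
            c = output - k_next    # consumption implied by next period's capital
            if c <= 0:
                break              # grid is increasing, so later choices only shrink c
            best = max(best, math.log(c) + beta * V[j])
        V_new.append(best)
    if max(abs(a - b) for a, b in zip(V_new, V)) < 1e-7:
        break                      # contraction mapping has converged
    V = V_new

print(f"Converged after {iteration} iterations")
```

The triple-nested loop over iterations, grid points, and candidate choices is exactly the kind of tight numerical churn where C++ tends to shine and interpreted languages pay a heavy toll.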
The irony of this post is that I’m a big fan of Nassim Taleb, whose lofty goal is to destroy the economics profession as we know it. He thinks all the fancy, schmancy mathematical models and metrics used by economists (including famous Nobel laureates) to predict the future are predicated on voodoo science. They cause more harm than good by grossly misrepresenting and underestimating the role of risk in their assumptions and derived equations.
In “The Politics Of Projects”, Robert Block rightly states: “People want products, not projects”. The ideal project takes zero time, no labor, and no financial investment. The holy grail is to transition from abstract desire to concrete outcome in no time flat :). Nevertheless, for any non-trivial product development effort requiring a diverse team of people to get the job done, some sort of project (or, “coordinated effort” for you #noprojects advocates) is indeed required. Whether self-organized or dictator-directed, there has to be some way of steering and focusing the effort of a team of smart people to achieve the outcomes that a project is expected to produce.
At the simplistic BD00 level of comprehension, a project is one of two binary types: a potential revenue generator or a potential cost reducer.
Startups concentrate solely on projects that raise revenue. At this stage of the game, not a second thought is given to cost-reduction projects – the excitement of creating value reigns. As a startup grows and adds layers of “professional” management to control the complexity that comes with that growth, an insidious shift takes place. The mindset at the top flips from raising revenue to reducing costs and increasing efficiency. In large organizations, every employee has experienced multiple, ubiquitous, top-down “cost reduction initiatives“, the worst of which is the dreaded reduction-in-force initiative. On the other hand, org-wide initiatives to increase revenues are rare.
Since his philosophical ideas are refreshingly new, counter-intuitive, and mind-bogglingly deep, I decided to re-read all four of Nassim Taleb’s books. I just finished re-reading “Antifragile” and am now well into my second pass through “The Black Swan”.
As with all good books that resonate with me, I find that re-reading them brings new learning, excitement, and joy. It’s almost like I’m reading them for the first time.
The reason I’m magnetically drawn to Mr. Taleb’s work is that his mission is truly noble and humanitarian. It is to make the world a better place by creating a system in which so-called elites (e.g. economists, politicians, academicians, Harvard-trained managers, high frequency traders) with no “skin in the game” cannot harm the millions who follow their predictions/advice/policies without being harmed themselves. Requiring big-wigs to place some “skin in the game” (Mr. Expert, do the contents of your portfolio align with your forecasts/advice?) precludes the alarming and increasingly asymmetric transfer of antifragility from regular Joe Schmoes like you and me to smug, self-serving elites.
In case you are new to the concept of antifragility, consider the figure below. A fragile system is one in which, as the magnitude of an external stressor increases, the harm the system experiences increases non-linearly. An antifragile system is the exact opposite. It is more than simply resilient or robust. It actually gains from volatility (up to a point, of course).
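At bottom, fragile-versus-antifragile is concavity versus convexity of the response to a stressor – Jensen’s inequality in street clothes. A toy numerical sketch, with both payoff functions invented purely for illustration:

```python
import random

random.seed(1)

def fragile(x):      # concave response: harm grows faster than the stressor
    return -x ** 2

def antifragile(x):  # convex response: gains grow faster than the stressor
    return x ** 2

def avg(f, xs):
    return sum(f(x) for x in xs) / len(xs)

# Two stressor regimes with the same mean (~1.0) but different volatility:
calm = [1.0] * 10_000
volatile = [random.choice([0.0, 2.0]) for _ in range(10_000)]

print(f"fragile:     calm {avg(fragile, calm):+.2f}, volatile {avg(fragile, volatile):+.2f}")
print(f"antifragile: calm {avg(antifragile, calm):+.2f}, volatile {avg(antifragile, volatile):+.2f}")
```

Same average stressor in both regimes, yet the concave (fragile) payoff worsens under volatility while the convex (antifragile) payoff improves – which is exactly what the figure’s curves are drawing.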
Since you can’t know what’s going to happen in the next five minutes, let alone far into the future, you can’t guarantee your own personal antifragility. But you can take concrete action to reduce your fragility and minimize the risk of someone stealing whatever antifragility you do have. Eliminating debt decreases fragility. Adding redundancy (e.g. two kidneys, two lungs) and “having options” reduce fragility. Government bailouts transfer antifragility from taxpayers to executives and shareholders. Lack of term limits transfers antifragility from voters to politicians. Corporate mergers and buyouts transfer antifragility from employees to executives. Increasing size and centralization increases fragility. Lack of exercise increases fragility. Long periods of obsessively manufactured stability increase fragility. The ultimate fragilizer, and the only one we have no choice but to accept, is…….. TIME.
I’m currently designing/writing the software component of a new air surveillance radar data processor that interfaces the radar to an existing, legacy combat control system. In order to test the software, I have to interact with the combat system’s GUI to issue radar commands and ensure that detected targets are received and displayed properly on the combat system display.
As the figure above shows, the acronym “SS2000” appears prominently on the GUI display. When I saw it for the first time, a sense of déjà vu came over me, but I couldn’t figure out why. After a few days of testing, I experienced an AHA! moment. Out of the blue, I suddenly remembered where I had seen “SS2000” before. Ding!
Ya see, back in the early 2000s, I read a Software Engineering Institute (SEI) case study on the concept of “software product lines”. It featured a Swedish company called “Celsius Tech”. The report meticulously described how Celsius Tech painfully transformed itself in the 90s from an expensive developer of one-off naval combat systems into an efficient, low-cost, high-quality producer of such systems. Instead of starting from scratch on each new system development, Celsius Tech “instantiated” each new system from an in-place, reusable set of product line assets (code and requirements/design/test documentation) that they diligently built upfront.
I was so enamored with Celsius Tech’s technical and financial success with the concept of software product lines that I concocted an executive summary of the report and aggressively pitched it internally to everyone and anybody who would listen. But alas, I utterly failed to jumpstart an internal effort to transform my employer at the time, Sensis Corp., into a software product line enterprise.
The name of Celsius Tech’s software product line infrastructure was…… SS2000 = Ship System 2000! But wait, the story gets eerily better. Celsius Tech was subsequently purchased by Swedish defense company Saab AB (yes, they used to make cars but they sold off that business a long time ago) – the same company that bought my employer, Sensis Corp., in 2011. As a result of the buyout, I currently work for Saab Sensor Systems. Quite the coincidence, no?