In this interesting 2006 slide deck, “C++ in safety-critical applications: the JSF++ coding standard”, Bjarne Stroustrup and Kevin Carroll provide the rationale for selecting C++ as the programming language for the JSF (Joint Strike Fighter) jet project:
First, on the language selection:
- “Did not want to translate OO design into a language that does not support OO capabilities.”
- “Prospective engineers expressed very little interest in Ada. Ada tool chains were in decline.”
- “C++ satisfied language selection criteria as well as staffing concerns.”
They also articulated the design philosophy behind the set of rules as:
- “Provide “safer” alternatives to known “unsafe” facilities.”
- “Craft rule-set to specifically address undefined behavior.”
- “Ban features with behaviors that are not 100% predictable (from a performance perspective).”
Note that because of the last bullet, post-initialization dynamic memory allocation (using new/delete) and exception handling (using throw/try/catch) were verboten.
Interestingly, Bjarne and Kevin also flipped the coin and exposed the weaknesses of language subsetting.
What they didn’t discuss in the slide deck was whether the strengths of imposing a large coding standard on a development team outweigh the nasty weaknesses above. I suspect it was because the decision to impose a coding standard was already a done deal.
Much as we don’t want to admit it, it all comes down to economics. How much is lowering the risk of loss of life worth? No rule set can ever guarantee 100% safety. Like trying to move from 8 nines of availability to 9 nines, the financial and schedule costs of chasing a Utopian “certainty” of safety explode exponentially. To add insult to injury, there is always tremendous business pressure to deliver ASAP and thus to unconsciously cut corners: jettisoning corner-case system-level testing and skipping the fixes for hundreds of “annoying” rule violations.
Does anyone have any data on whether imposing a strict coding standard actually increases the safety of a system? Better yet, is there any data that indicates imposing a standard actually decreases the safety of a system? I doubt that either of these questions can be answered with any unbiased data. We’ll just continue on auto-believing that the answer to the first question is yes because it’s supposed to be self-evident.
Uncertain: Not able to be relied on; not known or definite
Every spiritual book I’ve ever read advises its readers to learn to become comfortable with “uncertainty” because “uncertainty is all there is”. BD00 agrees with this sage advice because fighting with reality is not good for the psyche.
Ambiguity, on the other hand, is a related but different animal. It’s not open-ended like its uncertainty parent. Ambiguity is the next step down from uncertainty and it consists of a finite number of choices; one of which is usually the best choice in a given context. The perplexing dilemma is that the best choice in one context may be the worst choice in a different context. D’oh!
Many smart people that I admire say things like “learn to be comfortable with ambiguity”, but BD00 is not on board with that approach. He’s not comfortable with ambiguity. Whenever he encounters it in his own work or (especially) in the work of others that his progress depends on, he doggedly tries to resolve it – with whatever means are available.
The main reason behind BD00’s antagonistic stance toward ambiguity is that ambiguity doesn’t compile – and the code must compile for progress to occur. Thus, if an ambiguity isn’t consciously resolved between the ambiguity creator (the ambiguator) and the ambiguity implementer (the ambiguatee) before code gets slung, it will be unconsciously resolved somehow. To avoid friction and perhaps confrontation between ambiguator and ambiguatee (because of differences in stature on the totem pole?), the implementer will make an arbitrary, undisclosed decision just to get the code to compile and progress to occur – most likely leading to functionally incorrect code and painful downstream debugging.
So, BD00’s stance is to be comfortable with uncertainty but uncomfortable with ambiguity. Whenever you encounter the ambiguity beast, consciously attack it with a vengeance and publicly resolve it ASAP with all applicable stakeholders.
Out of the bazillions of definitions of “software architecture” out in the wild, my favorite is:
“the initial set of decisions that are costly to change downstream”.
It’s my fave because it encompasses the “whole” product development ecosystem and not just the structure and behavior of the product itself. Here are some example decisions that come to mind:
- Programming language(s)
- Third party libraries
- Architectural pattern/style
- Build system selection
- Targeted operating system(s)
- Version control system
- Automated testing framework
- Communication middleware
- GUI framework
- Requirements management tool
- Non-functional requirements (latency, throughput, availability, security, safety)
- Development process
Once your production code gets intimately bound with the set of items on the list, there comes a point of no return on the project timeline where changing any of them is pragmatically impossible. Once this mysterious time threshold is crossed, changing the product source code may be easier than changing anything else on the list.
Got any other items to add to the list?
In the 1980s, Ed Yourdon was highly visible and influential in the software development industry. Arguably, Ed was the equivalent of what Martin Fowler is today – a high-profile technical guru. Then at some point, it seemed like Mr. Yourdon disappeared off the face of the earth. He didn’t really vanish, but he’s much less visible now than he was back then. He’s actually making big bux serving as an expert witness in software-disaster court cases.
Since I enjoyed Ed’s work on structured systems analysis/design (before the OO juggernaut took off) and I’ve read several of his articles and books over the years, I was delighted to discover this InfoQ interview: “Ed Yourdon on the State of Agile and Trends in IT”.
In the interview, Ed states that he’s asked many CIOs what “agile” means to them. Unsurprisingly, the vast majority of them said that it enabled developers to write software faster. Of course, there was no mention of the higher quality and/or the elevated esprit de corps that’s “supposed” to hitch a free ride on the agile gravy train.
Comprehensiveness is the enemy of comprehensibility – Martin Fowler
Martin’s quote may be the main reason why this preference was written into the Agile Manifesto…
Working software over comprehensive documentation
Obviously, it doesn’t say “Working software and no documentation”. I’d bet my house that Martin and the colleagues who conjured up the manifesto stuck the word “comprehensive” in there intentionally. And the reason is that “good” documentation reduces costs in both the short and long runs. In addition, check out what the Grade-ster has to say:
The code tells the story, but not the whole story – Grady Booch
Now that the context for this post has been set, I’d like to put in a plug for Simon Brown’s terrific work on the subject of lightweight software architecture documentation. In tribute to Simon, I decided to hoist a few of his slides that resonate with me.
Note that the last graphic is my (and perhaps Simon’s?) way of promoting standardized UML-sketching for recording and communicating software architectures. Of course, if you don’t record and communicate your software architectures, then reading this post was a waste of your time; and I’m sorry for that.
The biggest threat to success in software development is the Escalation of Unessential Complexity (EUC). Like stress is to humans, EUC is a silent project/product killer lurking in the shadows – ready to pounce. If your team isn’t constantly asking “how can we make this simpler?” as time ticks by and money gets burned, then your project and product will probably get hosed somewhere downstream.
But wait! It’s even worse than you think. Asking “how can we make this simpler?” applies to all levels and areas of a project: requirements, architecture, design, code, tools, and process. Simply applying continuous inquiry to one or two areas might delay armageddon, but you’ll still probably get hosed.
If you dwell in a culture that reveres or is ignorant of EUC, then you most likely won’t ever hear the question “how can we make this simpler?” getting asked.
But no need to fret. All ya gotta do to calm your mind is embrace this Gallism:
In complex systems, malfunction and even total malfunction may not be detectable for long periods, if ever – John Gall
Alright, before we go on, let’s first get something out of the way so that we can start from the same context. This post is not about Small Scale Development (SSD) projects. As the following figure shows, on SSD projects one can successfully write code directly from a list of requirements (with some iterative back-and-forth, of course), or from a set of use cases, or (hopefully not) both.
Now that we’ve gotten that out of the way, let’s talk about the real subject of this post: Large Scale Development (LSD <- appropriate acronym, no?) projects. On hallucinogenic LSD efforts, one or possibly two additional activities are required to secure any chance at timely success. As the next figure shows, these two activities are “System Design” and “Software Design“.
So, what’s the difference between “system design” and “software design”? Well, if you’re developing software-intensive products for a specialized business domain (e.g. avionics, radar, sonar, medical, taxes), then you’re gonna need domain experts to bridge the GOHI (Gulf Of Human Intellect) between the higher-level requirements and the lower-level software design…
Most specialized domain experts don’t know enough about general software design (object-oriented, structured, functional), and most software experts don’t know enough about domain-specific design, to allow for successfully skipping the system design phase/stage. But that hasn’t stopped orgs from doing it…
Robert Pirsig’s “Zen And The Art Of Motorcycle Maintenance” is one of my fave books of all time. I have a softcover copy that I bought in the nineties. Because of its infinite depth and immersive pull, I’ve read it at least three times over the years. Thus, when Amazon.com sent me a recommendation for the Kindle version for $2.99, I jumped at the chance to e-read it and capture some personally meaningful notes from it.
(Published in 1974) the book sold 5 million copies worldwide. It was originally rejected by 121 publishers, more than any other bestselling book, according to the Guinness Book of Records. – Wikipedia
In a nutshell, ZATAOMM is about a college professor (Pirsig himself) who ends up going insane over his obsession with trying to objectively define what the metaphysical concept of “quality” means.
Quality…you know what it is, yet you don’t know what it is. But that’s self-contradictory. But some things are better than others, that is, they have more quality. But when you try to say what the quality is, apart from the things that have it, it all goes poof! There’s nothing to talk about. But if you can’t say what Quality is, how do you know what it is, or how do you know that it even exists? If no one knows what it is, then for all practical purposes it doesn’t exist at all.
During my fourth read of ZATAOMM, I started noticing how much of the wisdom Mr. Pirsig proffers applies to the “art” of software development/maintenance:
When you want to hurry something, that means you no longer care about it and want to get on to other things.
This comes up all the time in mechanical [software] work. A hang-up. You just sit and stare and think, and search randomly for new information, and go away and come back again, and after a while the unseen factors start to emerge.
Sometimes just the act of writing down the problems straightens out your head as to what they really are.
The craftsman isn’t ever following a single line of instruction. He’s making decisions as he goes along. For that reason he’ll be absorbed and attentive to what he’s doing even though he doesn’t deliberately contrive this. He isn’t following any set of written instructions because the nature of the material at hand determines his thoughts and motions, which simultaneously change the nature of the material at hand.
Any effort that has self-glorification as its final endpoint is bound to end in disaster.
Stuck. No answer. Honked. Kaput. It’s a miserable experience emotionally. You’re losing time. You’re incompetent. You don’t know what you’re doing. You should be ashamed of yourself.
This gumption trap of anxiety, which results from overmotivation, can lead to all kinds of errors of excessive fussiness. You fix things that don’t need fixing, and chase after imaginary ailments. You jump to wild conclusions and build all kinds of errors into the machine because of your own nervousness.
Impatience is close to boredom but always results from one cause: an underestimation of the amount of time the job will take. You never really know what will come up and very few jobs get done as quickly as planned. Impatience is the first reaction against a setback and can soon turn to anger if you’re not careful.
Impatience, stuckness, underestimation, anxiety, and carelessness. These are just a subset of the feelings and behaviors that pervade dysfunctionally soulless organizations whose sole focus is on “following prescribed process and meeting schedule”.