On my current project, we’re joyfully using C++11 to write our computationally dense target processing software. We’ve found that std::shared_ptr and std::unique_ptr are extremely useful classes for avoiding dreaded memory leaks. However, I find it mildly irritating that there is no std::make_unique to complement std::make_shared. It’s great that std::make_unique will be included in the C++14 standard, but until then, whenever we use a std::unique_ptr we gotta include a fugly “new” in our C++11 code:
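Something like this, where Widget is just a hypothetical stand-in class for illustration:

```cpp
#include <memory>

struct Widget {
    explicit Widget(int id) : id(id) {}
    int id;
};

int main() {
    // shared_ptr has a factory function, so no naked "new" in sight:
    auto sp = std::make_shared<Widget>(1);

    // unique_ptr has no C++11 factory, so a raw "new" leaks into our code:
    std::unique_ptr<Widget> up(new Widget(2));
}
```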
But wait! I stumbled across this helpful Herb Sutter slide:
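The slide boils down to this one-liner (essentially the same stopgap Herb Sutter shows in his GotW #102 article):

```cpp
#include <memory>
#include <utility>

// C++11 stand-in for the std::make_unique we won't get until C++14:
template<typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args) {
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

// Usage: auto up = make_unique<Widget>(2);  // no visible "new"!
```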
A variadic function template that uses perfect forwarding. It’s outta my league, but… Whoo Hoo! I’m gonna add this sucker to our platform library and start using it ASAP.
In this interesting 2006 slide deck, “C++ in safety-critical applications: the JSF++ coding standard”, Bjarne Stroustrup and Kevin Carroll provide the rationale for selecting C++ as the programming language for the JSF (Joint Strike Fighter) jet project:
First, on the language selection:
- “Did not want to translate OO design into language that does not support OO capabilities.”
- “Prospective engineers expressed very little interest in Ada. Ada tool chains were in decline.”
- “C++ satisfied language selection criteria as well as staffing concerns.”
They also articulated the design philosophy behind the set of rules as:
- “Provide “safer” alternatives to known “unsafe” facilities.”
- “Craft rule-set to specifically address undefined behavior.”
- “Ban features with behaviors that are not 100% predictable (from a performance perspective).”
Note that because of the last bullet, post-initialization dynamic memory allocation (using new/delete) and exception handling (using throw/try/catch) were verboten.
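To make that concrete, here’s a minimal sketch (my own illustration written with C++11 conveniences for brevity, not actual JSF++ code) of the style those two bans push you toward: storage carved out up front at a fixed size, and error codes in place of exceptions:

```cpp
#include <cstddef>

enum class Status { Ok, Full };

// Fixed-capacity buffer: all storage is reserved at compile time, so no
// new/delete is ever needed after initialization.
template<typename T, std::size_t Capacity>
class FixedBuffer {
public:
    // Failure is reported via a status code rather than a throw.
    Status push(const T& item) {
        if (count == Capacity) {
            return Status::Full;
        }
        items[count++] = item;
        return Status::Ok;
    }
private:
    T items[Capacity];
    std::size_t count = 0;
};
```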
Interestingly, Bjarne and Kevin also flipped the coin and exposed the weaknesses of language subsetting.

What they didn’t discuss in the slide deck was whether the strengths of imposing a large coding standard on a development team outweigh those nasty weaknesses. I suspect that’s because the decision to impose a coding standard was already a done deal.
Much as we don’t want to admit it, it all comes down to economics. How much is lowering the risk of loss of life worth? No rule set can ever guarantee 100% safety. Like trying to move from 8 nines of availability to 9 nines, the financial and schedule costs of chasing a Utopian “certainty” of safety start exploding exponentially. To add insult to injury, there is always tremendous business pressure to deliver ASAP and, thus, to unconsciously cut corners: jettisoning corner-case system-level testing, or leaving hundreds of “annoying” rule violations unfixed.
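For a sense of scale (my back-of-the-envelope arithmetic, not a figure from the slides): a year holds roughly 31,536,000 seconds, so eight nines of availability (99.999999% uptime) allows about 31,536,000 × 10⁻⁸ ≈ 0.32 seconds of downtime per year, while nine nines allows about 0.03 seconds. Buying back that last 0.29 seconds is where the cost curve goes vertical.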
Does anyone have any data on whether imposing a strict coding standard actually increases the safety of a system? Better yet, is there any data that indicates imposing a standard actually decreases the safety of a system? I doubt that either of these questions can be answered with any unbiased data. We’ll just continue on auto-believing that the answer to the first question is yes because it’s supposed to be self-evident.
Much has been written about the differences between, and similarities across, management and leadership. But unsurprisingly, most managers equate the word “manager” with the word “leader” by default. After all, they’ve been appointed by other “leaders”. Thus, by (their) definition, managers are leaders.
On the other hand, most raw employees equate the word “manager” with just that: “manager”. Err, on second thought, since (as usual) he has no supporting “data”, this BD00 post is prolly full of BS00:
Our old arrogant, egotistical nature (continuously) seeks out sustaining agreement with itself and its distorted opinions. – William Samuel
“Efficient systems are dangerous to themselves and others” – John Gall
A new system is always established with the goal of outright solving, or at least mitigating, a newly perceived problem that can’t be addressed with an existing system. As long as the nature of the problem doesn’t change, continuously optimizing the system for increased efficiency also joyfully increases its effectiveness.
However, the universe being as it is, the nature of the problem is guaranteed to change, and there comes a time when the joy starts morphing into sorrow. That’s because the more efficient a system becomes over time, the more rigid its structure and behavior become and the less open to change it becomes. And the more resistant to change it becomes, the more ineffective it becomes at achieving its original goal – which may no longer be the right goal to strive for!
In the manic drive to make a system more efficient (so that more money can be made with less effort), it’s difficult to detect when the inevitable joy-to-sorrow inflection point manifests. Most managers, being cost-reduction obsessed, never see it coming – and never see that it has swooshed by. Instead of changing the structure and/or behavior of the system to fit the new reality, they continue to tweak the original structure and fine tune the existing behaviors of the system’s elements to minimize the delay from input to output. Then they are confounded when (if?) they detect the decreased effectiveness of their actions. D’oh! I hate when that happens.
There’s quite a difference between thinking and behaving as if “the world should be better aligned with my wishes!” and “the world could be better aligned with my wishes”. If your psychic disposition is toward the former, you’ll most likely be walking around frustrated and bitchy most of the time. If it’s toward the latter, you’ll most likely be more accepting and graceful.

BD00 seems to think that he’s been experiencing a slow shift over the years from thinking in terms of “should!” to thinking in terms of “could”. But of course, it may be just another one of those self-delusions that are packed wall to wall inside of his crippled mind.
Nassim Taleb nails it with this simple but profound sentence:
Our minds are not quite designed to understand how the world works, but, rather, to get out of trouble rapidly and have progeny. – Nassim Taleb (Fooled By Randomness)
We human beings are so full of ourselves. With much hubris, we auto-assume that we are above all other life forms just because we can “think”. We concoct immortal and all-powerful gods in our minds who we “think” are watching over our well-being (but not the well-being of those we don’t like). Then, when something terrible happens, we wonder “why” our gods could allow such a tragedy. Instead, maybe we should contemplate “why not?”.
The ability to “think” has unquestionably made life more comfortable locally for the human race over time. However, it’s questionable whether “thinking” has made human life more comfortable globally. Unlike a “mindless” swarm of locusts that ravages the environment with a vengeance, we “mindful” humans seem to be ravaging our environment and our fellow humans at an increasingly alarming rate as our “thinking” supposedly evolves.
Years ago, I watched a televised debate between Deepak Chopra and atheist Sam Harris. Since Deepak came across at times as defensive, I’ve never felt the need to delve into any of Deepak’s books. Nevertheless, since I instantaneously fell in love with this telling picture from the Chopra LinkedIn post “The Conscious Lifestyle: Awareness Skills”, I just had to copy and paste it here: