The addition, err, redefinition of the auto keyword in C++11 was a great move to reduce code verbosity during the definition of local variables:
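The original snippet here was an image that didn't survive; as a rough reconstruction (the `table` name and contents are mine, not the original's), the verbosity win looks something like this:

```cpp
#include <map>
#include <string>
#include <vector>

// A hypothetical lookup table, used only to illustrate type deduction.
std::map<std::string, std::vector<int>> table{{"alpha", {1, 2, 3}}};

// Without auto, the iterator type must be spelled out in full:
// std::map<std::string, std::vector<int>>::const_iterator it = table.cbegin();

// With auto, the compiler deduces each type from its initializer:
auto it = table.cbegin();   // deduced const_iterator
auto count = table.size();  // deduced std::map<...>::size_type
```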
In addition to this convenient usage, employing auto in conjunction with the new (and initially weird) function-trailing-return-type syntax is useful for defining function templates that manipulate multiple parameterized types (see the third entry in the list of function definitions below).
In the upcoming C++14 standard, auto will also become useful for defining normal, run-of-the-mill, non-template functions. As the fourth entry below illustrates, we’ll be able to use auto in function definitions without having to use the funky function-trailing-return-type syntax (see the useless, but valid, second entry in the list).
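The original "list of function definitions" was also an image; here's my best-guess reconstruction of the four entries described above (the function names are mine, and entry 4 needs a C++14 compiler):

```cpp
#include <string>

// 1) C++11: auto for run-of-the-mill local variable definitions.
int square(int x) {
    auto result = x * x;  // deduced as int
    return result;
}

// 2) Valid but useless here: trailing-return-type syntax on a plain,
//    non-template function whose return type could just be written up front.
auto add_ints(int a, int b) -> int { return a + b; }

// 3) C++11: auto plus a trailing return type in a function template that
//    manipulates two parameterized types; decltype computes the result type.
template <typename T, typename U>
auto add(T t, U u) -> decltype(t + u) { return t + u; }

// 4) C++14: plain auto return type deduction, no trailing syntax needed.
auto concat(const std::string& a, const std::string& b) { return a + b; }
```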
For a more in-depth treatment of C++11’s automatic type deduction capability, check out Herb Sutter’s masterful post on the new AAA (Almost Always Auto) idiom.
When not ranting and raving on this blawg about “great injustices” (LOL) that I perceive are keeping the world from becoming a better place, I design, write, and test real-time radar system software for a living. I use the UML before, during, and after coding to capture, expose, and reason about my software designs. The UML artifacts I concoct serve as a high level coding road map for me; and a communication tool for subject matter experts (in my case, radar system engineers) who don’t know how to (or care to) read C++ code but are keenly interested in how I map their domain-specific requirements/designs into an implementable software design.
I’m not a UML language lawyer and I never intend to be one. Luckily, I’m not forced to use a formal UML-centric tool to generate/evolve my “bent” UML designs (see what I mean by “bent” UML here: Bend It Like Fowler). I simply use MSFT Visio to freely splat symbols and connections on an e-canvas in any way I see fit. Thus, I’m unencumbered by a nanny tool telling me I’m syntactically/semantically “wrong!” and rudely interrupting my thought flow every five minutes.
The 2nd graphic below illustrates an example of one of my typical class diagrams. It models a small, logically cohesive cluster of cooperating classes that represent the “transmit timeline” functionality embedded within a larger “scheduler” component. The scheduler component itself is embedded within yet another, larger scale component composed of a complex amalgam of cooperating hardware and software components; the radar itself.
When fully developed and tested, the radar will be fielded within a hostile environment where it will (hopefully) perform its noble mission of detecting and tracking aircraft in the midst of random noise, unwanted clutter reflections, cleverly uncooperative “enemy” pilots, and atmospheric attenuation/distortion. But I digress, so let me get back to the original intent of this post, which I think has something to do with how and why I use the UML.
The radar transmit timeline is where other necessarily closely coupled scheduler sub-components add/insert commands that tell the radar hardware what to do and when to do it; sometime in the future relative to “now”. As the radar rotates and fires its sophisticated, radio frequency pulse trains out into the ether looking for targets, the scheduler is always “thinking” a few steps ahead of where the antenna beam is currently pointing. The scheduler relentlessly fills the TxTimeline in real time with beam-specific commands. It issues those commands to the hardware early enough for the hardware to be able to queue, set up, and execute the minute transmit details when the antenna arrives at the desired command point. Geeze! I’m digressing yet again off the UML path, so lemme try once more to get back to what I originally wanted to ramble about.
Being an unapologetic UML bender, and not a fan of analysis-paralysis, I never attempt to meticulously show every class attribute, operation, or association on a design diagram. I weave in non-UML symbology as I see fit and I show only those elements I deem important for creating a shared understanding between myself and other interested parties. After all, some low level attributes/operations/classes/associations will “go away” as my learning unfolds and others will “emerge” during coding anyway, so why waste the time?
Notice the “revision number” in the lower right hand corner of the above class diagram. It hints that I continuously keep the diagram in sync with the code as I write it. In fact, I keep the applicable diagram(s) open right next to my code editor as I hack away. As a PAYGO practitioner, I bounce back and forth between code & UML artifacts whenever I want to.
The UML sequence diagram below visualizes the participatory role of the TxTimeline object in a larger system context composed of other peer objects within the scheduler. For fear of unethically disclosing intellectual property, I’m not gonna walk through a textual explanation of the operational behavior of the scheduler component as “a whole”. The purpose of presenting the sequence diagram is simply to show you a real-world example where “one diagram is not enough” for me to capture the design of a software component containing a substantial amount of “essential complexity”. As a matter of fact, at this moment in time, I have generated a set of 7+ leveled and balanced class/sequence/activity diagrams to steer my coding effort. I always start coding/testing with class skeletons and I iteratively add muscles/tendons/ligaments/organs to the Frankensteinian beast over time.
In this post, I opened up my trench coat and showed you my… attempted to share with you an intimate glimpse into the way I personally design & develop software. In my process, the design is not done “all upfront”, but a purely subjective mix of mostly high- and low-level details is indeed created upfront. I think of it as “Big Design, But Not All Upfront”.
Despite what some code-centric, design-agnostic, software development processes advocate, in my mind, it’s not just about the code. The code is simply the lowest level, most concrete, model of the solution. The practices of design generation/capture and code slinging/testing in my world are intimately and inextricably coupled. I’m not smart enough to go directly to code from a user story, a one-liner work backlog entry, a whiteboard doodle, or a set of casual, undocumented, face-to-face conversations. In my domain, real-time surveillance radar systems, expressing and capturing a fair amount of formal detail is (rightly) required up front. So, screw you to any and all NoUML, no-documentation, jihadists who happen to stumble upon this post.
In the context of complex decisions with uncertain outcomes and no obvious right answer, the managerial mind inevitably longs for some handrails to grasp amid the smoke and flames. Strategic planning offers that consolation – or illusion – of a sure path to the future. – Matthew Stewart
In “The Management Myth”, Matthew Stewart researches how the business of “Business Strategy” got started and how it evolved over the decades. He (dis)credits Igor Ansoff with starting the phantom fad founded on “nonfalsifiable tautologies, generic reminders, and pompous maxims”. Mr. Stewart also credits mainstream strategy guru Michael Porter with growing the beast in the nineties into the mega-business it is today.
Perhaps the most interesting outcome from the rise of the business of strategy was the stratification of “management” into two classes, top management and middle management:
Top management takes responsibility for deciding on the mix of businesses a corporation ought to pursue and for judging the performance of business unit managers. Middle management is said to be responsible for the execution of activities within specific lines of business. This division within management has created a new and problematic social reality. In earlier times, there was one management and there was one labor, and telling the two apart was a fairly simple matter of looking at the clothes they wore. The rise of middle management has resulted in the emergence of a large group of individuals who technically count as managers and sartorially look the part but nonetheless live very far down the elevator shaft from the people who actually have power. – Matthew Stewart
I always wondered how the delineation between “top” and “middle” management came about. Now I know why.
All self-credentialed management gurus base their sage “advice” on some sort of underlying pseudo-science. To keep the gravy train a rollin’, they gotta keep up with the science du jour; and today’s fresh produce seems to be the highly acclaimed field of neuroscience. But beware….
Studying neurobiology to understand humans is like studying ink to understand literature – Nassim Taleb
So, what preceded neuroscience as the rack du jour for the guru aristocracy to hang their collection baskets on? BD00 is delighted you asked! He’ll not only give you the answer to that question, he’ll gift you the entire history of the march of the gurus:
As you might guess, the research effort that went into the development of this guru-march graphic required many staff hours distributed over dozens of sprints from a highly effective, self-organized, horizontally-scaled, agile team facilitated by a Scrum Alliance certified Scrum Master.
Without a doubt, the most impactful (and depressing) management book I’ve read over the past few decades is Matthew Stewart’s “The Management Myth”. In his unforgettable masterpiece, Mr. Stewart interweaves his personal rise-and-fall story as a highly paid management consultant with the story of the development of management “science” during the 20th century. Both tracts are highly engaging, thought-provoking, and as I said, depressing reads.
At the end of this post, I’m gonna present a passage from Matt’s book that compares the Winslow and Mayo approaches to “scientific” management. But before I do, I feel the need to provide some context on the slots occupied by Winslow and Mayo in the annals of management “science”.
The Taylor Way
Frederick Winslow Taylor is considered by most to be the father of “scientific” management. In his management model, there are two classes of people: the thinkers (managers) and the doers (workers). Thinkers are elites and workers are dumbasses. By increasing piece-rate pay, Taylor’s model can be used to mechanistically increase efficiency, although the gain doesn’t come for free. Executed “scientifically”, though, the model’s increase in labor cost is dwarfed by its increase in profits.
The Mayo Way
Elton Mayo, although not nearly as famous as Doug MacGregor (the eloquent Theory X and Theory Y guy whom I liked very much before reading this obscene book), is considered to be one of the top “scientists” of, and perhaps the creator of, the human relations branch of management (pseudo)science. In Mayo’s management model, there are also two classes of people, the thinkers (managers) and the feelers (workers). Thinkers are also elites, but workers are bundles of emotions. By manipulating emotions, Mayo’s model can be used to “humanely” increase efficiency. But unlike the reviled, inhumane Taylor model, the efficiency gains from Mayo’s “nice” model are totally free. A double win! Productivity gains in an ethical manner with no additionally incurred financial cost to the dudes in the head shed. Management is happy and the workery is a happy, self-realized community. W00t!
OK, now with the context in place, here’s the passage I promised:
Mayo’s drive for control makes Taylor look like a placard-waving champion of the workingman. The father of scientific management may have referred to his workers as “drays” and “oxen,” but with his incentive-based piece-rate systems he nonetheless took for granted that these beasts of burden had the capacity to make economic decisions for themselves on the basis of their material self-interest. In Mayo’s world, however, the workers of the world lack this basic rational capacity to act in their own self-interest. – Stewart, Matthew (2009-08-10). The Management Myth: Why the Experts Keep Getting it Wrong (p. 135). W. W. Norton & Company. Kindle Edition.
When I first read that passage, it sent an uncomfortable shiver down my spine. Was it as good for you as it was for me?
Shoving all the preceding BD00 drama aside, I’d rather be happy (and duped?) making $XXXX than be miserable making the same amount. I just wish that badass Matt didn’t throw his turd in my damn punchbowl!
In case you’ve been wondering why I’ve been relentlessly railing lately against the guild of agile coaches on Twitter, this post exposes my main motivational force. From what I’ve seen, the coaching community rarely, if ever, thinks or speaks or writes about where the fruits of their so-called 400% efficiency improvements end up. They either auto-assume that the tropical delights are doled out fairly, or the topic is taboo; undiscussable (RIP Chris Argyris).
Take a guess at which CEO recently made all these inspirational statements:
- “Overall I am very pleased with the progress we have made, but we still have a lot of work to do to drive consistent execution and navigate a rapidly shifting marketplace.”
- “We saw improved sales in our mainstream XXXXX business, but we need to improve our pricing discipline and profitability,”
- “We saw improved sales execution, a strong hyper-scale quarter, and stabilization in XXXXX complimented by revenue growth in YYYYYYY”
- “We improved our share position in all three regions”
- “We continue to manage the end-to-end cost structure of our XXXXX business with profitability very much in mind.”
- “Looking forward we will stay committed to smart capital allocation and profitable growth.”
- “As we said at our security analyst meeting last month, we believe we can grow both margin and share over the longer term. We’ll continue to be aggressive in targeted cases, but we have more opportunity to improve our profitability”
If you’re expecting an answer from BD00, then fuggedaboud it. You can pick any CEO because the vast majority of C-execs speak in this same tongue. But ya know what? Despite the standard BD00 sarcasm oozing from this post, the “system” demands that somebody do it; and I’m thankful that those who do it, do do it. I wouldn’t want to do it. In addition to not fitting into the physical and psychological profiles required by the C-level community, it’s not my cup of tea.
If you’re a student (or self-proclaimed/credentialed “expert”) of institutional behavior, there’s no doubt that you’ve heard of Doug MacGregor’s famous Theory X and Theory Y worldviews regarding social attitudes within organizations. And, if you’re a manager who is not into political suicide, you at least publicly espouse allegiance to the more ethically pleasing Theory Y view.
Well, in “The Management Myth: Why the Experts Keep Getting it Wrong”, philosopher-turned-business-consultant-turned-philosopher-again Matt Stewart concocts an interesting, but perhaps more pragmatic, Theory T:
Theory T (for tragic): Some degree of conflict is inherent in all forms of social organization. Sometimes the self is at odds with the community, sometimes the community is at odds with itself, and sometimes, as Thomas Hobbes pointed out, it’s a war of all against all. – Matt Stewart
Perhaps shockingly, but not totally out of the realm of possibility, Matt concludes:
It (Theory Y) is an attempt to trick our ethical intuitions – that is, to make workers believe that they are being well treated when in fact they are being exploited.
In this unsettling but thought-tickling view, Mr. Stewart asserts that the aim of both the bad-X and good-Y theories is to ultimately exploit the workery, but only Theory X is transparently upfront about it.
My twitter bio reads: “Fumbling, bumbling, stumbling, exploring, discovering, and being. So many ings!”. As that “ing-ful” first sentence implies, I’m always poking around for new ideas and alternative ways of looking at various aspects of the world. To BD00, ing-ing one’s way through life is a big part of really living life itself. Life is too short to stop ing-ing. But hey, it’s just badass BD00’s opinion; it doesn’t have to be yours.
When I first discover some novel and interesting work from someone I’ve never heard of before, my levels of excitement and curiosity rise. I then dive a little deeper into the work in an honest attempt at ferreting out and understanding its real foundational substance. If (heaven forbid!) I judge a newly discovered work as “meh”, then I move my attention onward toward the next adventurous expedition. There’s no sense in wasting time on something that doesn’t tingle my nerve endings with new meaning. Again, life is too short, no?
If I judge that a newly discovered work is “good” or “bad”, then I get hooked and my current mental models of the world get rattled to an extent proportional to the work’s influence over me. Hell, my mental model(s) may even move off their concrete foundations a bit. In the areas of systems thinking and institutional behaving, the brilliant works of people like Deming, Ackoff, Argyris, MacGregor, Livingston, Warfield, Powers, Starkermann, Forrester, Meadows, Bateson, and Wheatley have considerably shaped my foundational views.
In the interest of full disclosure, I’ve decided to share with you below the relatively benign (compared to this people-oriented, blasphemous model) state transition diagram model of what I suppose goes on inside BD00’s forever ing-ing mind. As you can surmise, the external behaviors (speaking, writing) that I manifest while dwelling in the “sharing” state are bound to piss some people off. Also notice that, in homage to my man Shakespeare, I have inserted a “pausing” state in the model. Its purpose, which doesn’t always get fulfilled, is to inhibit “the rush to judgment” malady that we all exhibit to some extent.
Essentially, all models are wrong, but some are useful – George Box
What does your thinking model look like? I’m especially interested in hearing from those of you who “think” you have transcended the innate human trait of judging objects – the set of which includes people. What would a world without judging look like? Would it be worth striving toward a world without any judging at all? Is it realistic to think there can be a world where people only judge “non-people” objects? BD00 doesn’t “think” so. D’oh!
My Twitter best-buddy tweeted this to me last night:
Sure enough, Richard was right:
D’oh! The production was performed covertly; totally unauthorized and unapproved by BD00 himself. An outrage!
At first, BD00 considered inflicting his powerful law firm (Dewey, Cheetum, and Howe) upon the masterminds behind the flick. But upon further inspection, BD00 decided to keep the dogs caged. He discovered that the movie portrayed him as a gentle giant (think Shrek) with unparalleled leadership skills (think Gandhi) and an aura of self-confident invincibility (think 007). Think multiple inheritance:
Since the director obviously hit the nail on the head, the special effects are state of the art, and the cinematography is stunning, there will be no messy lawsuit or accompanying media frenzy.