Given a design that requires a bidirectional association between two peer classes, here is how it can be implemented in C++:
Note: I’m using structs here instead of classes to keep the code in this post less verbose.
The key to implementing a bidirectional association in C++ is including a forward struct declaration statement in each of the header files. If you try to code the bidirectional relationship with #include directives instead of forward declarations:
you’ll have unknowingly caused a downstream problem, because you’ve introduced a circular dependency into the compiler’s include path. To show this, let’s implement the following larger design:
Now, if we try to create a Controller object in a .cpp file like this:
the compiler will be happy with the forward class declarations implementation. However, it will barf on the circularly dependent #include directives implementation. The compiler will produce a set of confusing errors akin to this:
I can’t remember the exact details of the last time I tried coding up a design that had a bidirectional relationship between two classes, but I do remember creating an alternative design that did not require one.
In “old” C++, object factories had little choice but to return unsafe naked pointers to users. In “new” C++, factories can return safe smart pointers. The code snippet below contrasts the old with the new.
The next code snippet highlights the difference in safety between the old and the new.
When a caller uses the old factory technique, safety may be compromised in two ways:
- If an exception is thrown in the caller’s code after the object is created but before the delete statement is executed, we have a leak.
- If the user “forgets” to write the delete statement, we have a leak.
Returning a smart pointer from a factory relegates both of these risks to the dustbin of history.
Check out the simple unit test code fragment below. What the hell is the #define private public preprocessor directive doing in there?
Now look at the simple definition of MyClass:
The purpose of the preprocessor #define statement is to provide a simple, elegant way of being able to unit test the privately defined MyClass::privateImpl() function. It sure beats the kludgy “friend class technique” that blasphemously couples the product code base to the unit test code base, no? It also beats the commonly abused technique of specifying all member functions as public for the sole purpose of being able to invoke them in unit test code, no?
Since the (much vilified) C++ preprocessor is a simple text substitution tool, the compiler sees this definition of MyClass just prior to generating the test executable:
Embarrassingly, I learned this nifty trick only recently from a colleague. Even more embarrassingly, I didn’t think of it myself.
In an ideal world, there is never a need to directly call private functions or inspect private state in a unit test, but I don’t live in an ideal world. I’m not advocating the religious use of the #define technique to do white-box unit testing; it’s just a simple tool to use if you happen to need it. Of course, it would be insane to use it in any production code.
Arne Mertz has a terrific post that covers just about every angle on unit testing C++ code: Unit Tests Are Not Friends.
Gee, look at all those fancy, multidividual contributor titles. And then there is the development team, a.k.a. the title-less induhvidual contributors.
While paging through my portfolio of dorky sketches and e-doodles, I stumbled upon one that I whipped up a long time ago when I was learning about Linux shells:
Unless I’m forced to rush, I always bootstrap my learning experience for new subjects by drawing simplistic, abstract pictures like the above as I study the subject matter. Sometimes I’ll spend several hours drawing contextual pix from the lightweight intro chapter(s) of a skill-acquisition book before diving into the nitty-gritty details. It works for me because I’m not smart enough to learn by skimming over the “what” and immediately diving into the “how”.
Whenever I feel forced to bypass the “what” and bellyflop into the “how” (via pressure to produce “Twice The Software In Half The Time”), I make way more mistakes. Not only does it degrade the quality of my work, it propagates downstream and degrades the quality of the receivers/users of that work.
It’s a great story. A nine page white-paper hatched by a mysterious author that elegantly integrates peer-to-peer networking, cryptography, and economics to create a new form of money the world has never seen. Brilliant, simply brilliant.
Since I can’t remember where I poached this “checklist of maladies” from, I can’t give any attribution to its creator(s). I’m sorry for that, but I wanted to share it anyway:
I do, however, remember that the presenter was talking about agile processes gone awry.
It’s funny how these maladies have been around forever: pre-agile and post-agile. Resilient little cockroaches, no?