If you’re curious about the Bitcoin phenomenon and you want to experiment with the technology, Bulldozer00 has an invitation for you.
The first person (that BD00 “knows”) to create their first bitcoin wallet and post a valid bitcoin address in the comments section of this post will get approximately 0.25 USD worth of bitcoin (1,100 bits) from BD00.
By “knows”, BD00 means that he has previously interacted with you either:
- via face-to-face conversation,
- via your perusal of this blog,
- via Twitter, or
- via LinkedIn.
BTW, in case you want to shower BD00 with some of your own bitcoins, here is one of my bitcoin addresses:
Come on. Don’t be a chicken. Give it a try. I promise that you’ll like it and you’ll get hooked, just like BD00 did!
IMPORTANT UPDATE: At 12:30 PM EST, BD00 sent 1100 uBTC to FreeGnu:
During my current foray into the bitcoin system, I made my first peer-to-peer transfer. After exchanging some USD for some bitcoin on Coinbase.com, I moved my bitcoin out of my account’s wallet on the site and into the encrypted wallet on my PC:
It was incredibly easy to execute the transfer – and there were no “wire” fees involved. Coinbase.com made their money off of me by charging a 1% fee when I initially exchanged my USD for BTC. In addition, they paid the minuscule transaction fee that keeps the bitcoin network humming along with the thousands of cooperating “miner” computers that implement, verify, and confirm transactions.
Like many people who trust private banks to store their cash, many people trust middleman sites like the (highly regarded) Coinbase.com to store their bitcoin. However, because of the Mt. Gox fiasco and the current lack of regulation in the brave new bitcoin world, I’ve chosen to cut out the middleman and keep my BTC locally – and offline.
Imagine that I was a migrant worker and I wanted to send some money home to my family in my country of origin. As long as my family had access to the internet, I would be able to instantly send them some bitcoin exactly like I transferred bitcoin between my wallets: direct, instantaneous, peer-to-peer, and without any middleman or accompanying fees.
Instead of following standard Markowitz “portfolio theory” and allocating your investments across low/medium/high risk financial products according to your age group, Nassim Taleb has suggested that investors adhere to an 80-20 “barbell” rule: invest 80% of your funds in the most conservative products available (e.g. US treasury bills) and the other 20% in the wildest, riskiest investments that you can find (e.g. startup funding).
Wild and risky investments are those that may go to zero, but that also have a small chance of going through the roof. The odds of going to zero are much greater than the odds of skyrocketing to Mars.
Mr. Taleb’s thinking is that in a black-swan-triggered extreme financial crisis (e.g. the crash of 2008), a Markowitz-type portfolio will get demolished across the board. The 80% portion of an 80-20 portfolio will suffer from the black swan too, but because of its extremely conservative nature, the likelihood of massive devastation is much lower than for the Markowitz mix. For the remaining 20% segment, the black swan event may actually turn out to be a white swan (e.g. shorting the bull market before the 2008 crash).
I’m too chicken to buck the herd’s “age-based allocation” investment strategy, but I have set aside a small stash of “play money” that I’m willing to lose entirely on a wild and crazy investment.
After a fair amount of grokking, the wild and crazy investment I’ve chosen to risk my play money pool on is… the fledgling Bitcoin movement.
I am by no means a libertarian ideologue (I lean toward the left), but the Bitcoin movement is fascinating to me for the following reason (plucked from the book “Digital Gold” by Nathaniel Popper):
“The root problem with conventional currency is all the trust that’s required to make it work,” Satoshi (the mysterious, unknown creator of the peer-to-peer Bitcoin network protocol) wrote. “The central bank must be trusted not to debase the currency.” The issue that Satoshi referred to here—currency debasement—was, in fact, a problem with existing monetary systems that had much more potential widespread appeal, especially in the wake of the government-sponsored bank bailouts that had occurred just a few months earlier in the United States. Many believe that the end of the gold standard (by Nixon in the 70s) allowed central banks to print money with no restraint, hurting the long-term value of the dollar and allowing for unbridled government spending.
Another reason why I’m drawn to the bitcoin community is the potential of the system to help the poor – those people who do not have access to bank accounts or credit cards and are forced to deal in cash. The increasing uptake of Bitcoin in Argentina, whose government is notoriously fiscally irresponsible with its peso currency, is enough evidence for me to believe that the Bitcoin network will do for money what the internet has done for information.
The thing that makes Bitcoin a wild and crazy investment is that….
Bitcoin itself is always one big hack away from total failure.
The fact that the literal disappearance of hundreds of millions of dollars’ worth of bitcoins in the Mt. Gox implosion and the high-profile “Silk Road” disaster did not kill Bitcoin in the crib illustrates the underlying strength and robustness of the system. Thus, I made the first of several Bitcoin buys through the Coinbase exchange:
My strategy is to buy and hold for the long, long term. Oh, and to insulate myself from another Mt. Gox type debacle, I’m moving my Bitcoins off of the Coinbase site and into my own personal Bitcoin wallet as soon as I receive them.
If you want to start grokking Bitcoin for yourself, I suggest looking at the following resources that won me over:
Book: Digital Gold – Nathaniel Popper
EconTalk Podcast: Wences Casares on Bitcoin and Xapo
EconTalk Podcast: Nathaniel Popper on Bitcoin and Digital Gold
One of the goals of each evolutionary increment in C++ is to decrease the probability of an average programmer making mistakes by supplanting “old style” features/idioms with new, easier-to-use, and more expressive alternatives. The following code sample attempts to show an example of this evolution from C++98/03 to C++11 to C++14.
In C++98/03, there were two ways of clearing out the set of inner vectors in the vector-of-vectors-of-doubles data structure encapsulated by MyClass. One could use a plain ole for-loop or the std::for_each() STL algorithm coupled with a remotely defined function object (ClearVecFunctor). I suspect that with the exception of clever language lawyers, blue collar programmers (like me) used the simple for-loop option because of its reduced verbosity and compactness of expression.
With the arrival of C++11 on the scene, two more options became available to programmers: the range-for loop, and the std::for_each() algorithm combined with an inline-defined lambda function. The range-for loop eliminated the chance of “off-by-one” errors and the lambda function eliminated the inconvenience of having to write a remotely located functor class.
The ratification of the C++14 standard brought yet another convenient option to the table: the polymorphic lambda. By using auto in the lambda argument list, the programmer is relieved of the obligation to explicitly write out the full type name of the argument.
This example is just one of many evolutionary improvements incorporated into the language. Hopefully, C++17 will introduce many more.
Note: The code compiles with no warnings under gcc 4.9.2. However, as you can see from the bug flagged on line 41 in the image, the Eclipse CDT indexer has not yet caught up with the C++14 specification. Because auto is used in place of the explicit type name in the lambda argument list, the indexer cannot resolve the std::vector::clear() member function.
8/26/15 Update – As a result of reader code reviews provided in the comments section of this post, I’ve updated the code as follows:
8/29/15 Update – A Fix to the Line 27 begin-begin bug:
Flat And Independent
Assume that company ABC develops products for customers in domain XYZ as follows:
To remove the “development process” variable from further consideration in this post (because, thanks to consultants, it seems like everybody and their brother thinks the process (traditional, Scrum, XP, LeSS, SAFe, Lean, etc.) is the maker or breaker of success), assume that all the teams use the same development process.
As the figure implies, each product is tailor-made for each customer. Since there are no inter-team dependencies and there is no hierarchy in the organizational structure, each team is an island unto itself and fully responsible for its own success or failure.
The tradeoff for this team independence is that the cost of development for company ABC may be higher than under alternative strategies due to the duplication of work inherent in this Flat And Independent (FAIN) approach. For example, the above figure shows that components A and B are developed from scratch 3 times and component C is developed twice. What a waste of resources, no? However, the assumption that components A and B only need to be developed once and can be reused across the board requires that component A be identical for all customers and component C be identical for customers 2 and 3. Even though the products are targeted at the same domain, this may not be true. The amount of overlapping functionality for a given component depends on the amount of overlap between the customer requirements applicable to that component:
If there is zero requirements overlap, or the amount of overlap is so small that it’s not worth the expense of gauging it, then financing three separate component development efforts is more economically viable and schedule-friendly than trying to ferret out all the overlaps and embracing the alternative, Hierarchical And Inter-Dependent (HAID) strategy…
Hierarchical And Inter-Dependent
Now, assume that company DEF also develops products for customers in domain XYZ, but the org employs the HAID strategy as follows:
In this specific instantiation of the HAID (aka product line) approach:
- Core asset component B is developed once and reused three times
- Core asset components A and C are developed once and reused twice
Besides the obvious downside of core asset components D, E, and F being developed but not reused at all (violating YAGNI in this specific case where it actually applies), there is a less obvious but insidious inefficiency in the two-layer hierarchical structure: the product teams are dependent on, and thus at the mercy of, the core assets team. The cost and schedule inefficiencies introduced by this hierarchical dependency can make the HAID approach less economically viable than the traditional, seemingly wasteful, FAIN approach. But wait! It’s worse than that. If you’ve been immersed in the HAID way of life for too long, like a fish in water that has no concept of what the hell water is, you may not even know that you’d be better off if you had initially chosen, or now reverted to, the FAIN strategy.
Inappropriate application of, or poor execution of, the HAID approach to product development reminds me of the classic framework dilemma in software development. You know the feeling. It’s when you break out into a cold sweat after you choose a development framework (or development process!) and you suddenly realize that you’ve handcuffed yourself into submission after it’s too late to reverse the decision. D’oh!
I guess the moral of this story is nothing new: “just because you changed strategies to become more effective doesn’t make it so.” Well, maybe there is no moral, but I had to end this post some-freakin’-how.
My previous post highlighted inter-company culture clashes. This followup highlights the most insidious intra-company culture clash:
A dear twitter friend recently sent me a link to this case study written by a LeSS management consultant: “Thales Surface Radar”. Since I’ve been an active contributor to the development of multi-million dollar air defense and air traffic control radars for decades, I was excited to finally see a case study directly applicable to my domain of interest.
On my first reading, I found it difficult to separate the wheat from the chaff. Sadly, I experienced the same frustrating feeling on my second read-through – even though I slooowed down and arduously tried to absorb the content. Honestly, I tried, I really tried. I’m just not smart enough to “get it”. No “ah-hah!” moments for BD00.
I think the case study is just another self-promoting, fluffy, smoke-and-mirrors piece written by a stereotypical consultant who wants your money. What do you think?