Unprecedented

November 29, 2017 Leave a comment

In one year, Bitcoin has risen 10X. An incredible, unprecedented event in the history of investment. A substantial price pullback seems in the cards, but the world has never seen a financial innovation like Bitcoin.

Categories: bitcoin

Coming Clean….

November 17, 2017 10 comments

Just in case you were wondering why I haven’t been blogging about Bitcoin, C++ programming, or dysfunctional management behaviors, I’ve decided to come clean….

In late February 2015, I was diagnosed with Stage 3B, Non-Small Cell Lung Cancer (NSCLC).

There, it took almost two fuckin’ years to get the courage to do so, but I’ve said it. Any questions?

Yes, I’m serious; Yahoo Serious!

Categories: Cancer

The State Of Bitcoin

May 23, 2017 2 comments

Well, that’s all fine and dandy for us bitcoiners. What most people who have recently “seen the light” don’t know is that the brutal scaling war raging behind the scenes could cut the price in half (at least) in the near future.

Categories: bitcoin

Encrypting/Decrypting For Confidentiality

May 20, 2017 1 comment

Depending on how they’re designed, there are up to 5 services that a cryptographic system can provide to its users: confidentiality, integrity, authentication, non-repudiation, and access control.

In my last post, Hashing For Integrity, we used the Poco.Crypto library to demonstrate how one-way hash functions can provide for message integrity over unsecured communication channels. In this post, we’ll use the library to “Encrypt/Decrypt For Confidentiality“.

The figure below shows how a matched symmetric key/cipher pair provides the service of “confidentiality” to system users. Since they are “symmetric“, the encryption key is the same as the decryption key. Anyone with the key and matching cipher can decrypt a message (or file) that was encrypted with the same key/cipher pair.

The gnarly issue with symmetric key cryptographic systems is how to securely distribute copies of the key to those, and only those, users who should get the key. Even out-of-band key transfers (via e-mail, snail mail, telephone call, etc.) are vulnerable to being intercepted by “bad guys“. The solution to the “secure key distribution” problem is to use asymmetric key cryptography in conjunction with symmetric key cryptography, but that is for a future blog post.

To experiment with symmetric key encryption/decryption using the Poco.Crypto library, we have added a MyCipher class to the bulldozer00/PocoCryptoExample GitHub repository.

The design of the Poco.Crypto library requires users to create a CipherKey object directly, and then load the key into a Cipher acquired through a CipherFactory singleton.

The MyCipher class data members and associated constructor code are shown below:

After the MyCipher constructor has finished executing, the following CipherKey characteristics appear in the console:

Note that for human readability, the 256-bit key value is printed out as a series of 64 four-bit hex nibbles.

So, where did we get the “aes-256-cbc” key name from, and are there different key generation algorithms that we could’ve used instead? We got the key name from the openssl library by entering “openssl -h” at the command line:

Using “des-ede” as the key name, we get the following console output after the constructor has finished executing:

The number of bits in a CipherKey is important. The larger the number of bits, the harder it is for the bad guys to crack the system.

So, now that we have a matched CipherKey and Cipher object pair, let’s put them to use to encrypt/decrypt a ClearText message. As you can see below, the Poco.Crypto library makes the implementation of the MyCipher::encryptClearTextMsg() and MyCipher::decryptCipherTextMsg()  member functions trivially simple.

The unit test code that ensures that the ClearText message can be encrypted/decrypted is present in the  MyCipherTest.cpp file:

The console output after running the test is as anticipated:

So, there you have it. In this post, we used the Poco.Crypto library to simulate a crypto system that provides its users with confidentiality through encryption/decryption. I hope this post was useful to those C++ programmers who are interested in cryptographic systems and want to get started coding with the Poco.Crypto library.

Categories: C++ Tags:

Hashing For Integrity

May 3, 2017 5 comments

Introduction

Before I discovered the wild and wacky world of Bitcoin, I didn’t pay much attention to cryptography or system security. Those intertwined subject areas, though important, seemed boring to me. Plus, the field is loaded with all kinds of rich, complex terminology and deep, bit-wise, computationally intensive mathematics:

Symmetric/asymmetric key generation algorithms, secure key distribution, private/public key pairs (RSA), block and stream cipher algorithms (DES, 3DES, AES, Blowfish), encryption by substitution/transposition, hashing algorithms for integrity (MD5, SHA-1, SHA-2, RIPEMD-160), digital signatures, message digests, confidentiality, authentication, non-repudiation.

But now that I’m a “Bitcoiner“, all these topics suddenly seem interesting to me. Thus, I decided to look for a C++ Crypto library and write some buggy, exploratory code to learn what the hell is going on.

Out of the gate, I didn’t want to get bogged down or overwhelmed by interfacing directly with the bedrock openssl C library API which underpins most higher level Crypto libraries. I wanted an easier-to-use, abstraction-oriented wrapper library on top of it that would shield me from all of the low-level details in openssl.

I didn’t have to look far for a nice C++ Crypto library. Poco has a Crypto library. Poco is a well-known, highly polished, set of general purpose, open source libraries/frameworks that is widely deployed across the globe.

The next thing I needed to do was to narrow down the scope of the project. Instead of hacking together some big complicated application, I decided to learn how “hashing achieves message integrity“.

Hashing For Integrity

A hash is a one-way function. When a message, large or small, is sent through a hashing algorithm, the resulting output is NOT an encrypted message that can be decrypted further downstream. It’s a simple, fixed size (in terms of number of bits) value also known as a message “fingerprint“, or “digest“.

Let’s say “I owe you $100” and you want an acknowledgment from me of that fact. I could write it down on paper, sign the note, and give it to you. My signature conveys the fact that I authorized the IOU and it gives the message a degree of integrity.

If you received the note without my signature, I could deny the IOU and I could deny ever sending the note to you. I can even say that you made it up out of thin air to scam $100 from me.

By providing you with an electronic version of the “I owe you $100” note AND a fingerprint in the form of a hash value derived from the content of the note, you could at least verify that the note content is legit and hasn’t been tampered with.

You would do this verification by locally running the note content through the exact same hash function I did, and then comparing your hash value with the fingerprint/digest supplied directly with the note. If they match, then the note is legit. Otherwise, it means that the note was “altered” sometime after I generated the first hash value. Any little change, even a one bit mutation to the message, invalidates the fingerprint derived from the unaltered message. That’s the nature of hashing.

Using Poco::Crypto::SHA1Engine For Hashing

The class diagram in the figure below shows what I needed from the Poco.Crypto library in order to code up and simulate the behavior of a hashing system.

The Poco::Crypto::SHA1Engine class implements the SHA-1 message digest algorithm (FIPS 180-1; see http://www.itl.nist.gov/fipspubs/fip180-1.htm). Here’s a simple code usage example in which an engine is created, a message is sent to it, and a hash value is returned:

The following console output from the code shows that the SHA1Engine, in its default configuration, generates 160-bit hash values. The SHA1Engine::digestToHex() function converts the bit pattern into 40 four-bit nibbles and returns a human-readable hexadecimal string:

The PocoCryptoExample Project

Using the Eclipse CDT, I coded up the design in the class diagram from the previous section. The PocoCryptoExample source tree is available on GitHub here: https://github.com/bulldozer00/PocoCryptoExample. All you have to do is download the source tree, import it into Eclipse as “an existing Eclipse project“, and build the executable using the internal CDT builder (see README.md).

The test driver code that exercises the design (in MessageIntegrityTest.cpp) is as follows:

  1. Initializes the Poco Crypto library (which in turn initializes the openssl library).
  2. Creates the Sender, the Recipient, and the “I owe you $100” message.
  3. Invokes the Sender to compute the message fingerprint, set the message and fingerprint within a ChannelMessage, and transmit the result to the Recipient.
  4. Upon receipt of the transaction status (success/failure) back from the Recipient, the test driver prints the result to the console.

The test driver then:

  1. Commands the Sender to simulate a man-in-the-middle attack by maliciously changing the message content to “You owe me $10,000” and sending a new ChannelMessage to the Recipient without changing the fingerprint from the prior ChannelMessage.
  2. Upon receipt of the transaction status (success/failure), the test driver prints the result to the console.
  3. Uninitializes the Poco Crypto library (which in turn uninitializes the openssl library) and exits.

Here is the console output produced by the program:

What’s Next?

A next step in the learning process would be to integrate a Poco::Crypto::Cipher class into the design to add message encryption/decryption capability. As you can see from the example code on the Poco Cipher API page, it’s not as easy as adding the SHA1Engine. It is more difficult to create/use a Cipher object because the class depends on the CipherFactory and CipherKey classes.

With those additions, we can simulate sending an encrypted and fingerprinted (integrity AND confidentiality) message to “matched” Recipients who have the same Poco.Crypto objects in their code. A Recipient would then use the SHA1Engine to first check that the fingerprint belongs to the message. If the message passes that test, the Recipient would then use the Cipher to decrypt the message content.

You know what would be great? If the ISO C++ standards committee added a <crypto> library to the C++ standard library to complement the impressive random number generation and probability distribution functionality available in <random>.

Categories: C++ Tags: ,

Bitcoin Echo Chambers

Categories: bitcoin

Mind-To-Code-To-Mind And Mind-To-Model-To-Code

April 27, 2017 Leave a comment

Since my previous post, I’ve been thinking in more detail about how we manage to move an integrated set of static structures and dynamic behaviors out of our heads and into a tree of associated source code files. A friend of mine, Bill Livingston, coined the phrase “bridging the gap” across the “Gulf Of Human Intellect” (GOHI) for this creative process.

The figure below shows two methods of transcending the GOHI: direct mind-to-code (M2C), and indirect mind-to-model-to-code (M2M2C). The difference is that M2M2C is scalable, whereas M2C is not. Note that both methods are iterative adventures.

Past a certain system size (7 +/- 2 interconnected chunks?), no one can naturally fit a big system model entirely within their head without experiencing mental duress. By employing a concrete model as a “cache” between the mind and the code, M2M2C can give large performance and confidence boosts to the mind. But, one has to want to actively learn how to model ideas in order to achieve these benefits.

From Mind-To-Code (M2C)

How do we grow from a freshly minted programmer into a well-rounded, experienced, software engineer? Do we start learning from the top-down about abstract systems, architecture, design, and/or software development processes? Or do we start learning from the bottom up about concrete languages, compilers, linkers, build systems, version control systems?

It’s natural to start from the bottom-up; learning how to program “hands on“. Thus, after learning our first language-specific constructs, we write our first “Hello World” program. We use M2C to dump our mind’s abstract content directly into a concrete main.cpp  file via an automatic, effortless, Vulcan mind-meld process.

Next, we learn, apply, and remember over time a growing set of language and library features, idioms, semantics, and syntax. With the addition of these language technical details into our mind space, we gain confidence, and we can tackle bigger programming problems. We can now hold a fairly detailed vision of bigger programs in our minds – all at once.

From Mind-To-Model-To-Code (M2M2C)

However, as we continue to grow, we start to yearn to build even bigger, more useful, valuable systems that we know we can’t hold together in our minds – all at once. We turn “upward“, stretching our intellectual capabilities toward the abstract stuff in the clouds. We learn how to apply heuristics and patterns to create and capture design and architecture artifacts.

Thus, unless we want to go down the language lawyer/teacher route, we learn how to think outside of the low level “language space“. We start thinking in terms of “design space“, creating cohesive functional units of structure/behavior and the mechanisms of loosely connecting them together for inter-program and intra-program communication.

We learn how to capture these designs with a modeling tool so we can use the concrete design artifacts as a memory aid and a personal navigational map to code up, integrate, and test the program(s). The design artifacts also serve double duty as a communication aid for others. Since our fragile minds are unreliable and don’t scale linearly, the larger the system (in terms of the number of units, types of units, size of units, and number of unit-to-unit interfaces), the more imperative it is to capture these artifacts and keep them somewhat in sync with the fleeting images we are continuously munching on in our minds.

We don’t want to record too much detail in our model because the overhead burden would be too great if we had to update the concrete model artifacts every time we changed a previous decision. On the other hand, we don’t want to be too miserly. If we don’t record “just enough” detail, we won’t be able to mentally trace back from the artifacts to the “why?” design decisions we made in our heads. That’s the “I don’t know why that’s in the code base or how we got here” syndrome.

A Useful Design Tool

For a modeling tool, we can use plain ole paper sketches that use undecipherable “my own personal notation“, or we can use something more rigorous like basic UML diagrams.

For example, take the static structural model of a simple 3 class design in this UML class diagram:

I reverse-engineered this model out of a small section of the code base in an open source software project. If you know UML, you know that the diagram reads as:

  • A CBlock “is a” CBlockHeader.
  • A CBlock “has” one or more CTransactionRef objects that it creates, owns, and manages during runtime.
  • A CBlockHeader “has” several data members that it creates, owns, and manages during runtime.

Using this graphic artifact, we can produce a well-structured skeleton code base more reliably than by trying to hold the entire design in our heads at once and doing that Vulcan mind-meld thingy directly to code again.

Using the UML class diagram, I coded up the skeletal structure of the program as three pairs of .h + .cpp files. Some UML tools can auto-generate the code skeletons at the push of a button after the model(s) have been manually entered into the tool’s database. But that would be a huge overkill here.

As a sanity test, I wrote a main.cpp file that simply creates and destroys an object of each type:

From Mind-To-Model-To-Code: Repeat And Rise

For really big systems, the ephemeral, qualitative, “ilities” and “itys” tend to ominously pop up out of the shadows during the tail end of a lengthy development effort (during the dreaded system integration & testing phases). They suddenly, but understandably, become as important to success as the visible, “functional feature set“.  After all, if your system is dirt slow (low respons-ivity), and/or crashes often (low reliab-ility ), and/or only accommodates half the number of users as desired (low scala-bility), no one may buy it.

So, in summary, we start out as a junior programmer with limited skills:

Then, assuming we don’t stop learning because “we either know it all already or we’ll figure it out on the fly,” we start transforming into a more skilled software engineer.

Categories: technical, uml Tags: