Posts Tagged ‘software design’

Is It Safe?


Remember the classic torture scene in “Marathon Man”? D’oh! Now, suppose you had to create the representation of a message that simply aggregates several numerical measures into one data structure:

[Figure: safe (binary) vs. unsafe (text) track message representations]

Given no information other than the fact that some numerical computations must be performed on each individual target track attribute within your code, which implementation would you choose for your internal processing: the binary, type-safe candidate, or the text, type-unsafe option? If you chose the type-unsafe option, then you’d impose a performance penalty on your code every time you needed to perform a computation on your tracks, because you’d have to deserialize and extract the individual track attributes before implementing the computations:

[Figure: eraseTrks – extracting track attributes from text before computing]
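To make the tradeoff concrete, here is a minimal C++ sketch of the two candidate representations. The attribute names and units are hypothetical stand-ins, not taken from the original figure:

```cpp
#include <string>

// Type-safe, binary candidate: attributes are directly usable in computations.
struct TrackMsg {
    double range;    // meters (hypothetical attribute names/units)
    double bearing;  // radians
    double speed;    // meters/second
};

// Type-unsafe, text candidate: every computation must first parse the string.
using TextTrackMsg = std::string;  // e.g. "range=1200.0;bearing=0.52;speed=10.0"

// With the binary form, a computation is a direct member access...
double closingTimeSec(const TrackMsg& t) {
    return t.range / t.speed;
}
// ...whereas the text form forces a parse/extract/convert step on every use.
```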

If your application is required to send/receive track messages over a “wire” between processing nodes, then you’d need to choose some sort of serialization/deserialization protocol along with an over-the-wire message format. Even if you were to choose a text format (JSON, XML) for the wire, be sure to deserialize the input as soon as possible and serialize the output as late as possible. Otherwise, you’ll impose an unnecessary performance hit on your code every time you have to numerically manipulate the fields in your message.

[Figure: InOutTrks – deserialize on input, serialize on output]
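Here is a small sketch of that rule at the wire boundary, reusing the hypothetical TrackMsg struct from above. The whitespace-delimited text format and the function names are invented for illustration:

```cpp
#include <sstream>
#include <string>

struct TrackMsg { double range, bearing, speed; };  // as sketched earlier

// Deserialize as soon as possible: text -> binary once, at the input edge.
TrackMsg parseTrack(const std::string& wire) {
    TrackMsg t{};
    std::istringstream in(wire);
    in >> t.range >> t.bearing >> t.speed;
    return t;
}

// Serialize as late as possible: binary -> text once, at the output edge.
std::string formatTrack(const TrackMsg& t) {
    std::ostringstream out;
    out << t.range << ' ' << t.bearing << ' ' << t.speed;
    return out.str();
}

std::string process(const std::string& inWire) {
    TrackMsg t = parseTrack(inWire);  // deserialize immediately on receipt
    t.range -= t.speed * 0.1;         // all internal math on native doubles,
    t.bearing *= 0.5;                 // with no re-parsing between computations
    return formatTrack(t);            // serialize only when hitting the wire
}
```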

More generally…

[Figure: Serialize-Deserialize]

Who Dun It?

July 6, 2015

First, assume that the figure below faithfully represents two platform infrastructures developed by two different teams for the same application domain. Second, assume that both the JAD and UAS designs provide the exact same functionality to their developer users. Third, assume that the JAD design was more expensive to develop (relative depth) and is more frustrating for developers to use (relative jaggy-ness) than the UAS design.

[Figure: JaggyAndDeep – the JAD and UAS platform infrastructures]

Fourth, assume that you know that an agile team created one of the platforms and a traditional team produced the other – but you don’t know which team created which platform.

[Figure: WhoDunIt]

Now that our four assumptions have been espoused, can you confidently state, and make a compelling case for, which team hatched the JAD monstrosity and which team produced the elegant UAS foundation? I can’t.

VCID, ACID, SCID


First, we have VCID:

[Figure: VCID]

In VCID mode, we iteratively define, at a coarse level of granularity, the Domain-Specific Architecture (DSA) and the revenue-generating portfolio of Apps that we’ll be developing.

Next up, we have ACID:

[Figure: ACID]

In ACID mode, we iteratively define, at a finer level of detail, what each of our Apps will do for our customers and the components that will comprise each App.

Then, we have SCID, where we iteratively cut real App & DSA code and implement per-App stories/use cases/functions:

[Figure: SCID]

But STOP! Contrary to what the previous paragraphs imply, the “CID”s shouldn’t be managed as a sequential, three-step, waterfall execution from the abstract world of concepts to the real world of concrete code. If they are, your work is probably doomed. The CIDs should inform each other: when work in one CID exposes an error in another CID, a transition into the flawed CID state should be executed to repair the error.

[Figure: CID STM – state transitions among the three CIDs]
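As a rough illustration, the error-correcting transitions could be modeled as a trivial state function. The state names come from the post; the flaw-detection triggers are hypothetical:

```cpp
// The three operating states of the product development system.
enum class Cid { Vcid, Acid, Scid };

// When work in one CID exposes a flaw seeded in another, transition back
// into the flawed state to repair it; otherwise keep cutting code in SCID.
Cid next(bool flawInVcid, bool flawInAcid) {
    if (flawInVcid) return Cid::Vcid;  // repair the coarse DSA/App portfolio
    if (flawInAcid) return Cid::Acid;  // repair an App/component definition
    return Cid::Scid;                  // no known flaws: keep implementing
}
```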

Managed correctly, your product development system becomes a dynamically executing, inter-coupled set of operating states with error-correcting feedback loops that steer the system toward its goal of providing value to your customers and profits to your coffers.

[Figure: CID Timeline]

Beware Of Micro-Fragmentation

January 18, 2015

While watching Neal Ford’s terrific “Agile Engineering Practices” video series, I paid close attention to the segment in which he interactively demonstrated the technique of Test Driven Development (TDD). At the end of his well-orchestrated example, which was to design/write/test code that determines whether an integer is a perfect number, Mr. Ford presented the following side-by-side summary comparison of the resulting “traditional” Code Before Test (CBT) and “agile” TDD designs.

[Figure: Ford TDD Example – CBT and TDD designs side by side]
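Since I can’t reprint Mr. Ford’s actual code here, below is a rough C++ approximation of the two shapes, using the classic definition that a perfect number equals the sum of its proper divisors. The helper names in the TDD version are invented:

```cpp
#include <numeric>
#include <vector>

// Left-hand shape: the compact CBT design, all logic in one cohesive method.
bool isPerfectCBT(int n) {
    int sum = 0;
    for (int i = 1; i < n; ++i) {
        if (n % i == 0) sum += i;  // accumulate proper divisors in place
    }
    return sum == n;               // perfect: n equals the sum of its divisors
}

// Right-hand shape: the micro-fragmented TDD design, the same logic split
// into separately testable helper functions.
std::vector<int> properDivisors(int n) {
    std::vector<int> divs;
    for (int i = 1; i < n; ++i) {
        if (n % i == 0) divs.push_back(i);
    }
    return divs;
}

int sumOf(const std::vector<int>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0);
}

bool isPerfectTDD(int n) {
    return sumOf(properDivisors(n)) == n;
}
```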

As expected from any good agilista soldier, Mr. Ford extolled the virtues of the TDD-derived design on the right without mentioning any downside whatsoever. However, right off the bat, I judged (and still do) that the compact, cohesive, code-all-in-close-proximity CBT design on the left is more readable, understandable, and maintainable than the micro-fragmented TDD design on the right. If the atomic code in the CBT isPerfect() method on the left had ended up spanning much more space than shown, I might have ended up agreeing with Neal’s final assessment that the TDD result is better – in this specific case. But I (and hopefully you) don’t subscribe to this typical-of-agile-zealots, allegedly-100%-true assertion:

[Figure: TDD Improves]

The downside of TDD (of which, amazingly, there is none according to those who dwell in the TDD cathedral) is eloquently put by Jim Coplien in his classic “Why Most Unit Testing Is Waste” paper:

If you find your testers (or yourself) splitting up functions to support the testing process, you’re destroying your system architecture and code comprehension along with it. Test at a coarser level of granularity. – Jim Coplien
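Applied to the perfect-number sketch above, Coplien’s advice might look like this: test only the coarse-grained public function, leaving the internal helpers untested and therefore free to be merged back into one cohesive method later:

```cpp
#include <cassert>

int main() {
    assert(isPerfectTDD(6) && isPerfectTDD(28));    // known perfect numbers
    assert(!isPerfectTDD(12) && !isPerfectTDD(27)); // known imperfect numbers
    // No separate tests for properDivisors()/sumOf(): the tests survive even
    // if those helpers are later folded back into a single function.
}
```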

As best I can, I try to avoid being an absolutist. Thus, if you think the TDD-generated code structure on the right is “better” than the integrated code on the left, then kudos to you, my friend. The only point I’m trying to make, especially to younger and less experienced software engineers, is this: every decision is a tradeoff. When it comes to your intimate, personal work habits, don’t blindly accept what any expert says at face value – especially “agile” experts.

[Figure: TDD DDT]

Inside-Out, Outside-In

December 22, 2014

There are two common perspectives on the process of architectural design, whether it be for buildings or for software. The first is that a designer starts with nothing—a blank slate, whiteboard, or drawing board—and builds-up an architecture from familiar components until it satisfies the needs of the intended system. The second is that a designer starts with the system needs as a whole, without constraints, and then incrementally identifies and applies constraints to elements of the system in order to differentiate the design space and allow the forces that influence system behavior to flow naturally, in harmony with the system. Where the first emphasizes creativity and unbounded vision, the second emphasizes restraint and understanding of the system context. – “RESTful” Roy Fielding

It might not be a correct interpretation, but BD00 associates Mr. Fielding’s two alternatives with “inside-out” and “outside-in” design.

The figure below illustrates the process of inside-out design. The designer iteratively composes a structure and “hopes” it will integrate smoothly downstream into the context for which it is intended. During the inside-out design process, the parts are king and the system context is secondary.

[Figure: Parts First – inside-out design]

The figure below depicts an outside-in design process. The designer iteratively composes a structure within the bounded constraints of the context (the “whole“) for which it is intended. During the outside-in design process, system context is king and the parts are secondary.

[Figure: Context First – outside-in design]

Because system contexts vary widely from system to system and are usually vaguely defined, messy, and underspecified, designers often opt for the faster inside-out approach. BD00 uses the outside-in design process. Which process do you use?

Slice By Slice Over Layer By Layer

December 5, 2014

Assume that your team was tasked with developing a large software system in any application domain of your choice. Also, assume that in order to manage the functional complexity of the system, your team iteratively applied the “separation of concerns” heuristic during the design process and settled on a cleanly layered system as such:

[Figure: Three Layers]

So, how are you gonna manifest your elegant paper design as a working system running on real, tangible hardware? Should you build it from the bottom up like you make a cake, one layer at a time?

[Figure: Layer By Layer]

Or, should you build it like you eat a cake, one slice at a time?

[Figure: Slice By Slice]

The problem with growing the system layer-by-layer is that you can end up developing functionality in a lower layer that may not ever be needed in the higher layers (an error of commission). You may also miss coding up some lower layer functionality that is indeed required by higher layers because you didn’t know it was needed during the upfront design phase (an error of omission). By employing the incremental slice-by-slice method, you’ll mitigate these commission/omission errors and you’ll have a partially working system at the end of each development step – instead of waiting until layers 1 and 2 are solid enough to start adding layer 3 domain functionality into the mix.
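To make this concrete, here is a toy C++ slice that cuts through all three layers to deliver one small, demonstrable feature. The layer responsibilities and names are invented; the point is that each subsequent slice would add another feature end-to-end rather than completing any single layer first:

```cpp
#include <iostream>
#include <map>
#include <string>

// Layer 1 (infrastructure): only what this slice needs, nothing speculative.
std::map<std::string, double> store;
void persist(const std::string& key, double value) { store[key] = value; }

// Layer 2 (domain): one domain operation, not the whole layer.
void recordReading(const std::string& sensor, double value) {
    persist(sensor, value);
}

// Layer 3 (application): one user-visible feature, demonstrable today.
int main() {
    recordReading("sensor-1", 42.0);
    std::cout << "stored " << store.size() << " reading(s)\n";
}
```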

In the context of organizational growth, Russell Ackoff once stated something like: “it is better to grow horizontally than vertically”. Applying Russ’s wisdom to the growth of a large software system:

It’s better to grow a software system horizontally, one slice at a time, than vertically, one layer at a time.

The above quote is not some profound, original, BD00 quote. It’s been stated over and over again by multitudes of smart people over the years. BD00 just put his own spin on it.

Abstract Decomposition And Concrete Composition

November 6, 2014

On the left we have the process of abstract decomposition, and on the right we have the process of concrete composition:

[Figure: Decompose And Compose]

Note that during the concrete composition from parts to final system on the right, we gracefully transition through two stable, intermediate forms. Some people and communities, especially those obsessed with “velocity” and “time-to-market”, would say “bollocks” to all those value-subtracting, intermediate steps. We no need no stinking intermediate forms:

[Figure: direct composition]