Posts Tagged ‘requirements’

Both Inane And Insane

April 7, 2015

Let’s start this post off by setting some context. What I’m about to spout concerns the development of large, complex, software systems – not mobile apps or personal web sites. So, let’s rock!

The UML class diagram below depicts a taxonomy of methods for representing and communicating system requirements.

Reqs Taxonomy

On the left side of the diagram, we have the traditional methods: expressing requirements as system capabilities/functions/”shalls”. On the right side of the diagram, we have the relatively newer artifacts: use cases and user stories.
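For the code-minded, the taxonomy can also be sketched as a small class hierarchy. A minimal C++ rendering might look something like this (the names are my own placeholders, not labels lifted from the figure):

    #include <string>

    // One possible rendering of the taxonomy: an abstract requirement
    // artifact, specialized by the traditional methods (capability,
    // function, "shall") on one side and the newer ones (use case,
    // user story) on the other.
    struct ReqArtifact {
        std::string text;                // the stated requirement
        virtual ~ReqArtifact() = default;
    };

    // Traditional, left side of the diagram
    struct Capability : ReqArtifact {};
    struct Function   : ReqArtifact {};
    struct Shall      : ReqArtifact {};

    // Newer, right side of the diagram
    struct UseCase    : ReqArtifact {};
    struct UserStory  : ReqArtifact {};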

When recording requirements for a system you’re going to attempt to build, you can choose a combination of methods as you (or your process police) see fit. In the agile world, the preferred method (as evidenced by 100% of the literature) is to exclusively employ fine-grained user stories – classifying all of the other, more abstract, overarching methods as YAGNI or BRUF.

As the following enhanced diagram shows, whichever method you predominantly choose for recording and communicating requirements to yourself and others, at least some of the artifacts will be inter-coupled. For example, if you choose to start specifying your system as a set of logically cohesive capabilities, then those capabilities will be coupled to some extent – regardless of whether you consciously try to discover and expose those dependencies or not. After all, an operational system is a collection of interacting parts – not a bag of independent parts.

Reqs Associations

Let’s further enhance our class diagram to progressively connect the levels of granularity as follows:

Reqs Granularity

If you start specifying your system as a set of coarse-grained, interacting capabilities, it may be difficult to translate those capabilities directly into code components, packages, and/or classes. Thus, you may want to close the requirements-to-code intellectual gap by thoughtfully decomposing each capability into a set of logically cohesive, but loosely coupled, functions. If that doesn’t bridge the gap to your liking, then you may choose to decompose each function further into a finer set of logically cohesive, but loosely coupled, “shall” statements. The tradeoff is time upfront for time out back:

  • Capabilities -> Source Code
  • Use Cases -> Source Code
  • Capabilities -> Functions -> Source Code
  • Use Cases -> User Stories -> Source Code
  • Capabilities -> Functions -> “Shalls” -> Source Code
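To make the last bullet’s progression concrete, here’s a minimal sketch of how a capabilities-to-functions-to-“shalls” decomposition might be recorded (the type and field names are illustrative, not prescriptive):

    #include <string>
    #include <vector>

    // Illustrative only: each coarse-grained capability owns a set of
    // finer-grained functions, and each function owns a set of "shall"
    // statements. Source code ultimately gets written against the leaves.
    struct ShallStmt {
        std::string text;    // e.g. "The system shall log every operator command."
    };

    struct SystemFunction {
        std::string name;
        std::vector<ShallStmt> shalls;
    };

    struct SystemCapability {
        std::string name;
        std::vector<SystemFunction> functions;
    };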

Note that, taken literally, the last bullet implies that you don’t start writing ANY code until you’ve completed the full, two-step, capabilities-to-“shalls” decomposition. Well, that’s a crock o’ crap. You can, and should, start writing code as soon as you understand a capability and/or function well enough so that you can confidently cut at least some skeletal code. Any process that prohibits writing a single line of code until all the i’s are dotted, all the t’s are crossed, and five “approval” signatures are secured is, as everyone (not just the agile community) knows, both inane and insane.

No Source Code
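What might “confidently cutting some skeletal code” look like? Something as thin as the hypothetical fragment below, where the shape is committed to but the details are stubbed out (the class and its operations are invented for illustration):

    #include <stdexcept>
    #include <string>

    // Hypothetical skeletal code cut from a half-understood "command
    // processing" capability: the structure is committed to; the details
    // wait until the finer-grained requirements firm up.
    class CommandProcessor {
    public:
        void accept(const std::string& rawCommand) {
            // TODO: validate against the (still evolving) command "shalls"
            throw std::logic_error("accept(): not yet implemented");
        }

        void dispatch() {
            // TODO: route to the owning subsystem once allocation is decided
        }
    };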

Of course, simple projects don’t need no stinkin’ multi-step progression toward source code. They can bypass the Capability, Function, and Use Case levels of abstraction entirely and employ only fine-grained “shalls” or User Stories as the method of specification.

One Step To Source Code

On the simplest of projects, you can even go directly from thoughts in your head to code:

Thoughts To Source Code

The purpose of this post is to assert that there is no one and only “right” path in moving from requirements to code. The “heaviness” of the path you decide to take should match the size, criticality, and complexity of the system you’ll be building. The greater the mismatch, the greater the waste of time and effort.

Meet RAID And DIRA

November 29, 2014

If you’re a C++ programmer, you’ve surely written code in accordance with the RAII (Resource Acquisition Is Initialization) idiom. Inspired by the RAII acronym, BD00 presents the RAID idiom: Requirements Allocation Is Design…

RAID
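For anyone who hasn’t bumped into RAII before, a minimal sketch: the resource is acquired in the constructor and released in the destructor, so cleanup rides along automatically whenever the owning object goes out of scope.

    #include <cstdio>
    #include <stdexcept>

    // Minimal RAII: the FILE* is acquired on construction and
    // unconditionally released on destruction, even if an exception
    // unwinds the stack.
    class File {
    public:
        explicit File(const char* path) : fp_(std::fopen(path, "r")) {
            if (!fp_) throw std::runtime_error("open failed");
        }
        ~File() { std::fclose(fp_); }

        File(const File&) = delete;             // exactly one owner per resource
        File& operator=(const File&) = delete;

        std::FILE* get() const { return fp_; }

    private:
        std::FILE* fp_;
    };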

In order to allocate requirements to a design, you must have a design in mind that you think satisfies those requirements. Circularly speaking, in order to create a design (like the one above), you must have a set of requirements in mind to fuel your design process. Thus, RAID == DIRA (Design Is Requirements Allocation).

RAID DIRA
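In code-ish terms, an allocation is just a mapping from design elements to the requirements they’re on the hook for, and you can’t populate that mapping until both sides exist. A tiny, purely hypothetical sketch (the element names and requirement IDs are made up):

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical sketch: RAID/DIRA as a traceability table. Each design
    // element (subsystem, module, class...) maps to the requirement IDs
    // allocated to it. You need a candidate design to supply the keys and
    // a requirements set to supply the values; hence the circularity.
    using ReqId = std::string;
    using DesignElement = std::string;

    const std::map<DesignElement, std::vector<ReqId>> allocation = {
        {"TrackingSubsystem", {"REQ-101", "REQ-102"}},
        {"DisplayModule",     {"REQ-203"}},
    };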

Functional Allocation III

The figure below shows the movement from the abstract to the concrete through a nested “allocation” process designed for big and complex products (thus, hackers and duct-tapers need not apply). “Shalls” are allocated to features, which are allocated to functions, which are allocated to subsystems, which are allocated to hardware and software modules. Since allocation is labor intensive, which takes time, which takes money, are all four levels of allocation required? Can’t we just skip all of the intermediate levels, allocate “shalls” directly to HW/SW modules, and start iteratively building the system pronto? Hmmm. The answer is subjective and, in addition to product size/complexity, it depends on many corpo-specific socio-technical factors. There is no pat answer.

Levels Of Abstraction
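One way to picture the four levels of allocation just described is as a traceability record per “shall”; the sketch below is purely illustrative (the field names are mine, not anyone’s standard):

    #include <string>

    // Purely illustrative: one traceability record per "shall", capturing
    // the feature it was allocated to, the function realizing that feature,
    // the owning subsystem, and the implementing HW or SW module.
    struct ShallAllocation {
        std::string shallId;      // e.g. "REQ-042"
        std::string feature;
        std::string function;
        std::string subsystem;
        std::string module;
    };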

The figure below shows three variations derived from the hypothetical six-level allocation reference template. In the left-most process, two or more product subsystems that will (hopefully) satisfy a pre-existing set of “shalls” are conceived. At the end of the allocation process, the specification artifacts are released to teams of people “allocated” to each subsystem, and the design/construction process continues. In the middle process, each SW module, along with the set of “shalls” and functions that it is required to implement, is allocated to a SW developer. In the right-most process, which, in addition to custom software, requires the creation of custom HW, the specification artifacts are allocated to various SW and HW engineers. (Since multiple individuals are involved in the development of the product, the definition of all entity-to-entity interfaces is at least as crucial as the identification and definition of the entities themselves, but that is a subject for a future post.)

Allocation Variations

Which process variation, if any, is “best”? Again, the number of unquantifiable socio-technical factors involved can make the decision daunting. The sad part is, in order to avoid thinking and making a conscious decision, most corpo orgs ram the full six-level process down their people’s throats no matter what. To pour salt on the wound, management, which is always on the outside looking in toward the development teams, piles on all kinds of non-value-added, byzantine approval procedures while simultaneously pounding the team(s) with schedule pressure to complete. If you’re on the inside, directly creating or indirectly contributing to value creation, it’s not pretty. Bummer.
