Archive

Posts Tagged ‘systems engineering’

Understood, Manageable, And Known.

February 5, 2014

Our sophistication continuously puts us ahead of ourselves, creating things we are less and less capable of understanding – Nassim Taleb

It’s like clockwork. At some time downstream, just about every major weapons system development program runs into cost, schedule, and/or technical performance problems – often all three at once (D’oh!).

Despite what their champions espouse, agile and/or level 3+ CMMI-compliant processes are no match for these beasts. We simply don’t have the know-how (yet?) to build them efficiently. The scope and complexity of these Leviathans overwhelm the puny methods used to build them. Pithy agile tenets like “no specialists”, “co-located team”, and “no titles – we’re all developers” are simply non-applicable in huge, multi-org programs with hundreds of players.

[Image: puny processes]

Being a student of big, distributed system development, I try to learn as much about the subject as I can from books, articles, news reports, and personal experience. Thanks to Twitter mate @Riczwest, the most recent troubled weapons system program that I’ve discovered is the F-35 stealth fighter jet. On one side, an independent, outside-of-the-system evaluator concludes:

The latest report by the Pentagon’s chief weapons tester, Michael Gilmore, provides a detailed critique of the F-35’s technical challenges, and focuses heavily on what it calls the “unacceptable” performance of the plane’s software… the aircraft is proving less reliable and harder to maintain than expected, and remains vulnerable to propellant fires sparked by missile strikes.

On the other side of the fence, we have the $392 billion program’s funding steward (the Air Force) and contractor (Lockheed Martin) performing damage control via the classic “we’ve got it under control” spiel:

Of course, we recognize risks still exist in the program, but they are understood and manageable. – Air Force Lieutenant General Chris Bogdan, the Pentagon’s F-35 program chief

The challenges identified are known items and the normal discoveries found in a test program of this size and complexity. – Lockheed spokesman Michael Rein

All of the risks and challenges are understood, manageable, known? WTF! Well, at least Mr. Rein got the “normal” part right.

In spite of all the drama that takes place throughout a large system development program, many (most?) of these big-ticket systems do eventually get deployed, and they end up serving their users well. It simply takes way more sweat, time, and money than originally estimated to get the job done.

[Image: Drama]

Three Degrees Of Distribution

December 19, 2012

Behold the un-credentialed and un-esteemed BD00’s taxonomy of software-intensive system complexity:

[Figure: Three Degrees]

How many “M”s does the system you’re working on have? If the answer is three, should it really be two? If the answer is two, should it really be one? How do you know what number of “M”s your system design should have? When tacking on another “M” to your system design because you “have to”, what newly emergent property is the largest complexity magnifier?
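
As a concrete illustration only, here is a minimal C++ sketch that assumes the three “M”s stand for multi-threaded, multi-process, and multi-machine (an assumption on my part, since the figure’s legend isn’t reproduced here) and simply counts how many of them a design claims.

```cpp
// A minimal sketch, assuming the three "M"s are multi-threaded, multi-process,
// and multi-machine (an assumption; the original figure's legend isn't shown here).
#include <iostream>
#include <string>

struct SystemDesign {
    std::string name;
    bool multiThreaded = false;  // concurrency within a single process
    bool multiProcess  = false;  // multiple cooperating processes on one machine
    bool multiMachine  = false;  // processes distributed across machines

    int degreesOfDistribution() const {
        return (multiThreaded ? 1 : 0) + (multiProcess ? 1 : 0) + (multiMachine ? 1 : 0);
    }
};

int main() {
    // A hypothetical design claiming two "M"s.
    SystemDesign design{"hypothetical sensor-fusion node", true, true, false};
    std::cout << design.name << " has " << design.degreesOfDistribution()
              << " \"M\"(s). Should it really have "
              << design.degreesOfDistribution() - 1 << "?\n";
}
```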

Now, replace the inorganic legend at the top of the page with the following organic one and contemplate how the complexity and “success” curves are affected:

[Figure: Social Legend]

Too Detailed, Too Fast

October 16, 2012

One of the dangers system designers continually face is diving into too much detail, too fast. Given a system problem statement, the tendency is to jump right into fine-grained function (feature) definitions so that the implementation can get staffed and started pronto. Chop-chop, aren’t you done yet?

The danger in bypassing a multi-leveled analysis/synthesis effort and directly mapping the problem elements into concrete, buildable functions is that important, unseen, higher-level relationships can be missed – and you may pay the price for your haste with massive downstream rework and integration problems. Of course, if the system you’re building is simple enough, or you’re adding on to an existing system, then the skip-level approach may work out just fine.

Dummkopf

October 2, 2012

I received this e-mail ad yesterday and felt a burning need (which will not be articulated) to post it on this bogus blog:

Shhhhhh! Don’t tell anyone, but I’m a fan of the “Dummies” series because… I am a freakin’ Dummkopf. The first book I ever studied on C++ was “C++ For Dummies”. Double-Shhh!!!!

If you stumble across a “Software Engineering For Dummies” book, please notify me. Even better, keep a lookout for a “Software Engineering For Idiots” book.

Note: Since the links in jpg images don’t work, here’s the link you want to click, if you do indeed want to click it: Systems Engineering for Dummies – Effective Application Lifecycle Management Solutions Lab. Don’t worry, I won’t tell anyone. Mum’s the word.

Bounded Solution Spaces

September 29, 2012

As a result of an interesting e-conversation with my friend Charlie Alfred, I concocted this bogus graphic:

Given a set of requirements, the problem space is a bounded (perhaps wrongly, incompletely, or ambiguously) starting point for a solution search. I think an architect’s job is to conjure up one or more solutions that ideally engulf the entire problem space. However, the world being as messy as it is, different candidates will solve different parts of the problem – each with its own benefits and detriments. Either way, each solution candidate bounds a solution space, no?
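
To make the “engulfing” idea slightly more concrete, here is a minimal C++ sketch (my own illustration, not a formalism from that e-conversation) that models the problem space as a set of requirement IDs and each solution candidate as the subset of requirements it covers; whatever a candidate leaves uncovered marks the boundary of its solution space. All names and requirement IDs are invented.

```cpp
// A minimal sketch: the problem space as a set of requirement IDs and each
// solution candidate as the subset it covers. Names and IDs are invented.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <string>
#include <vector>

struct Candidate {
    std::string name;
    std::set<std::string> covers;  // requirements this candidate satisfies
};

int main() {
    const std::set<std::string> problemSpace{"R1", "R2", "R3", "R4", "R5"};

    const std::vector<Candidate> candidates{
        {"Candidate A", {"R1", "R2", "R3"}},
        {"Candidate B", {"R2", "R3", "R4", "R5"}},
    };

    for (const auto& c : candidates) {
        std::set<std::string> uncovered;
        std::set_difference(problemSpace.begin(), problemSpace.end(),
                            c.covers.begin(), c.covers.end(),
                            std::inserter(uncovered, uncovered.begin()));

        std::cout << c.name << " bounds a solution space covering "
                  << c.covers.size() << "/" << problemSpace.size()
                  << " requirements; uncovered:";
        for (const auto& r : uncovered) std::cout << ' ' << r;
        std::cout << '\n';
    }
}
```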

Rules Of Thumb

September 17, 2012

Look what I dug up in my archives:

Hopefully, this list may provide some aid to at least one poor, struggling soul out in the ether.

Idealized Design

July 4, 2012

Russell Ackoff describes the process of “Idealized Design” as follows:

In this process those who formulate the vision begin by assuming that the system being redesigned was completely destroyed last night, but its environment remains exactly as it was. Then they try to design that system with which they would replace the existing system right now if they were free to replace it with any system they wanted.

The basis for this process lies in the answer to two questions. First, if one does not know what one would do if one could do whatever one wanted without constraint, how can one possibly know what to do when there are constraints? Second, if one does not know what one wants right now how can one possibly know what they will want in the future?

An idealized redesign is subject to two constraints (technological feasibility and operational viability) and one design principle: the redesigned system must be able to learn and adapt rapidly and effectively.

So, are you ready to blow up your system? Nah, ’tis better to keep the unfathomable, inefficient, ineffective beast (under continuous assault from the second law of thermo) alive and unwell. It’s easier and less risky and requires no work. And hey, we can still have fun complaining about it.

[Image: Not Applicable?]


D4P4D

May 11, 2012

I just received two copies of William Livingston’s “Design For Prevention For Dummies” (D4P4D) gratis from the author himself. It’s actually section 7 of the “Non-Dummies” version of the book. With the addition of “For Dummies” to the title, I think it was written explicitly for me. D’oh!

The D4P is a mind-bending, control-theory-based methodology (think feedback loops) for problem prevention in the midst of powerful, natural institutional forces that depend on problems manifesting and persisting in order to keep the institution alive.
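
I can’t compress the D4P itself into code (I’m still plodding through it), but since the method is framed in terms of feedback loops, here is a generic negative-feedback (proportional control) sketch. It is plain textbook control theory, emphatically not Livingston’s method, and the gain and setpoint values are made up.

```cpp
// A generic negative-feedback (proportional control) loop. This is plain
// textbook control theory, NOT the D4P methodology; gains and setpoints
// are made up for illustration.
#include <iostream>

int main() {
    const double setpoint = 100.0;  // desired state of the process
    const double gain     = 0.4;    // proportional gain (illustrative value)
    double state          = 20.0;   // current state of the process

    for (int step = 1; step <= 10; ++step) {
        const double error      = setpoint - state;  // detect the gap
        const double correction = gain * error;      // act in proportion to the gap
        state += correction;                         // the process responds
        std::cout << "step " << step << ": state = " << state << '\n';
    }
}
```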

Mr. Livingston is an elegant, Shakespearean-type writer who’s fun to read but tough as hell to understand. I’ve enjoyed consuming his work for over 25 years, but I still can’t understand or apply much of what he says – if anything!

As I slowly plod through the richly dense tome, I’ll try to write more posts that disclose the details of the D4P process. If you don’t see anything more about the D4P from me in the future, then you can assume that I’ve drowned in an ocean of confusion.

Human And Automated Controllers

January 15, 2012

Note: The figures that follow were adapted from Nancy Leveson’s “Engineering A Safer World”.

In the good ole days, before the integration of fast (but dumbass) computers into controlled-process systems, humans had no choice but to exercise direct control over processes that produced some kind of needed/wanted results. During operation, one or more human controllers would keep the “controlled process” on track via the following monitor-decide-execute cycle:

  • monitor the values of key state variables (via gauges, meters, speakers, etc)
  • decide what actions, if any, to take to maintain the system in a productive state
  • execute those actions (open/close valves, turn cranks, press buttons, flip switches, etc)

As the figure below shows, in order to generate effective control actions, the human controller had to maintain an understanding of the process goals and operation in a mental model stored in his/her head.
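
To make the monitor-decide-execute cycle concrete, here is a minimal C++ sketch of a human controller keeping a hypothetical tank level within bounds. The process, names, and thresholds are all invented for illustration; this is my rendering, not a figure or notation from Leveson’s book.

```cpp
// A minimal sketch of the human monitor-decide-execute cycle. The controlled
// process (a tank level), the bounds, and the action effects are all invented.
#include <iostream>

struct ProcessState {
    double tankLevel;  // a key state variable, read off a gauge
};

enum class Action { OpenValve, CloseValve, DoNothing };

// The controller's mental model: keep the level between 40 and 60.
Action decide(const ProcessState& observed) {
    if (observed.tankLevel < 40.0) return Action::OpenValve;
    if (observed.tankLevel > 60.0) return Action::CloseValve;
    return Action::DoNothing;
}

// Execute the chosen action on the physical process.
void execute(Action a, ProcessState& process) {
    if (a == Action::OpenValve)  process.tankLevel += 5.0;  // inflow raises the level
    if (a == Action::CloseValve) process.tankLevel -= 5.0;  // draining lowers it
}

int main() {
    ProcessState process{30.0};
    for (int cycle = 1; cycle <= 8; ++cycle) {
        const ProcessState observed = process;   // monitor (gauges, meters, ...)
        const Action action = decide(observed);  // decide what, if anything, to do
        execute(action, process);                // execute (valves, switches, ...)
        std::cout << "cycle " << cycle << ": level = " << process.tankLevel << '\n';
    }
}
```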

With the advent of computers, the complexity of systems that could be, were, and continue to be built has skyrocketed. Because of the rise in the cognitive burden imposed on humans to effectively control these newfangled systems, computers were inserted into the control loop to (supposedly) reduce cognitive demands on the human controller, increase the speed of taking action, and reduce errors in control judgment.

The figure below shows the insertion of a computer into the control loop. Notice that the human is now one step removed from the value producing process.

Also note that the human overseer must now cognitively maintain two mental models of operation in his/her head: one for the physical process and one for the (supposedly) subservient automated controller:

Assuming the automated controller relieves the human controller of many mundane and high-speed monitoring/control functions, the reduction in overall complexity of the human’s mental process model may more than offset the added burden of maintaining and understanding a second mental model of how the automated controller works.

Since computers are nothing more than fast idiots with fixed control algorithms designed by fallible human experts (who nonetheless often think they’re infallible in their domain), they can’t issue effective control actions in disturbance situations that were unforeseen during design. Also, due to design flaws in the hardware or software, automated controllers may present an inaccurate picture of the process state, or fail outright while the controlled process keeps merrily chugging along producing results.

To compensate for these potentially dangerous shortfalls, the safest system designs provide backup state monitoring sensors and control actuators that give the human controller the option to override the “fast idiot“. The human controller relies primarily on the interface provided by the computer for monitoring/control, and secondarily on the direct interface couplings to the process.
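
Here is a hedged C++ sketch of that arrangement: an automated controller whose picture of the process is skewed by a faulty sensor, a backup sensing path coupled directly to the process, and a human override that kicks in when the computer’s picture diverges from the backup readings. The dynamics, thresholds, and names are all invented for illustration.

```cpp
// A hedged sketch: an automated controller with a skewed picture of the
// process, a backup sensing path coupled directly to the process, and a
// human override of the "fast idiot" when the two diverge. All names,
// thresholds, and dynamics are invented for illustration.
#include <cmath>
#include <iostream>

struct Process {
    double level = 50.0;  // actual state of the value-producing process
};

// What the computer believes the level is; a fixed bias stands in for a
// design flaw that skews its picture of the process state.
double computerReading(const Process& p, double sensorBias) {
    return p.level + sensorBias;
}

// Backup instrumentation coupled directly to the process.
double backupReading(const Process& p) { return p.level; }

// The automated controller's fixed algorithm: regulate toward a setpoint of 50.
double automatedCommand(double believedLevel) {
    return 0.5 * (50.0 - believedLevel);
}

int main() {
    Process process;
    const double sensorBias = 20.0;  // the computer's picture is off by 20 units

    for (int cycle = 1; cycle <= 10; ++cycle) {
        const double shownToHuman = computerReading(process, sensorBias);
        const double direct       = backupReading(process);

        double command = automatedCommand(shownToHuman);

        // The human overseer cross-checks the computer's picture against the
        // backup sensors and overrides the automated controller if they diverge.
        if (std::abs(shownToHuman - direct) > 10.0) {
            command = 0.5 * (50.0 - direct);  // manual control on direct readings
            std::cout << "cycle " << cycle << ": human override engaged\n";
        }

        process.level += command + 1.0;  // "+ 1.0" models a small external disturbance
        std::cout << "cycle " << cycle << ": actual level = " << process.level << '\n';
    }
}
```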
