Archive

Posts Tagged ‘systems engineering’

Rules Of Thumb

September 17, 2012 Leave a comment

Look what I dug up in my archives:

Hopefully, this list will provide some aid to at least one poor, struggling soul out in the ether.

Idealized Design

July 4, 2012 2 comments

Russell Ackoff describes the process of “Idealized Design” as follows:

In this process those who formulate the vision begin by assuming that the system being redesigned was completely destroyed last night, but its environment remains exactly as it was. Then they try to design that system with which they would replace the existing system right now if they were free to replace it with any system they wanted.

The basis for this process lies in the answers to two questions. First, if one does not know what one would do if one could do whatever one wanted without constraint, how can one possibly know what to do when there are constraints? Second, if one does not know what one wants right now, how can one possibly know what one will want in the future?

An idealized redesign is subject to two constraints and one design principle: the constraints are technological feasibility and operational viability, and the design principle is that the redesigned system must be able to learn and adapt rapidly and effectively.

So, are you ready to blow up your system? Nah, 'tis better to keep the unfathomable, inefficient, ineffective beast (under continuous assault from the second law of thermodynamics) alive and unwell. It's easier, less risky, and requires no work. And hey, we can still have fun complaining about it.

Not Applicable?

D4P4D

May 11, 2012 6 comments

I just received two copies of William Livingston’s “Design For Prevention For Dummies” (D4P4D) gratis from the author himself. It’s actually section 7 of the “Non-Dummies” version of the book. With the addition of “For Dummies” to the title, I think it was written explicitly for me. D’oh!

The D4P is a mind bending, control theory based methodology (think feedback loops) for problem prevention in the midst of powerful, natural institutional forces that depend on problem manifestation and continued presence in order to keep the institution alive.

Mr. Livingston is an elegant, Shakespearean writer who’s fun to read but tough as hell to understand. I’ve enjoyed consuming his work for over 25 years, but I still can’t understand or apply much of what he says, if anything!

As I slowly plod through the richly dense tome, I’ll try to write more posts that disclose the details of the D4P process. If you don’t see anything more about the D4P from me in the future, then you can assume that I’ve drowned in an ocean of confusion.

Human And Automated Controllers

January 15, 2012 Leave a comment

Note: The figures that follow were adapted from Nancy Leveson’s “Engineering A Safer World”.

In the good ole days, before the integration of fast (but dumbass) computers into controlled-process systems, humans had no choice but to exercise direct control over processes that produced some kind of needed/wanted results. During operation, one or more human controllers would keep the “controlled process” on track via the following monitor-decide-execute cycle:

  • monitor the values of key state variables (via gauges, meters, speakers, etc)
  • decide what actions, if any, to take to maintain the system in a productive state
  • execute those actions (open/close valves, turn cranks, press buttons, flip switches, etc)
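
Below is a minimal Python sketch of that monitor-decide-execute cycle. The tank-level scenario, thresholds, and function names are my own illustrative assumptions, not anything taken from Leveson's book or a particular system.

```python
# A minimal sketch (hypothetical tank-level example) of the monitor-decide-execute
# cycle a human controller runs against a physical process.

import random
import time

TARGET_LEVEL = 50.0   # desired value of the key state variable
DEADBAND = 5.0        # tolerance before the controller bothers to act


def read_level_gauge() -> float:
    """Monitor: stand-in for reading a gauge on the physical process."""
    return TARGET_LEVEL + random.uniform(-15.0, 15.0)


def adjust_inlet_valve(amount: float) -> None:
    """Execute: stand-in for turning a crank / opening or closing a valve."""
    print(f"valve adjusted by {amount:+.1f}")


def control_cycle() -> None:
    level = read_level_gauge()              # 1. monitor key state variables
    error = TARGET_LEVEL - level
    if abs(error) > DEADBAND:               # 2. decide whether action is needed
        adjust_inlet_valve(error * 0.5)     # 3. execute the corrective action
    else:
        print("within tolerance; no action taken")


if __name__ == "__main__":
    for _ in range(5):
        control_cycle()
        time.sleep(0.1)                     # the human's (much slower) loop rate
```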

As the figure below shows, in order to generate effective control actions, the human controller had to maintain an understanding of the process goals and operation in a mental model stored in his/her head.

With the advent of computers, the complexity of systems that could be, were, and continue to be built has skyrocketed. Because of the rise in the cognitive burden imposed on humans to effectively control these newfangled systems, computers were inserted into the control loop to: (supposedly) reduce cognitive demands on the human controller, increase the speed of taking action, and reduce errors in control judgment.

The figure below shows the insertion of a computer into the control loop. Notice that the human is now one step removed from the value producing process.

Also note that the human overseer must now cognitively maintain two mental models of operation in his/her head: one for the physical process and one for the (supposedly) subservient automated controller:

Assuming that the automated controller unburdens the human controller of many mundane and high-speed monitoring/control functions, the reduction in the complexity of the human's mental model of the process may more than offset the added requirement to maintain and understand a second mental model of how the automated controller works.

Since computers are nothing more than fast idiots with fixed control algorithms designed by fallible human experts (who nonetheless often think they’re infallible in their domain), they can’t issue effective control actions in disturbance situations that were unforeseen during design. Also, due to design flaws in the hardware or software, automated controllers may present an inaccurate picture of the process state, or fail outright while the controlled process keeps merrily chugging along producing results.

To compensate for these potentially dangerous shortfalls, the safest system designs provide backup state monitoring sensors and control actuators that give the human controller the option to override the “fast idiot”. The human controller relies primarily on the interface provided by the computer for monitoring/control, and secondarily on the direct interface couplings to the process.
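
For concreteness, here's a minimal Python sketch of that arrangement: an automated controller computes routine control actions from its own sensor reading, while the human overseer cross-checks against an independent backup reading and overrides the "fast idiot" when the two pictures of the process disagree. All names, numbers, and the simple proportional rule are my own assumptions for illustration.

```python
# A minimal sketch (all names hypothetical) of an automated controller in the loop
# with a human overseer who can fall back to backup instrumentation and override it.

from dataclasses import dataclass

SETPOINT = 50.0


@dataclass
class ProcessState:
    primary_reading: float   # what the computer's own sensor reports
    backup_reading: float    # what the independent backup gauge reports


def automated_controller(reading: float) -> float:
    """Fixed control algorithm: simple proportional action toward the setpoint."""
    return 0.5 * (SETPOINT - reading)


def human_overseer(state: ProcessState, proposed_action: float) -> float:
    """The human compares the computer's picture of the process against the
    backup instrumentation and overrides when the two disagree badly."""
    disagreement = abs(state.primary_reading - state.backup_reading)
    if disagreement > 10.0:
        # The automated controller may be working from a faulty picture of the
        # process; fall back to manual control using the backup reading.
        print("override: controlling manually from backup sensor")
        return 0.5 * (SETPOINT - state.backup_reading)
    return proposed_action   # normally, defer to the automated controller


if __name__ == "__main__":
    # Example: the computer's sensor has drifted badly while the process is fine.
    state = ProcessState(primary_reading=90.0, backup_reading=52.0)
    action = human_overseer(state, automated_controller(state.primary_reading))
    print(f"control action applied to process: {action:+.1f}")
```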

B and S == BS

December 6, 2011 Leave a comment

About a year ago, after a recommendation from management guru Tom Peters, I read Sidney Dekker’s “Just Culture”. I mention this because Nancy Leveson dedicates a chapter to the concept of a “just culture” in her upcoming book “Engineering A Safer World”.

The figure below shows a simple view of the elements and relationships in an example 4-level “safety control structure”. In unjust cultures, when a costly accident occurs, the actions of the elements lowest on the totem pole, the operator(s) and the physical system, are analyzed to death and the “causes” of the accident are determined.

After the accident investigation is “done”, the following sequence of actions usually occurs:

  • Blame and Shame (BS!) are showered upon the operator(s).
  • Recommendations for “change” are made to operator training, operational procedures, and the physical system design.
  • Business goes back to usual.
  • Rinse and repeat.

Note that the level 2 and level 3 elements usually go uninvestigated – even though they are integral, influential forces that affect system operation. So, why do you think that is? Could it be that when an accident occurs, the level 2 and/or level 3 participants have the power to, and do, assume the role of investigator? Could it be that the level 2 and/or level 3 participants, when they don’t/can’t assume the role of investigator, become the “sugar daddies” to a hired band of independent, external investigators?

The Law Of Diminishing Returns…

November 11, 2011 Leave a comment