Posts Tagged ‘systems’

Deterministic, Animated, Social

September 5, 2010

Unless you object, of course, a system can be defined as an aggregation of interacting parts built by a designer for a purpose. Uber systems thinker Russell Ackoff classified systems into three archetypes: deterministic, animated, and social. The main criterion Ackoff uses for mapping a system into its type is purpose: the purpose of the containing whole and the purpose(s) of the whole’s parts.

The figure below attempts to put the Ackoff “system of system types” :) into graphic form.

Deterministic Systems

In a deterministic system like an automobile, neither the whole nor its parts have self-purposes because there is no “self”. Both the whole and its parts are inanimate objects with fixed machine behavior designed and assembled by a purposeful external entity, like an engineering team. Deterministic systems are designed by men to serve specific, targeted purposes of men. The variety of behavior exhibited by deterministic systems, while possibly complex in an absolute sense, is dwarfed by the variety of behaviors that animated or social systems can manifest.

Animated Systems

In an animated system, the individual parts don’t have isolated purposes of their own, but the containing whole does. The parts and the whole are inseparably entangled in that the parts require services from the whole and the whole requires services from the parts in order to survive. The non-linear product (not sum) of the interactions of the parts manifests as the externally observable behavior of the whole. Any specific behavior of the whole cannot be traced to the behavior of a single specific part. The human being is the ultimate example of an animated system. The heart, lungs, liver, an arm, or a leg have no purposes of their own outside of the human body. The whole body, with the aid of the product of the interactions of its parts, produces a virtually infinite range of behaviors. Without some parts, the whole cannot survive (loss of a functioning heart). Without other parts, the behavior of the whole becomes constrained (loss of a functioning leg).

Social Systems

In a social system, both the whole and each part have a purpose. The larger the system, the greater the number and variety of the purposes. If they aren’t aligned to some degree, the product of the purposes can cause a huge range of externally observed behaviors to be manifest. When the self-purposes of the parts are in total alignment with the whole’s, the system’s behavior exhibits less variety and greater efficiency at fulfilling the whole’s purpose(s). Both internal and external forces continually pressure the whole and its parts toward misalignment. Only those designers who can keep the parts’ purposes aligned with the whole’s purpose have any chance of getting the whole to fulfill its purpose.

System And Model Mismatch

Ackoff states that modeling a system of one type as if it were a different type, for the purpose of improving or replacing it, is the cause of epic failures. For example, attempting to model a social system as a deterministic system with an underlying mathematical model causes erroneous actions and decisions to be made by ignoring the purposes of the parts. Human purposes cannot be modeled with equations. Likewise, modeling a social system as an animated system also ignores the purposes of the many parts. Both mismatches assume the purposes of the parts align with each other and with the purpose of the whole. Bad assumption, no?

Go, Go Go!

August 6, 2010

Rob Pike is the Google dude who co-created the Go programming language, and he seems to be on a PR blitz to promote his new language. In this interview, “Does the world need another programming language?”, Mr. Pike says:

…the languages in common use today don’t seem to be answering the questions that people want answered. There are niches for new languages in areas that are not well-served by Java, C, C++, JavaScript, or even Python. – Rob Pike

In Making It Big in Software, UML co-creator Grady Booch seems to disagree with Rob:

It’s much easier to predict the past than it is the future. If we look over the history of software engineering, it has been one of growing levels of abstraction—and, thus, it’s reasonable to presume that the future will entail rising levels of abstraction as well. We already see this with the advent of domain-specific frameworks and patterns. As for languages, I don’t see any new, interesting languages on the horizon that will achieve the penetration that any one of many contemporary languages has. I held high hopes for aspect-oriented programming, but that domain seems to have reached a plateau. There is tremendous need for better languages to support massive concurrency, but therein I don’t see any new, potentially dominant languages forthcoming. Rather, the action seems to be in the area of patterns (which raise the level of abstraction). – Grady Booch

I agree with Grady because abstraction is the best tool available to the human mind for managing the explosive growth in complexity that is occurring as we speak. What do you think?

Abstraction is selective ignorance – Andrew Koenig

Arbitrary Boundaries

[Figure: the generic template; my arbitrarily defined boundary; your arbitrarily defined boundary]

Processes, Threads, Cores, Processors, Nodes

May 27, 2010

Ahhhhh, the old days. Remember when the venerable CPU was just that, a CPU? No cores, no threads, no multi-CPU servers. The figure below shows a simple model of a modern day symmetric multi-processor, multi-core, multi-thread server. I concocted this model to help myself understand the technology better and thought I would share it.

The figure below shows a generic model of a multi-process, multi-threaded, distributed, real-time software application system. Note that even though they’re not shown in the diagram, thread-to-thread and process-to-process interfaces abound. There is no total independence, since the collection of running entities comprises an interconnected “system” designed for a purpose.
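As a toy illustration of those ever-present interfaces, here’s a minimal Python sketch (the stage names are made up; plain threads stand in for both threads and processes for brevity, though a multiprocessing.Queue would play the same role across a real process boundary):

```python
import queue
import threading

def run_pipeline(items):
    """Two pipeline stages wired together by queues; each queue is one of
    the thread-to-thread interfaces the text says 'abound'."""
    q1, q2 = queue.Queue(), queue.Queue()

    def stage_a():                      # producer stage
        for x in items:
            q1.put(x * 2)
        q1.put(None)                    # end-of-stream sentinel

    def stage_b():                      # transform stage
        while (x := q1.get()) is not None:
            q2.put(x + 1)
        q2.put(None)

    threads = [threading.Thread(target=stage_a),
               threading.Thread(target=stage_b)]
    for t in threads:
        t.start()
    out = []
    while (x := q2.get()) is not None:  # main thread consumes the result
        out.append(x)
    for t in threads:
        t.join()
    return out
```

Even in this trivial pipeline, neither stage is independent: stage_b starves without stage_a, and the whole thing is a small interconnected “system” designed for a purpose.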

Interesting challenges in big, distributed system design are:

  • Determining the number of hardware nodes (N_N) required to handle anticipated peak input loads without dropping data because of a lack of processing power.
  • The allocation of N_APP application processes to N_N nodes (when N_APP > N_N).
  • The dynamic scheduling and dispatching of software processes and threads to hardware processors, cores, and threads within a node.

The first two bullets above are under the full control of system designers, but not the third one. The integrated hardware/software figure below highlights the third bullet above. The vertical arrows don’t do justice to the software process-thread to hardware processor-core-thread scheduling challenge. Human control over these allocation activities is limited and subservient to the will of the particular operating system selected to run the application. In most cases, setting process and thread priorities is the closest the designer can come to controlling system run-time behavior and performance.
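The first two challenges, the ones designers do control, can be sketched in a few lines of Python. The capacity numbers and the 20% headroom figure below are made-up assumptions for illustration, not recommendations:

```python
import math

def nodes_needed(peak_msgs_per_s, node_capacity_msgs_per_s, headroom=0.2):
    """Smallest N_N whose derated capacity covers the anticipated peak load."""
    usable = node_capacity_msgs_per_s * (1.0 - headroom)  # keep some slack
    return math.ceil(peak_msgs_per_s / usable)

def allocate(app_processes, n_nodes):
    """Naive round-robin placement of N_APP processes onto N_N nodes."""
    nodes = [[] for _ in range(n_nodes)]
    for i, proc in enumerate(app_processes):
        nodes[i % n_nodes].append(proc)
    return nodes
```

For example, with a made-up peak of 50,000 msgs/s and a per-node capacity of 12,000 msgs/s, nodes_needed returns 6 (50,000 divided by a derated 9,600, rounded up). Real allocation would also weigh per-process load and interface affinity, which round-robin ignores.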

Ackoff On Systems Thinking

Russell Ackoff, bless his soul, was a rare, top-echelon systems thinker who successfully avoided being assimilated by the borg. Check out the master’s intro to systems thinking in the short series of videos below.

What do you think?

Viable, Vulnerable, Doomed

May 14, 2010

Unless an org is subsidized without regard to its performance (e.g. a government agency, a pure corpocracy overhead unit like HR), it must both explore and exploit to retain its existence. Leaders explore the unknown and managers exploit the known, so competence in both these areas is required for sustained viability.

Exploitation is characterized by linear thinking (projecting future trajectory solely based on past trajectory) and exploration is characterized by loop thinking. Since these two types of thinking are radically different and prestigious schools teach linear thinking exclusively, all unenlightened orgs have a dearth of loop thinkers. Sadly, the number of linear thinkers (knowers) increases and the number of loop thinkers (unknowers) decreases as the management chain is traversed upward. This is the case because linear thinkers and loop thinkers aren’t fond of one another and the linear thinkers usually run the show.

The figure below hypothesizes three types of org systems: vulnerable, doomed, and viable. The vulnerable org has a loop thinking exploration group but most new product/service ideas are “rejected” by the linear thinkers in charge because of the lack of ironclad business cases. Those new product/service ideas that do run the gauntlet and are successful in the marketplace inch the org forward and keep it from imploding. The doomed org has an exploration group, but it’s just for show. These orgs parade around their credentialed rocket scientists for the world to see and hear but nothing of exploitable substance ever comes out of the money sucking rathole. The viable org not only has a productive explorer group, but the top leadership group is comprised of loop thinkers too – D’oh! These extraordinary orgs (e.g. Apple, Netflix, Zappos, SAS) are perpetually ahead of their linear thinking peers and they continually (and unsurprisingly) kick ass in the marketplace.

What type of org are you a member of?

Closed Systems

In “Entropy Demystified”, Arieh Ben-Naim states an often forgotten fact about entropy:

The entropy of a system can decrease when that system is coupled with another system (e.g. a heating system connected with a thermostat). The law of ever increasing entropy is only valid in an isolated system.

In the figure below, the system on the left is coupled with the external environment and its members can use the coupling to learn how to adapt, dynamically self-organize, and arrest the growth in entropy that can destroy systems. In the isolated system on the right, which models a typical corpo mediocracy run by fat-headed and infallible BMs who ignore everything outside their cathedral walls, there is no possibility of learning – and entropy marches forward.
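Ben-Naim’s point can be illustrated with a back-of-the-envelope simulation. This is a crude, hypothetical thermostat model, not physics: the 0.5 feedback gain and the unit random-walk disturbance are made up for illustration:

```python
import random

def final_deviation(steps, coupled, setpoint=20.0, seed=1):
    """Random-walk 'temperature': every step injects disorder; a coupled
    system's thermostat feedback pulls it back toward the setpoint, while
    the isolated system has no such coupling and just wanders."""
    rng = random.Random(seed)
    temp = setpoint
    for _ in range(steps):
        temp += rng.uniform(-1.0, 1.0)       # disturbance each step
        if coupled:
            temp -= 0.5 * (temp - setpoint)  # feedback via the coupling
    return abs(temp - setpoint)
```

With the feedback on, the deviation stays bounded (below one unit with this gain) no matter how long the run; with it off, the temperature executes an unconstrained random walk, which is the “law of ever increasing entropy is only valid in an isolated system” idea in miniature.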

Jumpin’ Out

If you’re deeply embedded in a complex social system, it’s incredibly difficult to gain any insight into what the system you’re enmeshed in really does, or how it does what it does. Even though you’re an integral element of the system, your view is most likely obscured by your lack of interest in finding out or, more likely, by a lack of communication from the dudes in the penthouse. All you can see are trees. No forest, and no sun above the treetops.

A great way of “jumping out of the system” to get a better view and understanding is by modeling. By taking a stab at modeling the static structures and dynamic behaviors of the system you’re in, you can get a much better feel for what’s going on.

A formal language like UML or SysML paired with a good visual drawing tool like Visio can be a powerful tool set to help you gain perspective, but all you really need is a pencil and paper to start things off. Since your first few iterations will suck and be totally wrong, you’ll be throwing away lots of wood pulp if you don’t use an electronic tool. If you stick with it, you’ll acquire an understanding of what really happens in your system as opposed to what is espoused by those in charge.

All models are wrong, but some are useful. – George Box

Leverage Point

April 11, 2010

In this terrific systems article pointed out to me by Byron Davies, Donella Meadows states:

Physical structure is crucial in a system, but the leverage point is in proper design in the first place. After the structure is built, the leverage is in understanding its limitations and bottlenecks and refraining from fluctuations or expansions that strain its capacity.

The first sentence doesn’t tell me anything new, but the second one does. Many systems, especially big software systems foisted upon maintenance teams after they’re hatched to the customer, are not thoroughly understood by many, if any, of the original members of the development team. Upon release, the system “works” (and it may be stable). Hurray!

In the post-delivery phase, as the (always) unheralded maintenance team starts adding new features without understanding the system’s limitations and bottlenecks, the structural and behavioral integrity of the beast surely degrades over time. Scarily, the rate of degradation is not constant; it’s more akin to an exponential trajectory. It doesn’t matter how pristine the original design is, it will undoubtedly start its march toward becoming an unlovable “big ball of mud”.

So, how can one slow the rate of degradation in the integrity of a big system that will continuously be modified throughout its future lifetime? The answer is nothing profound and doesn’t require highly skilled specialists or consultants. It’s called PAYGO.

In the PAYGO process, a set of lightweight but understandable and useful multi-level information artifacts that record the essence of the system are developed and co-evolved with the system software. They must be lightweight so that they are easily constructible, navigable, and accessible. They must be useful or post-delivery builders won’t employ them as guidance and they’ll plow ahead without understanding the global ramifications of their local changes. They must be multi-level so that different stakeholder group types, not just builders, can understand them. They must be co-evolved so that they stay in sync with the real system and don’t devolve into an incorrect and useless heap of misguidance. Challenging, no?

Of course, if builders, and especially front line managers, don’t know how to, or don’t care to, follow a PAYGO-like process, then they deserve what they get. D’oh!

Underbid And Overpromise

April 7, 2010

As usual, I don’t get it. I don’t get the underbid-overpromise epidemic that’s been left untreated for ages. Proposal teams, under persistent pressure from executives to win contracts from customers, and isolated from hearing negative feedback by unintegrated program execution and product development teams, perpetually underbid on price/delivery and overpromise on product features and performance. This unquestioned underbid-overpromise industry worst practice has been entrenched in mediocracies since the dawn of the cover-your-ass, ironclad contract. The undiscussable but real tendency to, uh, “exaggerate” an org’s potential to deliver is baked into the system. That’s because competitors and customers are willing co-conspirators in this cycle of woe. The stalemate ensures that there’s no incentive for changing the busted system. As the saying goes: “if we can’t fix it, it ain’t broke!” D’oh!

If a company actually could take the high road and submit more realistic proposals to customers, they’d go out of business because non-individual customers (i.e. dysfunctional org bureaucracies where no one takes responsibility for outcomes) choose the lowest bidder 99.99999% of the time. I said “actually could” in the previous sentence because most companies “can’t”. That’s because most are so poorly managed that they don’t know what or where their real costs are. Unrecorded overtime, vague and generic work breakdown structures, inscrutable processes, and wrongly charged time all guarantee that the corpo head sheds don’t have a clue where their major cost sinks are. Bummer.

