Archive

Posts Tagged ‘systems’

Go, Go Go!

August 6, 2010

Rob Pike is the Google dude who co-created the Go programming language, and he seems to be on a PR blitz to promote his new language. In this interview, “Does the world need another programming language?”, Mr. Pike says:

…the languages in common use today don’t seem to be answering the questions that people want answered. There are niches for new languages in areas that are not well-served by Java, C, C++, JavaScript, or even Python. – Rob Pike

In Making It Big in Software, UML co-creator Grady Booch seems to disagree with Rob:

It’s much easier to predict the past than it is the future. If we look over the history of software engineering, it has been one of growing levels of abstraction—and, thus, it’s reasonable to presume that the future will entail rising levels of abstraction as well. We already see this with the advent of domain-specific frameworks and patterns. As for languages, I don’t see any new, interesting languages on the horizon that will achieve the penetration that any one of many contemporary languages has. I held high hopes for aspect-oriented programming, but that domain seems to have reached a plateau. There is tremendous need for better languages to support massive concurrency, but therein I don’t see any new, potentially dominant languages forthcoming. Rather, the action seems to be in the area of patterns (which raise the level of abstraction). – Grady Booch

I agree with Grady because abstraction is the best tool available to the human mind for managing the explosive growth in complexity that is occurring as we speak. What do you think?

Abstraction is selective ignorance – Andrew Koenig

Arbitrary Boundaries


[Figures: The Generic Template, My Arbitrarily Defined Boundary, Your Arbitrarily Defined Boundary]

Processes, Threads, Cores, Processors, Nodes

May 27, 2010

Ahhhhh, the old days. Remember when the venerable CPU was just that, a CPU? No cores, no threads, no multi-CPU servers. The figure below shows a simple model of a modern-day symmetric multi-processor, multi-core, multi-thread server. I concocted this model to help myself understand the technology better and thought I would share it.

The figure below shows a generic model of a multi-process, multi-threaded, distributed, real-time software application system. Note that even though they’re not shown in the diagram, thread-to-thread and process-to-process interfaces abound. There is no total independence since the collection of running entities comprises an interconnected “system” designed for a purpose.
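For readers who think better in code than in boxes and arrows, here is a minimal C++ sketch of the two hierarchies the figures depict. The type names and the 2-processor, 4-core, 2-way-SMT example are my own illustrative assumptions, not anything mandated by the model itself.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hardware hierarchy: node -> processors -> cores -> hardware threads.
struct HwThread  { std::size_t id; };
struct Core      { std::size_t id; std::vector<HwThread> hw_threads; };
struct Processor { std::size_t id; std::vector<Core> cores; };
struct Node      { std::string name; std::vector<Processor> processors; };

// Software hierarchy: application -> processes -> software threads.
struct SwThread    { std::string name; int priority; };
struct Process     { std::string name; std::vector<SwThread> threads; };
struct Application { std::string name; std::vector<Process> processes; };

// Example node: 2 processors x 4 cores x 2 hardware threads = 16 hardware threads.
inline Node make_example_node() {
    Node node{"node-0", {}};
    for (std::size_t p = 0; p < 2; ++p) {
        Processor proc{p, {}};
        for (std::size_t c = 0; c < 4; ++c) {
            proc.cores.push_back(Core{c, {HwThread{0}, HwThread{1}}});
        }
        node.processors.push_back(proc);
    }
    return node;
}
```

Even a toy model like this makes questions such as “how many software threads are we asking those 16 hardware threads to run?” answerable at a glance.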

Interesting challenges in big, distributed system design are:

  • Determining the number of hardware nodes (N_N) required to handle anticipated peak input loads without dropping data because of a lack of processing power.
  • The allocation of N_APP application processes to N_N nodes (when N_APP > N_N).
  • The dynamic scheduling and dispatching of software processes and threads to hardware processors, cores, and threads within a node.

The first two bullets above are under the full control of system designers, but not the third one. The integrated hardware/software figure below highlights the third bullet above. The vertical arrows don’t do justice to the software process-thread to hardware processor-core-thread scheduling challenge. Human control over these allocation activities is limited and subservient to the will of the particular operating system selected to run the application. In most cases, setting process and thread priorities is the closest the designer can come to controlling system run-time behavior and performance.
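To make that last point concrete, here is a minimal sketch of roughly all the levers a designer gets: query the node’s hardware concurrency and nudge a thread’s scheduling policy and priority, then let the OS do the actual processor-core-thread dispatching. It assumes a POSIX platform with C++11 or later; SCHED_FIFO and the priority value of 10 are illustrative choices and typically require elevated privileges.

```cpp
#include <iostream>
#include <pthread.h>
#include <sched.h>
#include <thread>

void worker() { /* application work goes here */ }

int main() {
    // How many hardware threads (processors x cores x SMT) this node offers.
    std::cout << "hardware threads: "
              << std::thread::hardware_concurrency() << '\n';

    std::thread t(worker);

    // Ask the OS for a fixed-priority, real-time policy for this thread.
    // The OS scheduler still decides which core/hardware thread runs it.
    sched_param sp{};
    sp.sched_priority = 10;  // illustrative value, platform- and policy-dependent
    if (pthread_setschedparam(t.native_handle(), SCHED_FIFO, &sp) != 0) {
        std::cerr << "could not set thread priority (insufficient privileges?)\n";
    }

    t.join();
    return 0;
}
```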

Ackoff On Systems Thinking


Russell Ackoff, bless his soul, was a rare, top-echelon systems thinker who successfully avoided being assimilated by the Borg. Check out the master’s intro to systems thinking in the short series of videos below.

What do you think?

Viable, Vulnerable, Doomed

May 14, 2010

Unless an org is subsidized without regard to its performance (e.g. a government agency, a pure corpocracy overhead unit like HR), it must both explore and exploit to retain its existence. Leaders explore the unknown and managers exploit the known, so competence in both these areas is required for sustained viability.

Exploitation is characterized by linear thinking (projecting future trajectory solely based on past trajectory) and exploration is characterized by loop thinking. Since these two types of thinking are radically different and prestigious schools teach linear thinking exclusively, all unenlightened orgs have a dearth of loop thinkers. Sadly, the number of linear thinkers (knowers) increases and the number of loop thinkers (unknowers) decreases as the management chain is traversed upward. This is the case because linear thinkers and loop thinkers aren’t fond of one another and the linear thinkers usually run the show.

The figure below hypothesizes three types of org systems: vulnerable, doomed, and viable. The vulnerable org has a loop thinking exploration group but most new product/service ideas are “rejected” by the linear thinkers in charge because of the lack of ironclad business cases. Those new product/service ideas that do run the gauntlet and are successful in the marketplace inch the org forward and keep it from imploding. The doomed org has an exploration group, but it’s just for show. These orgs parade around their credentialed rocket scientists for the world to see and hear but nothing of exploitable substance ever comes out of the money sucking rathole. The viable org not only has a productive explorer group, but the top leadership group is comprised of loop thinkers too – D’oh! These extraordinary orgs (e.g. Apple, Netflix, Zappos, SAS) are perpetually ahead of their linear thinking peers and they continually (and unsurprisingly) kick ass in the marketplace.

What type of org are you a member of?

Closed Systems


In “Entropy Demystified”, Arieh Ben-Naim states an often forgotten fact about entropy:

The entropy of a system can decrease when that system is coupled with another system (e.g. a heating system connected with a thermostat). The law of ever increasing entropy is only valid in an isolated system.

In the figure below, the system on the left is coupled with the external environment and its members can use the coupling to learn how to adapt, dynamically self-organize, and arrest the growth in entropy that can destroy systems. In the isolated system on the right, which models a typical corpo mediocracy run by fat-headed and infallible BMs who ignore everything outside their cathedral walls, there is no possibility of learning – and entropy marches forward.
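In second-law terms (my notation, not Ben-Naim’s), the quantity that can never decrease is the entropy of the system plus its environment; the coupled system on the left can shed entropy as long as the environment absorbs at least as much, while the isolated system on the right has no such escape hatch:

```latex
% Coupled system: only the total is constrained.
\Delta S_{\text{system}} + \Delta S_{\text{environment}} \ge 0
\;\Longrightarrow\;
\Delta S_{\text{system}} < 0 \text{ is possible whenever }
\Delta S_{\text{environment}} \ge -\,\Delta S_{\text{system}}

% Isolated system: no coupling, so its own entropy cannot decrease.
\Delta S_{\text{system}} \ge 0
```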

Jumpin’ Out


If you’re deeply embedded in a complex social system, it’s incredibly difficult to gain any insight into what the system you’re enmeshed in really does, or how it does what it does. Even though you’re an integral element of the system, your view is most likely obscured by your lack of interest in finding out or, more likely, by a lack of communication from the dudes in the penthouse. All you can see are trees. No forest, and no sun above the treetops.

A great way of “jumping out of the system” to get a better view and understanding is by modeling. By taking a stab at modeling the static structures and dynamic behaviors of the system you’re in, you can get a much better feel for what’s going on.

A formal language like UML or SysML paired with a good visual drawing tool like Visio can be a powerful tool set to help you gain perspective, but all you really need is a pencil and paper to start things off. Since your first few iterations will suck and be totally wrong, you’ll be throwing away lots of wood pulp if you don’t use an electronic tool. If you stick with it, you’ll acquire an understanding of what really happens in your system as opposed to what is espoused by those in charge.

All models are wrong, but some are useful. – George Box

Leverage Point

April 11, 2010

In this terrific systems article pointed out to me by Byron Davies, Donella Meadows states:

Physical structure is crucial in a system, but the leverage point is in proper design in the first place. After the structure is built, the leverage is in understanding its limitations and bottlenecks and refraining from fluctuations or expansions that strain its capacity.

The first sentence doesn’t tell me anything new, but the second one does. Many systems, especially big software systems foisted upon maintenance teams after they’re hatched to the customer, are not thoroughly understood by many, if any, of the original members of the development team. Upon release, the system “works” (and it may be stable). Hurray!

In the post-delivery phase, as the (always) unheralded maintenance team starts adding new features without understanding the system’s limitations and bottlenecks, the structural and behavioral integrity of the beast surely starts degrading over time. Scarily, the rate of degradation is not constant; it’s more akin to an exponential trajectory. It doesn’t matter how pristine the original design is; it will undoubtedly start its march toward becoming an unlovable “big ball of mud”.

So, how can one slow the rate of degradation in the integrity of a big system that will continuously be modified throughout its future lifetime? The answer is nothing profound and doesn’t require highly skilled specialists or consultants. It’s called PAYGO (pay-as-you-go).

In the PAYGO process, a set of lightweight but understandable and useful multi-level information artifacts that record the essence of the system are developed and co-evolved with the system software. They must be lightweight so that they are easily constructable, navigable, and accessible. They must be useful or post-delivery builders won’t employ them as guidance and they’ll plow ahead without understanding the global ramifications of their local changes. They must be multi-level so that different stakeholder group types, not just builders, can understand them. They must be co-evolved so that they stay in synch with the real system and they don’t devolve into an incorrect and useless heap of misguidance. Challenging, no?

Of course, if builders, and especially front line managers, don’t know how to, or don’t care to, follow a PAYGO-like process, then they deserve what they get. D’oh!

Underbid And Overpromise

April 7, 2010

As usual, I don’t get it. I don’t get the underbid-overpromise epidemic that’s been left untreated for ages. Proposal teams, under persistent pressure from executives to win contracts from customers, and isolated from hearing negative feedback by unintegrated program execution and product development teams, perpetually underbid on price/delivery and overpromise on product features and performance. This unquestioned underbid-overpromise industry worst practice has been entrenched in mediocracies since the dawn of the cover-your-ass, ironclad contract. The undiscussable but real tendency to, uh, “exaggerate” an org’s potential to deliver is baked into the system. That’s because competitors and customers are willing co-conspirators in this cycle of woe. The stalemate ensures that there’s no incentive for changing the busted system. As the saying goes, “if we can’t fix it, it ain’t broke!” D’oh!

If a company actually could take the high road and submit more realistic proposals to customers, it would go out of business, because non-individual customers (i.e., dysfunctional org bureaucracies where no one takes responsibility for outcomes) choose the lowest bidder 99.99999% of the time. I said “actually could” in the previous sentence because most companies “can’t”. That’s because most are so poorly managed that they don’t know what or where their real costs are. Unrecorded overtime, vague and generic work breakdown structures, inscrutable processes, and wrongly charged time all guarantee that the corpo head sheds don’t have a clue where their major cost sinks are. Bummer.

Structure And The “ilities”

October 20, 2009

In nature, structure is an enabler or disabler of functional behavior. No hands – no grasping, no legs – no walking, no lungs – no living. Adding new functional components to a system enables new behavior and subtracting components disables behavior. Changing the arrangement of an existing system’s components and how they interconnect can also trade off qualities of behavior, affectionately called the “ilities”. Thus, changes in structure effect changes in behavior.

The figure below shows a few examples of a change to an “ility” due to a change in structure. Given the structure on the left, the refactored structure on the right leads to an increase in the “ility” listed under the new structure. However, in moving from left to right, a trade-off has been made for the gain in the desired “ility”. For the monolithic->modular case, a decrease in end-to-end response-ability due to added box-to-box delay has been traded off. For the monolithic->redundant case, a decrease in buyability due to the added purchase cost of the duplicate component has been introduced. For the no feedback->feedback case, an increase in complexity has been effected due to the added interfaces. For the bowl->bottle example, a decrease in fill-ability has occurred because of the decreased diameter of the fill interface port.

[Figure: Ilities]

The plea of this story is to increase your aware-ability of the law of unintended consequences. What you don’t know CAN hurt you. When you are bound and determined to institute what you think is a “can’t lose” change to a system that you own and/or control, make an effort to discover and uncover the ilities that will be sacrificed for those that you are attempting to instill in the system. This is especially true for socio-technical systems (do you know of any system that isn’t a socio-technical system?) where the influence on system behavior by the technical components is always dwarfed by the influence of the components that are comprised of groups of diverse individuals.
