Design Thinking’s Timely Death

This is a slightly abbreviated repost of a piece by the same name that I posted two years ago today. It was reblogged in a few places, including, oddly, the site of a European design school. I was surprised at the high ratio of praise to condemnation this generated. For a thoughtful opposing view, see this piece on the Censemaking blog. Two years later, Design Thinking appears to have the same degree of promise, the same advocates and detractors, and even more misappropriation and co-opting.


Design Thinking is getting a new life. We should bury it instead. Here’s why.

Its Humble Origins

In 1979 Bruce Archer, renowned engineer and professor at the Royal College of Art, wrote in a Design Studies paper,

“There exists a designerly way of thinking and communicating that is both different from scientific and scholarly ways of thinking and communicating, and as powerful as scientific and scholarly methods of inquiry when applied to its own kinds of problems.”

Innocent enough in context, Archer’s statement was likely the impetus for the problematic term, Design Thinking. Archer convincingly argued that design warranted a third fundamental area of education along with science and humanities. The next year Bryan Lawson at the University of Sheffield wrote How Designers Think, now in its 4th edition. Peter Rowe then authored Design Thinking in the mid-1980s. At that time, design thinking mainly referred to thinking about design and the mental process of designing well. In the mid-1990s, management consultancies, seeking new approaches to sell to clients looking outside their box for a competitive edge, pounced on Design Thinking. Design Thinking then transformed into a conceptual framework, a design-centered management initiative, that deified a narrow subset of people engaged in design – those who defined the shape of products, typically called “designers.”

These designers – again, a subset of those Archer was addressing – think differently, Lawson told us. His point was valid. But many professionals – some of them designers – read much more into his observation. Many readers inferred that designers have a special – almost mystical – way of knowing. Designers, suddenly with guru status, were in demand for advisory roles. Design firms didn’t at all mind becoming management consultancies and being put in the position of advising CEOs not only on product definition but on matters ranging from personnel to market-segment analysis. It paid well, and designers found the view from atop this new pedestal refreshing. But any value that may have existed in teaching “designerly” ways to paper pushers, bean counters, and silo builders deflated as Design Thinking was reshaped into yet another n-step improvement process by legacy consulting firms.

If you find my summary overly cynical, consider that Bruce Nussbaum, once one of Design Thinking’s most vocal advocates, now calls it a failed experiment. Don Norman, IDEO fellow and former VP of Apple, calls the idea that designers possess a creative thought process superior to everyone else’s “a myth lacking any evidence.” He sees Design Thinking as now being a public relations term that mystifies an ineffective approach in order to convince business that designers can add value to problems like healthcare, pollution, and organizational dynamics. It’s a term that needs to die, says Norman. Peter Merholz, president of Adaptive Path, calls BusinessWeek’s recent praise of design thinking “fetishistic.” He facetiously suggests that to fix things, you can simply “apply some right-brained turtleneck-wearing ‘creatives,’ ‘ideating’ tons of concepts … out of whole cloth.”

Analysis and Synthesis Again

Misunderstood science contributed to the early days of Design Thinking in the same way that it informed Systems Thinking. As with Systems Thinking, confusion about the relationship between analysis and synthesis was fundamental to the development of Design Thinking. Recall that in science, synthesis is the process of inferring effects from given causes; whereas analysis is the route by which we seek the causes of observed effects. Loosely speaking, using this first definition of synthesis, analysis is the opposite of synthesis. In broader usage synthesis indicates combining components to form something new that has properties not found in its components (we’ll call this definition 2). I’ll touch on the consequence of conflating the two definitions of synthesis below.

In How Designers Think, Lawson performed a famous experiment on two groups, one of architects and one of scientists, involving combining colored blocks to achieve a specified design, where some of the rules about block combinations were revealed only by experimentation. The architects did better than the scientists in this test. Lawson repeated the experiment with groups of students just entering educational programs for scientists and architects. Both student groups did worse than their trained counterparts. From this experiment Lawson concluded that the educational experience of the different professions caused the difference in thinking styles, while acknowledging that those more adept at thinking in the abstract might be more inclined toward architecture than science.

Lawson concluded that the scientists tried to maximize the information available to them about the allowed combinations; i.e., they sought to identify the governing rules. In contrast, the architects, he concluded, aimed directly at achieving the desired result, replacing blocks only when rules emerged showing the attempted arrangement unworkable or disallowed. From these conclusions about why the groups behaved as they did, Lawson secondarily concluded that:

The essential difference between these two strategies is that while the scientists focused their attention on discovering the rule, the architects were obsessed with achieving the desired result. The scientists adopted a generally problem-focused strategy and the architects a solution-focused strategy.

Lawson’s work is fascinating, and How Designers Think is still a great read 30 years later; but there are huge leaps of inference in his conclusions summarized above. Further, the choice of language is opportunistic. A simpler reading of the facts (one less reliant on characterizing the participants’ states of mind and less dependent on semantics) might be that architects are better at building structures (architecting) than scientists are. A likely cause is that architects are trained to build structures and scientists are not. An experiment involving “design” of a corrosion-resistant steel alloy might well find the scientists to be more creative (successful at creating or synthesizing such a result).

Lawson correctly observes that, generally speaking, architects learn about the nature of the problem largely as a result of trying out solutions, whereas scientists set out specifically to study the problem to discover the relevant principles. Presumably, most engineers would fall somewhere between these extremes. While trying out solutions might not be universally applicable (not a good choice for tall buildings, reactors and aircraft) scientists, business managers, and many others too often forget to use the “designerly” approach to challenges – including trying out different solutions early in the game. Further, anyone who has seen corporate analysis-paralysis in action (inaction) can readily see where more architect-style thinking might be useful in many business problems. However, much that has been built on Lawson’s findings cannot bear the weight of real business.

Design – A Remedy for Destructive Science?

In “Designerly Ways of Knowing,” a 1982 paper in Design Studies, Nigel Cross concluded from Lawson’s work that:

These experiments suggest that scientists problem-solve by analysis, whereas designers problem-solve by synthesis.

Cross’s statement – quoted ad nauseam by the worst hucksters of Design Thinking – has several logical problems, especially when removed from its context. First, assuming Lawson’s findings correct, Cross erroneously equates rule discovery (how scientists solve problems) with analysis. Second, it implies that analysis (seeking causes for observed effects) is the opposite not of definition 1 of synthesis above but of definition 2 (building something new out of components). Thus by substitution, the reader infers that building something is the opposite of analyzing something. This position is obviously wrong on logical grounds, yet it is deeply ingrained in popular thought and in many introductions to Design Thinking.

The error is due to choice of language, choice of examples, and semantic equivocation. Analysis of composition differs from analysis of function. Further, analysis of composition can be physical or conceptual. The destructive connotation of analysis applies only when value judgment is attached to physical decomposition. You analyze a frog by dissecting it (murderer!). You analyze a clock by disassembling it – no, by tearing it apart. This wording needlessly condemns the concept of analysis from the start. But what if you analyze the compressive strength of stone by building a tower of stone blocks? Or if you analyze trends by building software? How about analyzing electrical components by building a circuit? And what of Lawson’s architects, who analyzed the feasibility of certain arrangements of blocks by using a solution-focused strategy? In these examples analysis appears less villainous.

In its original context, Cross’s analysis-synthesis statement – though technically incorrect – makes a point. We gather that architects aim initially for a satisfactory solution, then seek to refine it if possible, rather than methodically discovering the parameters of the problem. Despite providing fodder for less thoughtful advocates of Design Thinking, Cross advanced the field by making a solid case for the value of design education, defending his position that such education develops skills for solving real-world, ill-defined problems, and promotes visual thinking and iconic modes of cognition. It’s unfortunate that his analysis-synthesis quote has been put to such facile use.

For Archer, Lawson, and Cross, Design Thinking was largely about design, design education, and the insights that good design skills bring, such as welcoming new points of view and fresh insights, challenging implicit constraints, and consciously avoiding stomping on the creative spirit. But Design Thinking after the mid-1990s set unrealistic goals. It wasn’t just Design Thinking’s reliance on a shaky conception of analysis and synthesis that set it adrift. It was the expansion of scope and the mark left by its corporate usurpers, subjecting the term to endless redefinition and reducing it to jargon. While Tim Brown’s Change by Design does venture fairly far into the realm of corporate renewal, he still tends to keep design on center stage. But in the writings of more ambitious gurus, Design Thinking has strayed far from its roots. For Thomas Lockwood (Design Thinking: Integrating Innovation, Customer Experience, and Brand Value), Design Thinking seems to be a transformation of consciousness that will not only nourish corporate creativity but will cure societal ills, fix the economy, and rescue the environment.

1953 Alfa Romeo BAT

Design Tweeting

A recent WSJ article explains that Design Thinking “uses close, almost anthropological observation of people to gain insight into problems.” Search Twitter for Design Thinking and you’ll find recent tweets from initiates having discovered this cutting-edge concept. “Kick off your week with a new way of thinking: Design Thinking.” “Supply chain thought leadership through Design Thinking.” “Use design thinking to find the right-fit job.” One advocate proclaims Design Thinking to be the means to overcome emotional resistance to change.

Don Norman is on the mark when he reminds us that radical breakthrough ideas and creative thinking somehow managed to shape history before the advent of Design Thinking. Norman observes, “‘Design Thinking’ is what creative people in all disciplines have always done.” Breakthroughs happen when people find fresh insights, break outmoded rules, and get new perspectives through conscious effort – all without arcane modes of thinking.

Rational Thinking – The Next Old Thing

Design Thinking has lost its focus – and perhaps its mind. The term has been redefined to the point of absurdity. And its overworked referent has drifted from an attitude and guiding principle to yet another hackneyed process in a long line of bankrupt business improvement initiatives, passionately embraced by amnesic devotees for a few months until the next one comes along. This might be the inevitable fate of brands that no one owns (e.g., “Design Thinking”) spawned by innovators, put into the public domain, and hijacked by consultancies that prey on business managers seeking that infusion of quick-transformation magic.

In short, Design Thinking is hopelessly contaminated. There’s too much sleaze in the field. Let’s bury it and get back to basics like good design. Everyone already knows that solution-focus is as essential as problem-focus. Stop arguing the point. If good design doesn’t convince the world that design should be fully integrated into business and society, another over-caffeinated Design Thinking program isn’t likely to do so either.



The Onagawa Reactor Non-Meltdown

On March 11, 2011, the strongest earthquake in Japanese recorded history hit Tohoku, leaving about 15,000 dead. The closest nuclear reactor to the quake’s epicenter was the Onagawa Nuclear Power Station, operated by Tohoku Electric Power Company. Despite the earthquake and the subsequent tsunami that destroyed the town of Onagawa, the Onagawa nuclear facility remained intact and shut itself down safely, without incident. The facility was the vicinity’s only safe evacuation destination. Residents of Onagawa left homeless by the natural disasters sought refuge in the facility, where its workers provided food.

The more famous Fukushima nuclear facility was about twice as far from the earthquake’s epicenter. The tsunami at Fukushima was slightly less severe. Fukushima experienced three core meltdowns, resulting in the evacuation of 300,000 people. The findings of the Fukushima Nuclear Accident Independent Investigation Commission have been widely published. They conclude that Fukushima failed to meet the most basic safety requirements, had conducted no valid probabilistic risk assessment, had no provisions for containing damage, and that its regulators operated in a network of corruption, collusion, and nepotism. Kiyoshi Kurokawa, Chairman of the commission, stated:

THE EARTHQUAKE AND TSUNAMI of March 11, 2011 were natural disasters of a magnitude that shocked the entire world. Although triggered by these cataclysmic events, the subsequent accident at the Fukushima Daiichi Nuclear Power Plant cannot be regarded as a natural disaster. It was a profoundly manmade disaster – that could and should have been foreseen and prevented.

Only by grasping [the mindset of Japanese bureaucracy] can one understand how Japan’s nuclear industry managed to avoid absorbing the critical lessons learned from Three Mile Island and Chernobyl. It was this mindset that led to the disaster at the Fukushima Daiichi Nuclear Plant.

The consequences of negligence at Fukushima stand out as catastrophic, but the mindset that supported it can be found across Japan.

Despite these findings, the world’s response to Fukushima has been much more focused on opposition to nuclear power than on opposition to corrupt regulatory bodies and the cultures that foster them.

Two scholars from USC, Airi Ryu and Najmedin Meshkati, recently published “Why You Haven’t Heard About Onagawa Nuclear Power Station after the Earthquake and Tsunami of March 11, 2011,” their examination of the contrasting safety mindsets of TEPCO, the firm operating the Fukushima nuclear plant, and Tohoku Electric Power, the firm operating Onagawa.

Ryu and Meshkati reported vast differences in personal accountability, leadership values, work environments, and approaches to decision-making. Interestingly, they found even Tohoku Electric to be weak in setting up an environment where concerns could be raised and where an attitude of questioning authority was encouraged. Nevertheless, TEPCO was far inferior to Tohoku Electric in all other safety culture traits.

Their report is worth a read for anyone interested in the value of creating a culture of risk management and the need for regulatory bodies to develop non-adversarial relationships with the industries they oversee, something I discussed in a recent post on risk management.


Incommensurability and the Design-Engineering Gap

Those who conceptualize products – particularly software – often have the unpleasant task of explaining their conceptual gems to unimaginative, sanctimonious engineers entrenched in the analytic mire of in-the-box thinking. This communication directs the engineers to do some plumbing and flip a few switches that get the concept to its intended audience or market… Or, at least, this is how many engineers think they are viewed by designers.

Truth is, engineers and creative designers really don’t speak the same language. This is more than just a joke. Many posts here involve the philosopher of science Thomas Kuhn. Kuhn’s idea of incommensurability between scientific paradigms also fits the design-engineering gap well. Those who claim the label “designers” believe design to be a highly creative, open-ended process with no right answer. Many engineers, conversely, understand design – at least within their discipline – to mean a systematic selection of components progressively integrated into an overall system, guided by business constraints and the laws of nature and reason. Disagreement on the meaning of design is just the start of the conflict.

Kuhn concluded that the lexicon of a discipline constrains the problem space and conceptual universe of that discipline. I.e., there is no fundamental theory of meaning that applies across paradigms. The meaning of expressions inside a paradigm comply only with the rules of that paradigm.  Says Kuhn, “Conceptually, the world is our representation of our niche, the residence of the particular human community with whose members we are currently interacting” (The Road Since Structure, 1993, p. 103). Kuhn was criticized for exaggerating the extent to which a community’s vocabulary and word usage constrains the thoughts they are able to think. Kuhn saw this condition as self-perpetuating, since the discipline’s constrained thoughts then eliminate any need for expansion of its lexicon. Kuhn may have overplayed his hand on incommensurability, but you wouldn’t know it from some software-project kickoff meetings I’ve attended.

This short sketch, The Expert, written and directed by Lauris Beinerts, portrays design-engineering incommensurability from the perspective of the sole engineer in a preliminary design meeting.

See also: Debbie Downer Doesn’t Do Design


Arianna Huffington, Wisdom, and Stoicism 1.0

Arianna Huffington spoke at The Commonwealth Club in San Francisco last week. Interviewed by Facebook COO Sheryl Sandberg, Huffington spoke mainly on topics in her recently published Thrive: The Third Metric to Redefining Success and Creating a Life of Well-Being, Wisdom, and Wonder. 2,500 attendees packed Davies Symphony Hall. Several of us were men.

Huffington began with the story of her wake-up call to the idea that success is killing us. She told of collapsing from exhaustion, hitting the corner of her desk on the way down, gashing her forehead and breaking her cheek bone.

She later realized that “by any sane definition of success, if you are lying in a pool of blood on the floor of your office you’re not a success.”

After this epiphany Huffington began an inquiry into the meaning of success. The first big change was realizing that she needed much more sleep. She joked that she now advises women to sleep their way to the top. Sleep is a wonder drug.

Her reexamination of success also included personal values. She referred to ancient philosophers who asked what a good life is. She explicitly identified her current doctrine with that of the Stoics (not to be confused with the modern use of the term stoic). “Put joy back in our everyday lives,” she says. She finds that we have shrunk the definition of success down to money and power, and now we need to expand it again. Each of us needs to define success by our own criteria, hence the name of her latest book. The third metric in her book’s title includes focus on well-being, wisdom, wonder, and giving.

Refreshingly (for me at least) Huffington drew repeatedly on ancient western philosophy, mostly that of the Stoics. In keeping with the Stoic style, her pearls often seem self-evident only after the fact:

“The essence of what we are is greater than whatever we are in the world.” 

Take risk. See failure as part of the journey, not the opposite of success. (paraphrased) 

I do not try to dance better than anyone else. I only try to dance better than myself. 

“We may not be able to witness our own eulogy, but we’re actually writing it all the time, every day.” 

“It’s not ‘What do I want to do?’, it’s ‘What kind of life do I want to have?’”

“Being connected in a shallow way to the entire world can prevent us from being deeply connected to those closest to us, including ourselves.” 

“‘My life has been full of terrible misfortunes, most of which never happened.’” (citing Montaigne)

As you’d expect, Huffington and Sandberg suggested that male-dominated corporate culture betrays a dearth of several of the qualities embodied in Huffington’s third metric. Huffington said the most popular book among CEOs is the Chinese military treatise, The Art of War. She said CEOs might do better to read children’s books like Silverstein’s The Giving Tree or maybe Make Way for Ducklings. Fair enough; there are no female Bernie Madoffs.

I was pleasantly surprised by Huffington. I found her earlier environmental pronouncements to be poorly conceived. But in this talk on success, wisdom, and values, she shone. Huffington plays the part of a Stoic well, though some of the audience seemed to judge her more of a sophist. One attendee asked her if she really believed that living the life she identified in Thrive could have possibly led to her current success. Huffington replied yes, of course, adding that she, like Bill Clinton, had made all her biggest mistakes while tired.

Huffington’s quotes above align well with the ancients. Consider these from Marcus Aurelius, one of the last of the great Stoics:

Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth. 

Very little is needed to make a happy life; it is all within yourself, in your way of thinking. 

Confine yourself to the present.

 Be content to seem what you really are. 

The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.

I particularly enjoyed Huffington’s association of sense-of-now, inner calm, and wisdom with Stoicism, rather than, as is common in Silicon Valley, with a misinformed and fetishized understanding of Buddhism. Further, her fare was free of the intellectualization of mysticism that’s starting to plague Wisdom 2.0. It was a great performance.





Preach not to others what they should eat, but eat as becomes you, and be silent. - Epictetus



Multiple-Criteria Decision Analysis in the Engineering and Procurement of Systems

The use of weighted-sum value matrices is a core component of many system-procurement and organizational decisions, including risk assessments. In recent years the USAF has eliminated weighted-sum evaluations from most procurement decisions, on the basis that system requirements should set accurate performance levels that, once met, reduce procurement decisions to simple competition on price. This probably oversimplifies things. For example, the acquisition cost of an aircraft system might be easy to establish. But the life-cycle cost of systems that include wear-out or limited-fatigue-life components requires forecasting and engineering judgment. In other areas of systems engineering, such as trade studies, maintenance planning, spares allocation, and especially risk analysis, multi-attribute or multi-criteria decisions are common.

Weighted-sum criterion matrices (and their relatives, e.g., weighted-product, AHP, etc.) are often criticized in engineering decision analysis for some valid reasons. These include non-independence of criteria, difficulties in normalizing and converting measurements and expert opinions into scores, and logical/philosophical concerns about decomposing subjective decisions into constituents.

Years ago, a team of systems engineers and I, while working through the issues of using weighted-sum matrices to select subcontractors for aircraft systems, experimented with comparing the problems we encountered in vendor selection to an unrelated multi-attribute decision process: mate selection. We met the same issues in attempting to create criteria, weight those criteria, and establish criterion scores in both decision processes, despite the fact that one process seems highly technical and the other completely non-technical. This exercise emphasized the degree to which aircraft-system vendor selection involves subjective decisions. It also revealed that, despite the weaknesses of using weighted sums to make decisions, the process of identifying, weighting, and scoring the criteria for a decision greatly enhanced the engineers’ ability to give an expert opinion. But this final expert opinion was often at odds with the one derived from weighted-sum scoring, even after attempts to adjust the weightings of the criteria.

Weighted-sum and related numerical approaches to decision-making interest me because I encounter them in my work with clients. They are central to most risk-analysis methodologies, and, therefore, central to risk management. The topic is inherently multidisciplinary, since it entails engineering, psychology, economics, and, in cases where weighted sums derive from multiple participants, social psychology.

This post is an introduction-after-the-fact to my previous post, How to Pick a Spouse. I’m writing this brief prequel because blog-excerpting tools tend to use only the first few lines of a post, and on that basis my post appeared to be about mate selection rather than decision analysis, its main point.

If you’re interested in multi-attribute decision-making in the engineering of systems, please continue now to How to Pick a Spouse.




Katz’s Law: Humans will act rationally when all other possibilities have been exhausted.



How to Pick a Spouse

Beckhap’s Law asserts that brains times beauty equals a constant. Can this be true? Are intellect and beauty quantifiable? Is beauty a property of the subject of investigation, or a quality of the mind of the beholder? Are any other relevant variables (attributes) intimately tied to brains or beauty? Assuming brains and beauty are both desirable, Beckhap’s Law implies an optimization exercise – picking a point on the reciprocal function representing the best compromise between brains and beauty. Presumably, this point differs for every evaluator. It raises questions about the marginal utility of brains and beauty. Is it possible that too much brain or too much beauty could be a liability? (Engineers would call this an edge-case check of Beckhap’s validity.) Is Beckhap’s Law of any use without a cost axis? Other axes? In practice, if taken seriously, Beckhap’s Law might be merely one constraint in a multi-attribute decision process for selecting a spouse. It also sheds light on the problems of Air Force procurement of the components of a weapons system and a lot of other decisions. I’ll explain why.
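If we took the law literally, the optimization it implies can be sketched in a few lines of Python. Everything numeric here is invented – the constant, the weights, and the balance-seeking utility function – the point is only that fixing the product of brains and beauty reduces the choice to picking one point on a reciprocal curve:

```python
# Hypothetical sketch of Beckhap's Law as a constrained choice: with
# brains * beauty fixed at a constant K, an evaluator picks one point
# on the reciprocal curve. K, the weights, and the utility function
# are all invented for illustration.
K = 100.0  # Beckhap's constant (assumed)

def utility(brains, beauty, w_brains=0.6, w_beauty=0.4):
    # Utility is the weaker of the two weighted attributes, so this
    # evaluator prefers balance over an extreme of either attribute
    return min(w_brains * brains, w_beauty * beauty)

# Grid search along the reciprocal curve beauty = K / brains
candidates = ((b, K / b) for b in (x / 10 for x in range(10, 1000)))
best = max(candidates, key=lambda p: utility(*p))
```

With these invented numbers the search settles near brains ≈ 8.2, beauty ≈ 12 – an “optimum” that says more about the arbitrary utility function than about anyone’s spouse.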

C-17 aircraft photo

I’ll start with an overview of how the Air Force oversees contract awards for aircraft subsystems – at least how it worked through most of USAF history, before recent changes in procurement methods.  Historically, after awarding a contract to an aircraft maker, the aircraft maker’s engineers wrote specs for its systems. Vendors bid on the systems by creating designs described in proposals submitted for competition. The engineers who wrote the specs also created a list of a few dozen criteria, with weightings for each, on which they graded the vendors’ proposals. The USAF approved this criteria list and their weightings before vendors submitted their proposals to ensure the fairness deserved by taxpayers. Pricing and life-cycle cost were similarly scored by the aircraft maker. The bidder with the best total score got the contract.

A while back I headed a team of four engineers, all single men, designing and spec’ing out systems for a military jet. It took most of a year to write these specs. Six months later we received proposals hundreds of pages long. We graded the proposals according to our predetermined list of criteria. After computing the weighted sums (sums of score times weight for each criterion) I asked the engineers if the results agreed with their subjective judgments. That is, did the scores agree with the subjective judgment of best bidder these engineers had made independent of the scoring process? Only about half did. I asked the team why they thought the scored results differed from their subjective judgments.
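The weighted-sum mechanics described above fit in a few lines. The criteria, weights, and vendor scores below are invented stand-ins, not actual proposal data:

```python
# Minimal sketch of weighted-sum proposal scoring. Criteria, weights
# (summing to 1.0), and vendor scores are invented for illustration.
weights = {"reliability": 0.30, "weight": 0.25, "cost": 0.25, "maintainability": 0.20}

proposals = {  # criterion scores on a 0-100 scale
    "Vendor A": {"reliability": 85, "weight": 70, "cost": 90, "maintainability": 60},
    "Vendor B": {"reliability": 75, "weight": 95, "cost": 70, "maintainability": 80},
}

def weighted_sum(scores, weights):
    # Total = sum over all criteria of (score * weight)
    return sum(scores[c] * weights[c] for c in weights)

totals = {vendor: weighted_sum(s, weights) for vendor, s in proposals.items()}
winner = max(totals, key=totals.get)  # best bidder by computed score
```

Here Vendor B edges out Vendor A, 79.75 to 77.5 – exactly the kind of verdict the engineers then compared against their unaided judgment.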

They proposed several theories. A systems engineer, viewing the system from the perspective of its interactions and interfaces with the entire aircraft, may not be familiar with all the internal details of the system while writing specs. You learn many of these details by reading the vendors’ proposals, so you’re better suited to create the criteria list after reading the proposals. But the criteria and their weightings are fixed at that point because of the fairness concern. Anonymized proposals might preserve fairness and allow better criteria lists, one engineer offered.

But there was more to the disconnect between their subjective judgments of “best candidate” and the computed results. Someone immediately cited the problem of normalization. Converting weight in pounds, for example, to a dimensionless score (e.g., a grade of 0 to 100) was problematic. If minimum product weight is the goal, how do you convert three vendors’ product weights into grades on the 100 scale? Giving the lowest weight 100 points and subtracting the percentage weight delta from the others feels arbitrary – because it is. Doing so compresses the scores excessively – making you want to assign a higher weighting to product weight to compensate for the clustering of the product-weight scores. Since you’re not allowed to do that, you invent some other ad hoc means of increasing the difference between scores. In other words, you work around the weighted-sum concept to comply with the spirit of the rules without actually breaking them. But you still end up with a method in which you’re not terribly confident.
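The compression is easy to demonstrate. The vendor product weights below are invented; the scheme gives the lightest product 100 points and docks the others their percentage delta from it:

```python
# Sketch of the normalization problem: converting raw product weights
# (pounds) to dimensionless 0-100 scores. Vendor figures are invented.
product_weights = {"Vendor A": 102.0, "Vendor B": 100.0, "Vendor C": 105.0}  # lbs

lightest = min(product_weights.values())

def score(lbs):
    # Lightest product gets 100; others lose their percentage delta
    return 100.0 - 100.0 * (lbs - lightest) / lightest

scores = {vendor: score(lbs) for vendor, lbs in product_weights.items()}
# A 5% spread in actual weight becomes a mere 5-point spread in score,
# compressing this criterion's influence on the weighted total
```

The three scores land at 98, 100, and 95 – clustered near the top of the scale, which is what tempts evaluators to inflate the criterion’s weighting to compensate.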

A bright young engineer named Hui then hit on a major problem of the weighted-sum scoring approach. He offered that the criteria in our lists were not truly independent; they interacted with each other. Further, he noted, it would be impossible to create a list of criteria that were truly independent. Nature, physics and engineering design just don’t work like that. On that thought, another engineer said that even if the criteria represented truly independent attributes of the vendors’ proposed systems, they might not be independent in a mental model of quality judgment. For example, there may be a logical quality composed of a nonlinear relationship between reliability, spares cost, support equipment, and maintainability. Engineering meets philosophy.
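Hui’s objection can be made concrete with a toy example. The “perceived quality” interaction below is invented – it stands in for any judgment that couples criteria nonlinearly:

```python
# Toy illustration of non-independent criteria: if judged quality
# couples two criteria multiplicatively, a linear weighted sum cannot
# distinguish candidates the judgment ranks differently. Scores invented.
def perceived_quality(reliability, maintainability):
    # Judgment rewards the two attributes jointly, not separately
    return reliability * maintainability

def weighted_sum(reliability, maintainability, w1=0.5, w2=0.5):
    return w1 * reliability + w2 * maintainability

# Two candidates with identical weighted sums (0-100 scale scores)
lopsided = (90, 50)   # strong reliability, weak maintainability
balanced = (70, 70)

tie = weighted_sum(*lopsided) == weighted_sum(*balanced)  # linear model sees a tie
prefers_balanced = perceived_quality(*balanced) > perceived_quality(*lopsided)
```

The linear score ties the two candidates (70 each), while the multiplicative judgment clearly prefers the balanced one (4900 vs. 4500) – and no fixed set of weights reproduces a multiplicative preference across all candidates, since a linear function’s level sets are lines, not hyperbolas.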

We spent lunch critiquing and philosophizing about multi-attribute decision-making. Where else is this relevant, I asked. Hui said, “Hmmm, everywhere?” “Dating!” said Eric. “Dating, or marriage?” I asked. They agreed that while their immediate dating interests might suggest otherwise, all four were in fact interested in finding a spouse at some point. I suggested we test multi-attribute decision matrices on this particular decision. They accepted the challenge. Each agreed to make a list of past and potential future candidates to wed, without regard for the likelihood of any mutual interest the candidate might have. Each also would independently prepare a list of criteria on which to rate the candidates. To clarify, each engineer would develop his own criteria, weightings, and scores for his own candidates only. No multi-party (participatory) decisions were involved; those raise other complex issues beyond our scope here (e.g., differing degrees of over/under-confidence among participants, the doctrinal paradox, etc.). Sharing the list would be optional.

Nevertheless, on completing their criteria lists, everyone was happy to share criteria and weightings. There were quite a few non-independent attributes related to appearance, grooming and dress, even within a single engineer’s list. Likewise with intelligence. Then there was sense of humor, quirkiness, religious compatibility, moral virtues, education, type A/B personality, all the characteristics of Myers-Briggs, Eysenck, MMPI, and assorted personality tests. Each engineer rated a handful of candidates and calculated the weighted sum for each.

I asked everyone if their winning candidate matched their subjective judgment of who the winner should have been. A resounding no, across the board.

Some adherents of rigid multi-attribute decision processes address such disconnects between intuition and weighted-sum decision scores by suggesting that in this case we merely adjust the weightings. For example, MindTools suggests:

“If your intuition tells you that the top scoring option isn’t the best one, then reflect on the scores and weightings that you’ve applied. This may be a sign that certain factors are more important to you than you initially thought.”

To some, this sounds like an admission that subjective judgment is more reliable than the results of the numerical exercise. Regardless, no amount of adjusting scores and weights left the engineers confident that the method worked. No adjustment to the weight coefficients seemed to properly express tradeoffs between some of the attributes. That is, no tweaking of the system ordered the candidates (from high to low) in a way that made sense to each evaluator, so the redesigned formula still wasn’t trustworthy. Again, the matter of complex interactions of non-independent criteria came up. The relative importance of attributes seems to change as one contemplates different aspects of a thing. A philosopher’s perspective would be that normative statements cannot be made descriptive by decomposition. Analytic methods don’t answer normative questions.
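
Hui’s point about interacting criteria can be made concrete. The sketch below (candidates and attribute values invented) assumes a deliberately nonlinear intuitive judgment – a system is only as good as its weakest attribute – and searches a fine grid of weightings for any weighted sum that honors every resulting preference. None exists:

```python
# Hypothetical candidates scored 0-10 on two interacting attributes,
# say reliability and maintainability. Suppose the evaluator's intuition
# is "a system is only as good as its weakest attribute" -- a min, not a sum.
candidates = {"A": (10, 0), "B": (0, 10), "C": (5, 5), "D": (9, 4)}

def intuitive(attrs):
    return min(attrs)  # a nonlinear quality judgment

# All strict pairwise preferences implied by the intuitive judgment.
prefs = [(a, b) for a in candidates for b in candidates
         if intuitive(candidates[a]) > intuitive(candidates[b])]

# Search a fine grid of weightings for one honoring every preference.
found = False
for i in range(1001):
    w1 = i / 1000
    w2 = 1.0 - w1
    score = {c: w1 * x + w2 * y for c, (x, y) in candidates.items()}
    if all(score[a] > score[b] for a, b in prefs):
        found = True
        break

print("some weighting reproduces the intuitive preferences:", found)
# prints False: the balanced candidate C cannot strictly beat both
# lopsided candidates A and B under any linear weighting.
```

No choice of coefficients ranks the balanced candidate above both lopsided ones, which is exactly the frustration the engineers reported when tweaking weights by hand.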

Interestingly, all the engineers felt that listing criteria and scoring them helped them make better judgments about the ideal spouse, but not the judgments resulting directly from the weighted-sum analysis.

Fact is, picking which supplier should get the contract and picking the best spouse candidate are normative, subjective decisions. No amount of dividing a subjective decision into components makes it objective. Nor does any amount of ranking or scoring. A quantified opinion is still an opinion. This doesn’t mean we shouldn’t use decision matrices or quantify our sentiments, but it does mean we should not hide behind such quantifications.

From the perspective of psychology, decomposing the decision into parts seems to make sense. Expert opinion is known to be sometimes marvelous, sometimes terribly flawed. Daniel Kahneman writes extensively on associative coherence, finding that our natural, untrained tendency is to reach conclusions first and justify them second. Kahneman and Gary Klein looked in detail at expert opinions in “Conditions for Intuitive Expertise: A Failure to Disagree” (American Psychologist, 2009). They found that short-answer expert opinion can be very poor. But they found that the subjective judgments of experts forced to examine details and contemplate alternatives – particularly when they have sufficient experience to close the intuition feedback loop – are greatly improved.

Their findings seem to support the aircraft engineers’ views of the weighted-sum analysis process. Despite the risk of confusing reasons with causes, enumerating the evaluation criteria and formally assessing them aids the subjective decision process. Doing so left them more confident about their decisions, for spouse and for aircraft system, though those decisions differed from the ones produced by weighted sums. In the case of the aircraft systems, the engineers had to live with the results of the weighted-sum scoring.

I was one of the engineers who disagreed with the results of the aircraft system decisions. The weighted-sum process awarded a very large contract to the firm whose design I judged inferior. Ten years later, service problems were severe enough that the Air Force agreed to switch to the vendor I had subjectively judged best. As for the engineer-spouse decisions, those of my old engineering team are all successful so far. It may not be a coincidence that the divorce rates of engineers are among the lowest of all professions.


Hedy Lamarr was granted a patent for spread-spectrum communication technology, paving the way for modern wireless networking.


A New Era of Risk Management?

The quality of risk management has mostly fallen for the past few decades. There are signs of change for the better.

Risk management is a broad field; many kinds of risk must be managed. Risk is usually defined in terms of probability and cost of a potential loss. Risk management, then, is the identification, assessment and prioritization of risks and the application of resources to reduce the probability and/or cost of the loss.
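
Under that definition, the core arithmetic is simple. A minimal sketch (all risks and figures below are invented for illustration) computes expected loss as probability times cost and directs attention to the largest expected losses first:

```python
# (name, annual probability of the loss event, cost of the loss if it occurs)
risks = [
    ("warehouse fire",  0.01, 2_000_000),
    ("data breach",     0.10,   500_000),
    ("shipping damage", 0.60,    20_000),
]

# Expected annual loss = probability * cost; prioritize by it, largest first.
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, p, cost in prioritized:
    print(f"{name}: expected annual loss ${p * cost:,.0f}")
```

Note that the rare-but-catastrophic fire outranks the frequent-but-cheap shipping damage, which is the ordering a simple frequency count would miss.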

The earliest and most accessible example of risk management is insurance, first documented in about 1770 BC in the Code of Hammurabi (e.g., rules 23, 24, and 48). The Code addresses both risk mitigation, through threats and penalties, and minimizing loss to victims, through risk pooling and insurance payouts.

Insurance was the first example of risk management getting serious about risk assessment. Both the frequentist and quantified subjective risk measurement approaches (see recent posts on belief in probability) emerged from actuarial science developed by the insurance industry.

Risk assessment, through its close relatives, decision analysis and operations research, got another boost from World War II. Big names like Alan Turing, John von Neumann, Ian Fleming (later the James Bond author) and teams at MIT, Columbia University and Bletchley Park put quantitative risk analyses of several flavors on the map.

Today, “risk management” applies to security guard services, portfolio management, terrorism and more. Oddly, much of what is called risk management involves no risk assessment at all, and is therefore inconsistent with the above definition of risk management, paraphrased from Wikipedia.

Most risk assessment involves quantification of some sort. Actuarial science and the probabilistic risk analyses used in aircraft design are probably the “hardest” of the hard risk measurement approaches. Here, “hard” means the numbers used in the analyses come from measurements of real-world values like auto accidents, lightning strikes, cancer rates, and the historical failure rates of computer chips, valves and motors. “Softer” analyses, still mathematically rigorous, involve quantified subjective judgments in tools like Monte Carlo analyses and Bayesian belief networks. As the code breakers and submarine hunters of WWII found, trained experts using calibrated expert opinions can surprise everyone, even themselves.
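
A minimal sketch of the softer, quantified-subjective style (all figures invented): an expert supplies low/likely/high estimates for the cost of a loss event plus a probability that it occurs in a given year, and a Monte Carlo simulation turns those judgments into an expected annual loss:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Calibrated expert's subjective inputs (hypothetical figures).
p_event = 0.15                                # annual probability of the event
low, mode, high = 50_000, 120_000, 400_000    # triangular cost-of-loss estimate

trials = 100_000
losses = [
    random.triangular(low, high, mode) if random.random() < p_event else 0.0
    for _ in range(trials)
]

expected = sum(losses) / trials
print(f"simulated expected annual loss: ${expected:,.0f}")
# roughly p_event * mean of the triangular distribution, i.e. about $28,500
```

The same machinery extends naturally to many correlated risks, which is where Monte Carlo methods earn their keep over single-point estimates.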

A much softer, yet still quantified (barely), approach to risk management using expert opinion is the risk matrix familiar to most people: on a scale of 1 to 4, rate the following risks…, etc. Many researchers have shown it to be worse than useless in many cases, for a variety of reasons. Yet it remains the core of risk analysis in many areas of business and government, across many types of risk (reputation, credit, project, financial and safety). Finally, some of what is called risk management involves no quantification, ordering, or classifying. Call it expert intuition or qualitative audit.

These soft categories of risk management most arouse the ire of independent and small-firm risk analysts. Common criticisms by these analysts include:

1. “Risk management” has become jargonized and often involves no real risk analysis.
2. Quantification of risk in some spheres is plagued by garbage-in-garbage-out. Frequency-based models are taken as gospel, and believed merely because they look scientific (e.g., Fukushima).
3. Quantified/frequentist risk analyses are not used in cases where historical data and a sound basis for them actually exists (e.g., pharmaceutical manufacture).
4. Big consultancies used their existing relationships to sell unsound (fluff) risk methods, squeezing out analysts with sound methods (a charge leveled at Arthur Andersen, McKinsey, Bain, and KPMG).
5. Quantitative risk analyses of subjective type commonly don’t involve training or calibration of those giving expert opinions, thereby resulting in incoherent (in the Bayesian sense) belief systems.
6. Groupthink and bad management override rational input into risk assessment (subprime mortgage, space shuttle Challenger).
7. Risk management is equated with regulatory compliance (banking operations, hospital medicine, pharmaceuticals, side-effect of Sarbanes-Oxley).
8. Some professionals refuse to accept any formal approach to risk management (medical practitioners and hospitals).

While these criticisms may involve some degree of sour grapes, they have considerable merit in my view, and partially explain the decline in quality of risk management. I’ve worked in risk analysis involving uranium processing, nuclear weapons handling, commercial and military aviation, pharmaceutical manufacture, closed-circuit scuba design, and mountaineering. If the above complaints are valid in these circles – and they are –  it’s easy to believe they plague areas where softer risk methods reign.

Several books and scores of papers specifically address the problems of simple risk-score matrices, often dressed up in fancy clothes to look rigorous. The approach has been shown to have dangerous flaws by many analysts and scholars, e.g., Tony Cox, Sam Savage, Douglas Hubbard, and Laura-Diana Radu. Cox shows examples where risk matrices assign higher qualitative ratings to quantitatively smaller risks. He shows that risks with negatively correlated frequencies and severities can result in risk-matrix decisions that are worse than random decisions. Such methods are also very prone to range-compression errors. Most interestingly, in my experience, the stratification (highly likely, somewhat likely, moderately likely, etc.) inherent in risk matrices assumes a common interpretation of terms across a group. Many studies (e.g., Kahneman and Tversky; Budescu, Broomell, and Por) show that large differences in the way people understand such phrases dramatically affect their judgments of risk. Thus risk matrices create the illusion of communication and agreement where neither is present.
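
Cox’s first point is easy to reproduce. In the toy example below (category cutoffs and risk figures are invented), a 4×4 matrix ranks a frequent-but-cheap risk above a rare-but-expensive one whose expected loss is roughly ten times larger:

```python
# Invented category cutoffs for a hypothetical 4x4 risk matrix.
def likelihood_cat(p):
    return 1 if p < 0.05 else 2 if p < 0.25 else 3 if p < 0.75 else 4

def severity_cat(cost):
    return (1 if cost < 10_000 else 2 if cost < 100_000
            else 3 if cost < 1_000_000 else 4)

risks = {
    "frequent, cheap": (0.90, 2_000),     # (annual probability, cost of loss)
    "rare, expensive": (0.04, 500_000),
}

for name, (p, cost) in risks.items():
    matrix_score = likelihood_cat(p) * severity_cat(cost)
    print(f"{name}: matrix score {matrix_score}, "
          f"expected loss ${p * cost:,.0f}")
# The matrix scores the frequent-cheap risk 4 and the rare-expensive risk 3,
# while their expected losses are $1,800 and $20,000 respectively.
```

The inversion comes from the coarse category boundaries: the $500,000 loss and the 4% probability each land just inside a lower bin, and the multiplication of bin numbers discards the order-of-magnitude difference in expected loss.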

Nevertheless, the risk matrix has been institutionalized. It is embraced by government (MIL-STD-882), standards bodies (ISO 31000), and professional societies (Project Management Institute (PMI), ISACA/COBIT). Hubbard’s opponents argue that if risk matrices are so bad, why do so many people use them – an odd argument, to say the least. ISO 31000, in my view, isn’t a complete write-off. In places, it rationally addresses risk as something that can be managed through reduction of likelihood, reduction of consequences, risk sharing, and risk transfer. But elsewhere it redefines risk as mere uncertainty, thereby reintroducing the positive/negative risk mess created by economist Frank Knight a century ago. Worse, from my perspective, like the guidelines of PMI and ISACA, it gives credence to structure in the guise of knowledge and to process posing as strategy. In short, it sets up a lot of wickets which, once navigated, give a sense that risk has been managed when in fact it may have been merely discussed.

A small benefit of the subprime mortgage meltdown of 2008 was that it became obvious that the financial risk management revolution of the 1990s was a farce, exposing a need for deep structural changes. I don’t follow financial risk analysis closely enough to know whether that’s happened. But the negative example made public by the housing collapse has created enough anxiety in other disciplines to cause some welcome reappraisals.

There is surprising and welcome activity in nuclear energy. Several organizations involved in nuclear power generation have acknowledged that we’ve lost competency in this area, and have recently identified paths to address the challenges. The Nuclear Energy Institute recently noted that while Fukushima is seen as evidence that probabilistic risk analysis (PRA) doesn’t work, if Japan had actually embraced PRA, the high risk of tsunami-induced disaster would have been immediately apparent. Late last year the Nuclear Energy Institute submitted two drafts to the U.S. Nuclear Regulatory Commission addressing lost ground in PRA and identifying a substantive path forward: Reclaiming the Promise of Risk-Informed Decision-Making and Restoring Risk-Informed Regulation. These documents acknowledge that the promise of PRA has been stunted by distrust of the method, focus on compliance instead of science, external audits by unqualified teams, and the above-mentioned Fukushima fallacy.

Likewise, the FDA, often criticized for over-regulating and over-reach – confusing efficacy with safety – has shown improvement in recent years. It has revised its decades-old process validation guidance to focus more on verification, scientific evidence and risk analysis tools rather than validation and documentation. The FDA’s ICH Q9 (Quality Risk Management) guidelines discuss risk, risk analysis and risk management in terms familiar to practitioners of “hard” risk analysis, even covering fault tree analysis (the “hardest” form of PRA) in some detail. The ASTM E2500 standard moves these concepts further forward. Similarly, the FDA’s recent guidelines on mobile health devices seem to accept that the FDA’s reach should not exceed its grasp in the domain of smart phones loaded with health apps. Reading between the lines, I take it that after years of fostering the notion that risk management equals regulatory compliance, the FDA realized that it must push drug safety far down into the ranks of the drug makers in the same way the FAA did with aircraft makers (with obvious success) in the late 1960s. Fostering a culture of safety rather than one of compliance distributes the work of providing safety and reduces the need for regulators to anticipate every possible failure of every step of every process in every drug firm.

This is real progress. There may yet be hope for financial risk management.



