Archive for category Probability and Risk

More Philosophy for Engineers

In a post on Richard Feynman and philosophy of science, I suggested that engineers would benefit from a class in philosophy of science. A student recently asked if I meant to say that a course in philosophy would make engineers better at engineering – or better philosophers. Better engineers, I said.

Here’s an example from my recent work as an engineer  that drives the point home.

I was reviewing an FMEA (Failure Mode Effects Analysis) prepared by a high-priced consultancy and encountered many cases where a critical failure mode had been deemed highly improbable on the basis that the FMEA was for a mature system with no known failures.

How many hours of operation has this system actually seen, I asked. The response indicated about 10 thousand hours total.

I said on that basis we could assume a failure rate of about one per 10,001 hours. The direct cost of the failure was about $1.5 million. Thus the “expected value” (or “mathematical expectation” – the probabilistic cost of the loss) of this failure mode in a 160 hour mission is $24,000 or about $300,000 per year (excluding any secondary effects such as damaged reputation). With that number in mind, I asked the client if they wanted to consider further mitigation by adding monitoring circuitry.
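For readers who want the arithmetic spelled out, here is a minimal sketch of that expected-loss calculation. The failure rate, mission length, and cost are the figures quoted above; the annual figure assumes roughly 2,000 operating hours per year, which is my assumption for illustration, not the client's number.

failure_rate = 1 / 10_000      # assumed failures per operating hour, from ~10,000 failure-free hours
cost_of_failure = 1_500_000    # direct cost of the failure, in dollars
mission_hours = 160
annual_hours = 2_000           # assumption: roughly 2,000 operating hours per year
expected_loss_per_mission = failure_rate * mission_hours * cost_of_failure
expected_loss_per_year = failure_rate * annual_hours * cost_of_failure
print(f"Expected loss per 160-hour mission: ${expected_loss_per_mission:,.0f}")  # ~$24,000
print(f"Expected loss per year: ${expected_loss_per_year:,.0f}")                 # ~$300,000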

I was challenged on the failure rate I used. It was, after all, a mature, ten year old system with no recorded failures of this type.

Here’s where the analytic philosophy course those consultants never took would have been useful.

You simply cannot justify calling a failure mode extremely rare based on evidence that it is at least somewhat rare. All unique events – like the massive rotor failure that took out all three hydraulic systems of a DC-10 in Sioux City – were very rare before they happened.

The authors of the FMEA I was reviewing were using unjustifiable inductive reasoning. Philosopher David Hume debugged this thoroughly in his 1739 A Treatise of Human Nature.

Hume concluded that there simply is no rational or deductive basis for  induction, the belief that the future will be like the past.

Hume understood that, despite the lack of justification for induction, betting against the sun rising tomorrow was not a good strategy either. But this is a matter of pragmatism, not of rationality. A bet against the sunrise would mean getting behind counter-induction; and there’s no rational justification for that either.

In the case of the failure mode not yet observed, however, there is ample justification for counter-induction. All mechanical parts and all human operations necessarily have nonzero failure or error rates. In the world of failure modeling, the observation “known to be pretty good” does not support the proposition “probably extremely good”, no matter how natural the step between them feels.
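For those who want a number to attach to “no failures observed,” one common illustration – not something the consultancy did, and offered here only as a sketch under a constant-failure-rate assumption – is a confidence upper bound on the rate. With zero failures in T hours, exp(-λT) = α at the bound, which gives the familiar “rule of three” at 95% confidence:

import math
T = 10_000     # failure-free operating hours (the figure from the FMEA discussion)
alpha = 0.05   # one minus the confidence level
lambda_upper = -math.log(alpha) / T   # 95% upper bound on the hourly failure rate
print(f"95% upper bound: {lambda_upper:.1e} failures per hour")       # ~3.0e-4
print(f"...i.e., about one failure per {1/lambda_upper:,.0f} hours")  # ~3,300 hours

Note that this bound is several times worse than the one-per-10,000-hours figure I used, which is the point: 10,000 clean hours cannot certify a failure mode as extremely improbable.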

Hume’s problem of induction, despite the efforts of Immanuel Kant and the McKinsey consulting firm, has not been solved.

A fabulously entertaining – in my view – expression of the problem of induction was given by philosopher Carl Hempel in 1965.

Hempel observed that we tend to take each new observation of a black crow as incrementally supporting the inductive conclusion that all crows are black. Deductive logic tells us that if a conditional statement is true, its contrapositive is also true, since the statement and its contrapositive are logically equivalent. Thus if all crows are black then all non-black things are non-crow.

It then follows that if each observation of black crows is evidence that all crows are black (compare: each observation of no failure is evidence that no failure will occur), then each observation of a non-black non-crow is also evidence that all crows are black.

Following this line, my red shirt is confirming evidence for the proposition that all crows are black. It’s a hard argument to oppose, but it simply does not “feel” right to most people.

Many try to salvage the situation by suggesting that observing that my shirt is red is in fact evidence that all crows are black, but provides only unimaginably small support to that proposition.

But pushing the thing just a bit further destroys even this attempt at rescuing induction from the clutches of analysis.

If my red shirt gives a tiny bit of evidence that all crows are black, it then also gives equal support to the proposition that all crows are white. After all, my red shirt is a non-white non-crow.



A New Era of Risk Management?

The quality of risk management has mostly fallen for the past few decades. There are signs of change for the better.

Risk management is a broad field; many kinds of risk must be managed. Risk is usually defined in terms of probability and cost of a potential loss. Risk management, then, is the identification, assessment and prioritization of risks and the application of resources to reduce the probability and/or cost of the loss.

The earliest and most accessible example of risk management is insurance, first documented in about 1770 BC in the Code of Hammurabi (e.g., rules 23, 24, and 48). The Code addresses both risk mitigation, through threats and penalties, and minimizing loss to victims, through risk pooling and insurance payouts.

Insurance was the first example of risk management getting serious about risk assessment. Both the frequentist and quantified subjective risk measurement approaches (see recent posts on belief in probability) emerged from actuarial science developed by the insurance industry.

Risk assessment, through its close relatives, decision analysis and operations research, got another boost from World War II. Big names like Alan Turing, John Von Neumann, Ian Fleming (later James Bond author) and teams at MIT, Columbia University and Bletchley Park put quantitative risk analyses of several flavors on the map.

Today, “risk management” applies to security guard services, portfolio management, terrorism and more. Oddly, much of what is called risk management involves no risk assessment at all, and is therefore inconsistent with the above definition of risk management, paraphrased from Wikipedia.

Most risk assessment involves quantification of some sort. Actuarial science and the probabilistic risk analyses used in aircraft design are probably the “hardest” of the hard risk measurement approaches. Here, “hard” means the numbers used in the analyses come from measurements of real-world values like auto accidents, lightning strikes, cancer rates, and the historical failure rates of computer chips, valves and motors. “Softer” analyses, still mathematically rigorous, involve quantified subjective judgments in tools like Monte Carlo analyses and Bayesian belief networks. As the code breakers and submarine hunters of WWII found, trained experts giving calibrated opinions can surprise everyone, even themselves.

A much softer, yet still quantified (barely), approach to risk management using expert opinion is the risk matrix familiar to most people: on a scale of 1 to 4, rate the following risks…, etc. Many researchers have shown it to be worse than useless in many cases, for a variety of reasons. Yet it remains the core of risk analysis in many areas of business and government, across many types of risk (reputation, credit, project, financial and safety). Finally, some of what is called risk management involves no quantification, ordering, or classifying. Call it expert intuition or qualitative audit.

These soft categories of risk management most arouse the ire of independent and small-firm risk analysts. Common criticisms by these analysts include:

1. “Risk management” has become jargonized and often involves no real risk analysis.
2. Quantification of risk in some spheres is plagued by garbage-in-garbage-out. Frequency-based models are taken as gospel, and believed merely because they look scientific (e.g., Fukushima).
3. Quantified/frequentist risk analyses are not used in cases where historical data and a sound basis for them actually exists (e.g., pharmaceutical manufacture).
4. Big consultancies used their existing relationships to sell unsound (fluff) risk methods, squeezing out analysts with sound methods (as has been alleged of Arthur Andersen, McKinsey, Bain, and KPMG).
5. Quantitative risk analyses of subjective type commonly don’t involve training or calibration of those giving expert opinions, thereby resulting in incoherent (in the Bayesian sense) belief systems.
6. Groupthink and bad management override rational input into risk assessment (subprime mortgage, space shuttle Challenger).
7. Risk management is equated with regulatory compliance (banking operations, hospital medicine, pharmaceuticals, side-effect of Sarbanes-Oxley).
8. Some professionals refuse to accept any formal approach to risk management (medical practitioners and hospitals).

While these criticisms may involve some degree of sour grapes, they have considerable merit in my view, and partially explain the decline in quality of risk management. I’ve worked in risk analysis involving uranium processing, nuclear weapons handling, commercial and military aviation, pharmaceutical manufacture, closed-circuit scuba design, and mountaineering. If the above complaints are valid in these circles – and they are –  it’s easy to believe they plague areas where softer risk methods reign.

Several books and scores of papers specifically address the problems of simple risk-score matrices, often dressed up in fancy clothes to look rigorous. The approach has been shown to have dangerous flaws by many analysts and scholars, e.g., Tony Cox, Sam Savage, Douglas Hubbard, and Laura-Diana Radu. Cox shows examples where risk matrices assign higher qualitative ratings to quantitatively smaller risks. He shows that risks with negatively correlated frequencies and severities can result in risk-matrix decisions that are worse than random decisions. Also, such methods are obviously very prone to range compression errors. Most interestingly, in my experience, the stratification (highly likely, somewhat likely, moderately likely, etc.) inherent in risk matrices assumes a common interpretation of terms across a group. Many tests (e.g., Kahneman & Tversky and Budescu, Broomell, and Por) show that large differences in the way people understand such phrases dramatically affect their judgments of risk. Thus risk matrices create the illusion of communication and agreement where neither is present.
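To make the ranking-inversion problem concrete, here is a toy sketch with numbers of my own invention (not Cox’s examples): a coarse 1-to-4 matrix score can rank the quantitatively smaller risk above the larger one.

def matrix_score(likelihood_band, severity_band):
    # Typical risk-matrix score: product of ordinal 1-4 bands
    return likelihood_band * severity_band

# Risk A: fairly likely, modest loss
risk_a = {"p_per_year": 0.20, "loss": 50_000, "likelihood_band": 3, "severity_band": 2}
# Risk B: rated rare/moderate on the matrix, but the loss is huge
risk_b = {"p_per_year": 0.01, "loss": 20_000_000, "likelihood_band": 1, "severity_band": 3}
for name, r in (("A", risk_a), ("B", risk_b)):
    expected_loss = r["p_per_year"] * r["loss"]
    print(f"Risk {name}: matrix score {matrix_score(r['likelihood_band'], r['severity_band'])}, "
          f"expected annual loss ${expected_loss:,.0f}")
# Risk A scores 6 against B's 3, yet B's expected annual loss ($200,000)
# is twenty times A's ($10,000). The matrix inverts the true ranking.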

Nevertheless, the risk matrix has been institutionalized. It is embraced by government (MIL-STD-882), standards bodies (ISO 31000), and professional societies (Project Management Institute (PMI), ISACA/COBIT). Hubbard’s opponents argue that if risk matrices are so bad, why do so many people use them – an odd argument, to say the least. ISO 31000, in my view, isn’t a complete write-off. In places, it rationally addresses risk as something that can be managed through reduction of likelihood, reduction of consequences, risk sharing, and risk transfer. But elsewhere it redefines risk as mere uncertainty, thereby reintroducing the positive/negative risk mess created by economist Frank Knight a century ago. Worse, from my perspective, like the guidelines of PMI and ISACA, it gives credence to structure in the guise of knowledge and to process posing as strategy. In short, it sets up a lot of wickets which, once navigated, give a sense that risk has been managed when in fact it may have been merely discussed.

A small benefit of the subprime mortgage meltdown of 2008 was that it became obvious that the financial risk management revolution of the 1990s was a farce, exposing a need for deep structural changes. I don’t follow financial risk analysis closely enough to know whether that’s happened. But the negative example made public by the housing collapse has created enough anxiety in other disciplines to cause some welcome reappraisals.

There is surprising and welcome activity in nuclear energy. Several organizations involved in nuclear power generation have acknowledged that we’ve lost competency in this area, and have recently identified paths to address the challenges. The Nuclear Energy Institute recently noted that while Fukushima is seen as evidence that probabilistic risk analysis (PRA) doesn’t work, if Japan had actually embraced PRA, the high risk of tsunami-induced disaster would have been immediately apparent. Late last year the Nuclear Energy Institute submitted two drafts to the U.S. Nuclear Regulatory Commission addressing lost ground in PRA and identifying a substantive path forward: Reclaiming the Promise of Risk-Informed Decision-Making and Restoring Risk-Informed Regulation. These documents acknowledge that the promise of PRA has been stunted by distrust of the method, focus on compliance instead of science, external audits by unqualified teams, and the above-mentioned Fukushima fallacy.

Likewise, the FDA, often criticized for over-regulating and over-reach – confusing efficacy with safety – has shown improvement in recent years. It has revised its decades-old process validation guidance to focus more on verification, scientific evidence and risk analysis tools rather than validation and documentation. The FDA’s ICH Q9 (Quality Risk Management) guidelines discuss risk, risk analysis and risk management in terms familiar to practitioners of “hard” risk analysis, even covering fault tree analysis (the “hardest” form of PRA) in some detail. The ASTM E2500 standard moves these concepts further forward. Similarly, the FDA’s recent guidelines on mobile health devices seem to accept that the FDA’s reach should not exceed its grasp in the domain of smart phones loaded with health apps. Reading between the lines, I take it that after years of fostering the notion that risk management equals regulatory compliance, the FDA realized that it must push drug safety far down into the ranks of the drug makers in the same way the FAA did with aircraft makers (with obvious success) in the late 1960s. Fostering a culture of safety rather than one of compliance distributes the work of providing safety and reduces the need for regulators to anticipate every possible failure of every step of every process in every drug firm.

This is real progress. There may yet be hope for financial risk management.



Common-Mode Failure Driven Home

In a recent post I mentioned that probabilistic failure models are highly vulnerable to wrong assumptions of independence of failures, especially in redundant system designs. Common-mode failures in multiple channels defeat the purpose of redundancy in fault-tolerant designs. Likewise, if probability of non-function is modeled (roughly) as the historical rate of a specific component failure times the length of time we’re exposed to that failure, we need to establish that exposure time with great care. If only one channel is in control at a time, failure of the other channel can go undetected. Monitoring systems can detect such latent failures. But then failures of the monitoring system tend to be latent.

For example, your car’s dashboard has an engine oil warning light. That light ties to a monitor that detects oil leaks from worn gaskets or loose connections before the oil level drops enough to cause engine damage. Without that dashboard warning light, the exposure time to an undetected slow leak is months – the time between oil changes. The oil warning light alerts you to the condition, giving you time to deal with it before your engine seizes.

But what if the light is burned out? This failure mode is why the warning lights flash on for a short time when you start your car. In theory, you’d notice a burnt-out warning light during the startup monitor test. If you don’t notice it, the exposure time for an oil leak becomes the exposure time for failure of the warning light. Assuming you change your engine oil every 9 months, loss of the monitor potentially increases the exposure time from minutes to months, multiplying the probability of an engine problem by several orders of magnitude. Aircraft and nuclear reactors contain many such monitoring systems. They need periodic maintenance to ensure they’re able to detect failures. The monitoring systems rarely show problems in the check-ups; and this fact often lures operations managers, perceiving that inspections aren’t productive, into increasing maintenance intervals. Oops. Those maintenance intervals were actually part of the system design, derived from some quantified level of acceptable risk.
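Here is a rough sketch of that exposure-time arithmetic. The leak rate and driving hours are invented for illustration; only the structure of the calculation matters.

leak_rate_per_hour = 1e-5        # invented rate of developing a slow oil leak while driving
# With a working warning light, an undetected leak persists only minutes --
# roughly the rest of the drive on which the light comes on.
exposure_with_monitor = 0.2      # hours
# With a burnt-out (latent-failed) light, the leak can persist until the next
# oil change: say 9 months at ~30 driving hours per month.
exposure_without_monitor = 9 * 30    # hours
p_with = leak_rate_per_hour * exposure_with_monitor
p_without = leak_rate_per_hour * exposure_without_monitor
print(f"P(undetected leak), monitor working: {p_with:.1e}")
print(f"P(undetected leak), monitor failed:  {p_without:.1e}")
print(f"Ratio: about {p_without / p_with:,.0f}x")    # roughly three orders of magnitude here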

Common-mode failures get a lot of press when they’re dramatic. They’re often used by risk managers as evidence that quantitative risk analysis of all types doesn’t work. Fukushima is the current poster child of bad quantitative risk analysis. Despite everyone’s agreement that any frequencies or probabilities used in Fukushima analyses prior to the tsunami were complete garbage, the result for many was to conclude that probability theory failed us. Opponents of risk analysis also regularly cite the Tacoma Narrows Bridge collapse, the Chicago DC-10 engine-loss disaster, and the Mount Osutaka 747 crash as examples. But none of the affected systems in these disasters had been justified by probabilistic risk modeling. Finally, common-mode failure is often cited in cases where it isn’t the whole story, as with the Sioux City DC-10 crash. More on Sioux City later.

On the lighter side, I’d like to relate two incidents – one personal experience, one from a neighbor – that exemplify common-mode failure and erroneous assumptions of exposure time in everyday life, to drive the point home with no mathematical rigor.

I often ride my bicycle through affluent Marin County. Last year I stopped at the Molly Stone grocery in Sausalito, a popular biker stop, to grab some junk food. I locked my bike to the bike rack, entered the store, grabbed a bag of chips and checked out through the fast lane with no waiting. Ninety seconds at most. I emerged to find no bike, no lock and no thief.

I suspect that, as a risk man, I unconsciously model all risk as the combination of some numerical rate (occurrence per hour) times some exposure time. In this mental model, the exposure time to bike theft was 90 seconds. I likely judged the rate to be more than zero but still pretty low, given broad daylight, the busy location with lots of witnesses, and the affluent community. Not that I built such a mental model explicitly of course, but I must have used some unconscious process of that sort. Thinking like a crook would have served me better.

If you were planning to steal an expensive bike, where would you go to do it? Probably a place with a lot of expensive bikes. You might go there and sit in your pickup truck with a friend waiting for a good opportunity. You’d bring a 3-foot long set of chain link cutters to make quick work of the 10 mm diameter stem of a bike lock. Your friend might follow the victim into the store to ensure you were done cutting the lock and throwing the bike into the bed of your pickup to speed away before the victim bought his snacks.

After the fact, I had much different thoughts about this specific failure rate. More important, what is the exposure time when the thief is already there waiting for me, or when I’m being stalked?

My neighbor just experienced a nerve-racking common mode failure. He lives in a San Francisco high-rise and drives a Range Rover. His wife drives a Mercedes. He takes the Range Rover to work, using the same valet parking-lot service every day. He’s known the attendant for years. He takes his house key from the ring of vehicle keys, leaving the rest on the visor for the attendant. He waves to the attendant as he leaves the lot on his way to the office.

One day last year he erred in thinking the attendant had seen him. Someone else, now quite familiar with his arrival time and habits, got to his Range Rover while the attendant was moving another car. The thief drove out of the lot without the attendant noticing. Neither my neighbor nor the attendant had reason for concern. This gave the enterprising thief plenty of time. He explored the glove box, finding the registration, which includes my neighbor’s address. He also noticed the electronic keys for the Mercedes.

The thief enlisted a trusted colleague, and drove the stolen car to my neighbor’s home, where they used the electronic garage entry key tucked neatly into its slot in the visor to open the gate. They methodically spiraled through the garage, periodically clicking the button on the Mercedes key. Eventually they saw the car lights flash and they split up, each driving one vehicle out of the garage using the provided electronic key fobs. My neighbor lost two cars through common-mode failures. Fortunately, the whole thing was on tape and the lawmen were effective; no vehicle damage.

Should I hide my vehicle registration, or move to Michigan?

—————–

In theory, there’s no difference between theory and practice. In practice, there is.


Belief in Probability – Part 2

Last time I started with my friend Willie’s bold claim that he doesn’t believe in probability; then I gave a short history of probability. I observed that defining probability is a controversial matter, split between objective and subjective interpretations. About the only thing these interpretations agree on is that probability values range from zero to one, where P = 1 means certainty. When you learn probability and statistics in school, you are getting the frequentist interpretation, which is considered objective. Frequentism relies on directly equating observed frequencies with probabilities. In this model, the probability of an event exactly equals the limit of the relative frequency of that outcome in an infinitely large number of trials.

The problem with this interpretation in practice – in medicine, engineering, and gambling machines – isn’t merely the impossibility of an infinite number of trials. A few million trials might be enough. Running trials works for dice but not for earthquakes and space shuttles. It also has problems with things like cancer, where plenty of frequency data exists. Frequentism requires placing an individual specimen into a relevant population or reference class. Doing this is easy for dice, harder for humans. A study says that as a white male of my age I face a 7% probability of having a stroke in the next 10 years. That’s based on my membership in the reference class of white males. If I restrict that set to white men who don’t smoke, it drops to 4%. If I account for good systolic blood pressure and no family history of atrial fibrillation or ventricular hypertrophy, it drops another percent or so.

Ultimately, if I limit my population to a set of one (just me) and apply the belief that every effect has a cause (i.e., some real-world chunk of blockage causes an artery to rupture), you can conclude that my probability of having a stroke can only be one of two values – zero or one.

Frequentism, as seen by its opponents, too closely ties probabilities to observed frequencies. They note that the limit-of-relative-frequency concept relies on induction, which might mean it’s not so objective after all. Further, those frequencies are unknowable in many real-world cases. Still further, finding an individual’s correct reference class is messy, possibly downright subjective. Finally, no frequency data exists for earthquakes that haven’t happened yet. All that seems to do some real damage to frequentism’s utility score.

The subjective interpretations of probability propose fixes to some of frequentism’s problems. The most common subjective interpretation is Bayesianism, which itself comes in several flavors. All subjective interpretations see probability as a degree of belief in a specific outcome, as held by a rational person. Think of it as a fair bet with odds. The odds you’re willing to accept for a bet on your race horse exactly equals your degree of belief in that horse’s ability to win. If your filly were in the same race an infinite number of times, you’d expect to break even, based on those odds, whether you bet on her or against her.

Subjective interpretations rely on logical coherence and belief. The core of Bayesianism, for example, is that beliefs must 1) originate with a numerical probability estimate, 2) adhere to the rules of probability calculation, and 3) follow an exact rule for updating belief estimates based on new evidence. The second rule deals with the common core of probability math used in all interpretations. These include things like how to add and multiply probabilities and Bayes theorem, not to be confused with Bayesianism, the belief system. Bayes theorem is an uncontroversial equation relating the probability of A given B to the probability of B given A, together with the unconditional probabilities of A and B. The third rule of Bayesianism is similarly computational, addressing how belief is updated after new evidence. The details aren’t needed here. Note that while Bayesianism is generally considered subjective, it is still computationally exacting.
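Since the second and third rules are purely computational, a minimal sketch of a single Bayesian update may help. The hypothesis and the numbers are generic placeholders of my own, not drawn from any analysis discussed here.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Returns P(H | evidence) via Bayes theorem
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

prior = 0.01                                      # initial degree of belief in hypothesis H
posterior = bayes_update(prior, 0.95, 0.05)       # one piece of supporting evidence
posterior2 = bayes_update(posterior, 0.95, 0.05)  # the posterior becomes the new prior
print(f"After one observation: {posterior:.3f}")    # ~0.16
print(f"After two observations: {posterior2:.3f}")  # ~0.79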

The obvious problem with all subjective interpretations, particularly as applied to engineering problems, is that they rely, at least initially, on expert opinion. Life and death rides on the choice of experts and the value of their opinions. As Richard Feynman noted in his minority report on the Challenger, official rank plays too large a part in the choice of experts, and the higher (and less technical) the rank, the more optimistic the probability estimates.

The engineering risk analysis technique most consistent with the frequentist (objective) interpretation of probability is fault tree analysis. Other risk analysis techniques, some embodied in mature software products, are based on Bayesian (subjective) philosophy.

When Willie said he didn’t believe in probability, he may have meant several things. I’ll try to track him down and ask him, but I doubt the incident stuck in his mind as it did mine. If he meant that he doesn’t believe that probability was useful in system design, he had a rational belief; but I disagree with it. I doubt he meant that though.

Willie may have been leaning toward the ties between probability and redundancy in system design. Probability is the calculus by which redundancy is allocated to redundant systems. Willie may think that redundancy doesn’t yield the expected increase in safety because having more equipment means more things than can fail. This argument fails to recognize that, ideally speaking, a redundant path does double the chance of having a component failure, but squares the probability of system failure. That’s a good thing, since squaring a number less than one makes it smaller. In other words, the benefit in reducing the chance of system failure vastly exceeds the deficit of having more components to repair. If that was his point, I disagree in principle, but accept that redundancy is no excuse for lack of component design excellence.
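The claim is easy to check with toy numbers, valid only under the independence assumption discussed in the next paragraph:

p_channel = 1e-3            # invented probability that one channel fails during the exposure period
p_loss_single = p_channel   # one channel: system function lost if it fails
p_loss_dual = p_channel**2  # two independent redundant channels: both must fail
# Maintenance burden roughly doubles, but the probability of losing the
# system function drops by a factor of 1/p_channel -- here, a thousandfold.
print(f"Single channel: {p_loss_single:.0e}")
print(f"Dual redundant: {p_loss_dual:.0e}")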

He may also think system designers can be overly confident of the exponential increase in modeled probability of system reliability that stems from redundancy. That increase in reliability is only valid if the redundancy creates no common mode failures and no latent (undetected for unknown time intervals) failures of  redundant paths that aren’t currently operating. If that’s his point, then we agree completely. This is an area where pairing the experience and design expertise of someone like Willie with rigorous risk analysis using fault trees yields great systems.

Unlike Willie, Challenger-era NASA gave no official statement on its belief in probability. Feynman’s report points to NASA’s use of numeric probabilities for specific component failure modes. The Rogers Commission report says that NASA management talked about degrees of probability. From this we might guess that NASA believed in probability and its use in measuring risk. On the other hand, the Rogers Commission report also gives examples of NASA’s disbelief in probability’s usefulness. For example, the report’s Technical Management section states that, “NASA has rejected the use of probability on the basis that such techniques are insufficient to assure that adequate safety margins can be applied to protect the lives of the crew.”

Regardless of NASA’s beliefs about probability, it’s clear that NASA didn’t use fault tree analysis for the space shuttle program prior to the Challenger disaster. Nor did it use Bayesian inference methods, any hybrid probability model, or any consideration of probability beyond opinions about failures of critical items. Feynman was livid about this. A Bayesian (subjective, but computational) approach would have at least forced NASA to make its subjective judgments explicit and would have produced a rational model of its judgments. Post-Challenger Bayesian analyses, including one by NASA, varied widely, but all indicated unacceptable risk. NASA has since adopted risk management approaches more consistent with those used in commercial and military aircraft design.

An obvious question arises when you think about using a frequentist model on nearly one-of-a-kind vehicles. How accurate can any frequency data be for something as infrequent as a shuttle flight? Accurate enough, in my view. If you see the shuttle as monolithic and indivisible, the data is too sparse; but not if you view it as a system of components, most of which, like o-ring seals, have close analogs in common use, with known failure rates.

The FAA mandated probabilistic risk analyses of the frequentist variety (effectively mandating fault trees) in 1968. Since then flying has become safe, by any measure. In no other endeavor has mankind made such an inherently dangerous activity so safe. Aviation safety progressed through many innovations, redundant systems being high on the list. Probability is the means by which you allocate redundancy. You can’t get great aircraft systems without designers like Willie. Nor can you get them without probability. Believe it or not.



Belief in Probability – Part 1

Years ago in a meeting on design of a complex, redundant system for a commercial jet, I referred to probabilities of various component failures. In front of this group of seasoned engineers, a highly respected, senior member of the team interjected, “I don’t believe in probability.” His proclamation stopped me cold. My first thought was what kind of backward brute would say something like that, especially in the context of aircraft design. But Willie was no brute. In fact he is a legend in electro-hydro-mechanical system design circles; and he deserves that status. For decades, millions of fearless fliers have touched down on the runway, unaware that Willie’s expertise played a large part in their safe arrival. So what can we make of Willie’s stated disbelief in probability?

Friends and I have been discussing risk science a lot lately – diverse aspects of it including the Challenger disaster, pharmaceutical manufacture in China, and black swans in financial markets. I want to write a few posts on risk science, as a personal log, and for whoever else might be interested. Risk science relies on several different understandings of risk, which in turn rely on the concept of probability. So before getting to risk, I’m going to jot down some thoughts on probability. These thoughts involve no computation or equations, but they do shed some light on Willie’s mindset. First a bit of background.

Oddly, the meaning of the word probability involves philosophy much more than it does math, so Willie’s use of belief might be justified. People mean very different things when they say probability. The chance of rolling a 7 is conceptually very different from the chance of an earthquake in Missouri this year. Probability is hard to define accurately. A look at its history shows why.

Mathematical theories of probability only first appeared in the late 17th century. This is puzzling, since gambling had existed for thousands of years. Gambling was enough of a problem in the ancient world that the Egyptian pharaohs, Roman emperors and Achaemenid satraps outlawed it. Such legislation had little effect on the urge to deal the cards or roll the dice. Enforcement was sporadic and halfhearted. Yet gamblers failed to develop probability theories. Historian Ian Hacking  (The Emergence of Probability) observes, “Someone with only the most modest knowledge of probability mathematics could have won himself the whole of Gaul in a week.”

Why so much interest with so little understanding? In European and middle eastern history, it seems that neither Platonism (determinism derived from ideal forms) nor the Judeo/Christian/Islamic traditions (determinism through God’s will) had much sympathy for knowledge of chance. Chance was something to which knowledge could not apply. Chance meant uncertainty, and uncertainty was the absence of knowledge. Knowledge of chance didn’t seem to make sense. Plus, chance was the tool of immoral and dishonest gamblers.

The term probability is tied to the modern understanding of evidence. In medieval times, and well into the renaissance, probability literally referred to the level of authority –  typically tied to the nobility –  of a witness in a court case. A probable opinion was one given by a reputable witness. So a testimony could be highly probable but very incorrect, even false.

Through empiricism, central to the scientific method, the notion of diagnosis (inference of a condition from key indicators) emerged in the 17th century. Diagnosis allowed nature to be the reputable authority, rather than a person of status. For example, the symptom of skin spots could testify, with various degrees of probability, that measles had caused it. This goes back to the notion of induction and inference from the best explanation of evidence, which I discussed in past posts. Pascal, Fermat and Huygens brought probability into the respectable world of science.

But outside of science, probability and statistics still remained second class citizens right up to the 20th century. You used these tools when you didn’t have an exact set of accurate facts. Recognition of the predictive value of probability and statistics finally emerged when governments realized that death records had uses beyond preserving history, and when insurance companies figured out how to price premiums competitively.

Also around the turn of  the 20th century, it became clear that in many realms – thermodynamics and quantum mechanics for example – probability would take center stage against determinism. Scientists began to see that some – perhaps most – aspects of reality were fundamentally probabilistic in nature, not deterministic. This was a tough pill for many to swallow, even Albert Einstein. Einstein famously argued with Niels Bohr, saying, “God does not play dice.” Einstein believed that some hidden variable would eventually emerge to explain why one of two identical atoms would decay while the other did not. A century later, Bohr is still winning that argument.

What we mean when we say probability today may seem uncontroversial – until you stake lives on it. Then it gets weird, and definitions become important. Defining probability is a wickedly contentious matter, because wildly conflicting conceptions of probability exist.  They can be roughly divided into the objective and subjective interpretations. In the next post I’ll focus on the frequentist interpretation, which is objective, and the subjectivist interpretations as a group. I’ll look at the impact of accepting – or believing in – each of these on the design of things like airliners and space shuttles from the perspectives of Willie, Richard Feynman, and NASA. Then I’ll defend my own views on when and where to hold various beliefs about probability.

Autobrake diagram courtesy of Biggles Software.



Is Fault Tree Analysis Deductive?

An odd myth persists in systems engineering and risk analysis circles. Fault tree analysis (FTA), and sometimes fault trees themselves, are said to be deductive. FMEAs are called inductive. How can this be?

By fault trees I mean Boolean logic modeling of unwanted system states by logical decomposition of equipment fault states into combinations of failure states of more basic components. You can read more on fault tree analysis and its deductive nature at Wikipedia. By FMEA (Failure Mode & Effects Analysis) I mean recording all the things that can go wrong with the components of a system. Writers who find fault trees deductive also find FMEAs, their complement, to be inductive. I’ll argue here that building fault trees is not a deductive process, and that there is possible harm in saying so. Secondarily, I’ll offer that while FMEA creation involves inductive reasoning, the point carries little weight, since the rest of engineering is inductive reasoning too.

Word meanings can vary with context; but use of the term deductive is consistent across math, science, law, and philosophy. Deduction is the process of drawing a logically certain conclusion about a particular instance from a rule or premise about the general. Assuming all men are mortal, if Socrates is a man, then he is mortal. This is true regardless of the meaning of the word mortal. Its truth is certain, even if Socrates never existed, and even if you take mortal to mean living forever.

Example from a software development website:

FMECA is an inductive analysis of system failure, starting with the presumed failure of a component and analyzing its effect on system stability: “What will happen if valve A sticks open?” In contrast, FTA is a deductive analysis, starting with potential or actual failures and deducing what might have caused them: “What could cause a deadlock in the application?”

The well-intended writer says we deduce the causes of the effects in question. Deduction is not up to that task. When we infer causes from observed effects, we are using induction, not deduction.

How did the odd claims that fault trees and FTAs are deductive arise? It might trace to William Vesely, NASA’s original fault tree proponent. Vesely sometimes used the term deductive in his introductions to fault trees. If he meant that the process of reducing fault trees into cut sets (sets of basic events or initiators) is deductive, he was obviously correct. But calculation isn’t the critical aspect of fault trees; constructing them is where the effort and need for diligence lie. Fault tree software does the math. If Vesely saw the critical process of constructing fault trees and supplying them with numerical data (often arduous, regardless of software) as deductive – which I doubt – he was certainly wrong. 

Inductive reasoning, as used in science, logic and philosophy, means inferring general rules or laws from observations of particular instances. Mathematical induction, despite its name, actually refers to deduction, as mathematicians are well aware; it is deductive reasoning with a confusing title. Induction in science and engineering stems from our need to predict future events. We form theories about how things will behave in the future based on observations of how similar things behaved in the past. As I discussed regarding Bacon vs. Descartes, science is forced into the realm of induction because deduction never makes contact with the physical world – it lives in the mind.

Inductive reasoning is exactly what goes on when you construct a fault tree. You are making inferences about future conditions based on modeling and historical data – a purely inductive process. The fact that you use math to solve fault trees does not make fault trees any more deductive than the presence of math in lab experiments makes empirical science deductive.
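To make the division of labor concrete, here is a minimal sketch of the part of FTA that truly is calculational – evaluating a small AND/OR tree once basic-event probabilities are in hand. The structure and the numbers are invented; deciding what belongs in the tree, and with what rates, is the inductive work no software does for you.

# Invented basic events and per-exposure probabilities; independence assumed.
basic = {"pump_fails": 1e-4, "valve_sticks": 2e-4,
         "sensor_fails": 5e-4, "monitor_fails": 1e-3}

def AND(*ps):                  # all inputs must fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def OR(*ps):                   # any input failing suffices (exact, via complements)
    out = 1.0
    for p in ps:
        out *= (1 - p)
    return 1 - out

# Top event: hydraulic supply lost AND the loss goes undetected.
p_supply_lost = OR(basic["pump_fails"], basic["valve_sticks"])
p_undetected = AND(basic["sensor_fails"], basic["monitor_fails"])
print(f"P(top event) = {AND(p_supply_lost, p_undetected):.1e}")   # ~1.5e-10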

Does this matter?

It’s easy enough to fix this technical point in descriptions of fault tree analysis. We should do so, if merely to avoid confusing students. But more importantly, quantitative risk analysis – including FTA – has its enemies. They range from several top consultancies selling subjective, risk-score matrix methodologies dressed up in fancy clothes (see Tony Cox’s SIRA presentation on this topic) to some of NASA’s top management – those flogged by Richard Feynman in his minority report on the Challenger disaster. The various criticisms of fault tree analysis say it is too analytical and correlates poorly with the real world. Sound familiar? It echoes a feud between the heirs of Bacon (induction) and the heirs of Descartes (deduction). Some of fault trees’ foes find them overly deductive. They then imply that errors found in past quantitative analyses impugn objectivity itself, preferring subjective analyses based on expert opinion. This curious conclusion would not follow, even if fault tree analyses were deductive, which they are not.

——————————————

Science is the belief in the ignorance of experts. – Richard Feynman




Feynman’s Minority Report and Top-Down Design

On reading my praise of Richard Feynman, a fellow systems engineer and INCOSE member (International Council on Systems Engineering) suggested that I read Feynman’s Minority Report to the Space Shuttle Challenger Enquiry. He said I might not like it. I read it, and I don’t like it, not from the perspective of a systems engineer.

Challenger explosion, Jan. 28, 1986

I should be clear on what I mean by systems engineering. I know of three uses of the term: first, the engineering of embedded systems, i.e., firmware (not relevant here); second, an organizational management approach (relevant, but secondary); third, a discipline aimed at design of assemblies of components to achieve a function that is greater than those of its constituents (bingo). Definitions given by others are useful toward examining Feynman’s minority report on the Challenger.

Simon Ramo, the “R” in TRW and inventor of the ICBM, put it like this: “Systems engineering is a discipline that concentrates on the design and application of the whole (system) as distinct from the parts. It involves looking at a problem in its entirety, taking into account all the facets and all the variables and relating the social to the technical aspect.”

Howard Eisner of GWU says, “Systems engineering is an iterative process of top-down synthesis, development, and operation of a real-world system that satisfies, in a near optimal manner, the full range of requirements for the system.” 

INCOSE’s definition is pragmatic (pleasantly, as their guide tends a bit toward strategic-management jargon): “Systems engineering is an interdisciplinary approach and means to enable the realization of successful systems.”

Feynman reaches several sound conclusions about root causes of the flight 51-L Challenger disaster. He observes that NASA’s safety culture had critical flaws and that its management seemed to indulge in fantasy, ignoring the conclusions, advice and warnings of diligent systems and component engineers. He gives specific examples of how NASA management grossly exaggerated the reliability of many systems and components in the shuttle. On this point he concludes, “reality must take precedence over public relations, for nature cannot be fooled.” He describes a belief by management that because an anomaly was without consequence in a previous mission, it is therefore safe. Most importantly, he cites the erroneous use by NASA management of the concept of factor of safety around the O-ring seals between the two lower segments of the solid rocket motors (the Rogers Commission also agrees that failure of these O-rings was the root cause of the disaster). A NASA report on seal erosion in an earlier mission (flight 51-C) had assigned a safety factor of three, based on the seals having eroded only one third of the amount thought to be critical. Feynman replies that the O-rings were not designed to erode, and hence the factor-of-safety concept did not apply. Seal erosion was a failure of the design, catastrophic or not; there was no safety factor at all. “Erosion was a clue that something was wrong; not something from which safety could be inferred.”

But later Feynman incorrectly states that establishing a hypothetical propulsion system failure rate of 1 in 100,000 missions would require an inordinate number of tests to determine with confidence. Here he seems to grasp neither the exponential impact of redundancy on reliability nor the fact that fault tree analysis could confidently calculate low system failure rates from the historical failure rates of large populations of constituent components, combined with the output of FMEAs (failure mode effects analyses) on those components in the relevant systems. This error does not impact Feynman’s conclusions about the root cause of the Challenger disaster. I mention it here because Feynman might be viewed as an authoritative source on systems engineering, but is here doing a poor job of systems engineering.
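A hedged sketch of what such a calculation looks like (numbers invented, independence assumed, common-mode failures set aside for the moment): a per-mission probability on the order of 1 in 100,000 can follow from two redundant channels whose individual rates are established from fleet-wide component history, with no need to fly 100,000 missions.

# Invented numbers; independence assumed (common-mode failures would erode this).
p_channel_per_mission = 3e-3      # each channel's failure probability per mission,
                                  # estimated from large fleets of similar components
p_both = p_channel_per_mission ** 2
print(f"P(both redundant channels fail): {p_both:.0e}")     # 9e-06
print(f"About 1 in {1/p_both:,.0f} missions")               # ~1 in 111,000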

Discussing the liquid fuel engines, Feynman then introduces the concept of top-down design, which he criticizes. It isn’t clear exactly what he means by top-down. The most charitable reading would be a critique of NASA top management’s overruling the judgments of engineering management and engineers; but, on closer reading, it’s clear this cannot be his meaning:

The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually…

The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components.  Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes.

All mechanical-system design is necessarily top-down, in the sense of top-down used by Eisner, above. This use of the term is a metaphor for progressive functional decomposition from mission requirements down to component requirements. Engineers cannot, for example, size a shuttle’s fuel pumps based on the functional requirement of having five men and two women orbit the earth to deploy a communications satellite. The fuel pump’s performance requirements ultimately emerge from successive derivations of requirements for subsystem design candidates. This design process is top-down, whether the various layers of subsystem design candidates are themselves newly conceived systems or ones that are already mature products (“off the shelf”). Wikipedia’s article and several software methodology sites incorrectly refer to design using off-the-shelf components as bottom-up – not involving functional decomposition. They err by failing to consider that piecing together existing subsystems toward a grander purpose still first requires functional decomposition of that grander purpose into lower-level requirements that serve as a basis for selecting existing subsystems. Simply put, you’ve got to know what you want a thing to do, even if you build that thing from available parts – software or hardware – in order to select those parts. Using off-the-shelf software subsystems still requires functional decomposition of the desired grander system.

F-117 frontal view

Off-the-shelf is a common strategy in aerospace, primarily for cost and schedule reasons. The Lockheed F-117, despite its unique design, used avionics taken from the C-130 and the F-16, brakes from the F-15, landing gear from the T-38, and other parts from commercial and military aircraft. This was for expediency. For the F-117, these off-the-shelf components still had to go through the necessary requirements validation, functional and stress testing, certification, and approval by all of the “ilities” (reliability, maintainability, supportability, durability, etc) required to justify their use in the vehicle – just as if they were newly designed. Likewise for the Challenger, the choice of new design vs. off-the-shelf should have had no impact on safety or reliability if proper systems engineering occurred. Whether its constituents were new designs or off-the-shelf, the shuttle’s propulsion system is necessarily – and desirably – the result of top-down design. Feynman may simply mean that the design and testing phases were rushed, that omissions were made, and that testing was incomplete. Other evidence suggests this; but these omissions are not a negative consequence of top-down design, which is the only sound process for the design of aircraft and other systems of systems.

It is difficult to imagine any sound basis for Feynman’s use of – and defense of – bottom-up design other than the selection of off-the-shelf components, which, as mentioned above, still entails functional decomposition (top-down design). Other uses of the term appear in discussions of software methodologies. I also found a handful of academic papers that incorrectly – incoherently, in my view – equate top-down with analysis and deduction, and bottom-up with synthesis and induction. The erroneous equation of analysis with deductive reasoning pops up in Design Thinking and social science literature (e.g., at socialresearchmethods.net). It fails to realize that analysis as a means of inferring cause from observed result (i.e., what made this happen?) always entails inductive reasoning. Geometry is deduction; science and engineering are inherently inductive.

The use of bottom-up shows up in software circles in a disparaging sense. It describes a state of system growth that happens with no conscious design beyond that of an original seed. It is non-design, in a sense. Such “organic growth” happens in enterprise software when new features, not envisioned during the original design, are later bolted-on. This can stem from naïve mismanagement by those unaware of the damage done to maintainability and further extensibility of the software system, or through necessity in a merger/acquisition scenario where the system’s owners are aware of the consequences but have no other alternatives. This scenario obviously does not apply to the hardware or software of the Challenger; and if it did, such bottom-up “design” would be a defect of the system, not a virtue.

Detail of 737 Gear Bay
Hydro-mechanical system components in 737 gear bay

Aerospace has in its legacy an attitude – as opposed to a design method – sometimes called a bottom-up mindset. I’ve encountered this as a form of resistance to methodological system-design-for-safety and the application of redundancy. In my experience it came from expert designers of electro-hydro-mechanical subsystems. A legendary aerospace systems designer once told me with a straight face, “I don’t believe in probability.” You can trace this type of thinking back to the rough and ready pioneers of manned flight. Charles Lindbergh, for example, said something along the lines of, “give me one good engine and one good pilot.” Implicit in this mentality is the notion that safety emerges from component quality rather than from system design. The failure rates of the best aerospace components tend to vary from those of average components by factors of two or ten, whereas redundancy has an exponential effect. Feynman’s criticism of top-down and endorsement of bottom-up – whatever he meant by it – could unfortunately be seen as support for this harmful and oddly persistent notion of bottom-up.

Toward the end of Feynman’s report, he reveals another misunderstanding about design of life-critical systems. In the section on avionics, he faults NASA for using 15-year-old software and hardware designs, concluding that the electronics are obsolete. He claims that modern chip sets are more reliable and of higher quality. This criticism runs contrary to his complaint about top-down design of the main engines, and it misses a key point. The improvements in reliability of newer chips would contribute only negligibly toward improved availability of the quad-redundant system containing them. More importantly, older designs of electronic components are often used in avionics precisely because they are old, mature designs. Accelerated-life testing of electronics is known to be tricky business. We use old-design chips because there is enough historical usage data to determine their failure rates without relying on accelerated-life testing. Long ago at McDonnell Douglas I oversaw use of the Intel 87C196 chip for a system on the C-17 aircraft. The Intel rep told me that this was the first use of the Intel 8086-derivative chip in a military aircraft. We defended its use, over the traditional but less capable Motorola chips, on the basis that the then 10+ year history of 8086’s in similar environments  was finally sufficient to establish a statistical failure rate usable in our system availability calculations. Interestingly, at that time NASA had already been using 8086 chips in the shuttle for years.

Feynman’s minority report on the Challenger contains misunderstandings and technical errors from the perspective of a systems engineer. While these errors may have little impact on his findings, they should be called out because of the possible influence they may have on future generations of engineers. The tyranny of pedigree, as we saw with Galileo, can extend a wrong idea’s life for generations.

That said, Feynman makes several key points about the psychology of engineering management that deserve much more attention than they get in engineering circles. First among these in my mind is the fallacy of induction from near-misses viewed as successes, thereby producing undue confidence about future missions.

 “His legs were weary, but his mind was at ease, free from the presentiment of change. The sense of security more frequently springs from habit than from conviction, and for this reason it often subsists after such a change in the conditions as might have been expected to suggest alarm. The lapse of time during which a given event has not happened is, in the logic of habit, constantly alleged as a reason why the event should never happen, even when the lapse of time is precisely the added condition which makes the event imminent. A man will tell you that he has worked in a mine for forty years unhurt by an accident, as a reason why he should apprehend no danger, though the roof is beginning to sink; and it is often observable that the older a man gets, the more difficult it is to retain a believing conception of his own death.”

 – from Silas Marner, by George Eliot (Mary Ann Evans Cross), 1861

—–

Text and aircraft photos copyright 2013 by William Storage. NASA shuttle photos public domain.



Intuitive Probabilities – Conjunction Malfunction

In a recent post I wrote about Vic, who might not look like a Christian, but probably is one. The Vic example reminded me of a famous study of unintuitive probabilities done in 1983. Amos Tversky and Daniel Kahneman surveyed students at the University of British Columbia using something similar to my Vic puzzle:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?

A.    Linda is a bank teller.
B.    Linda is a bank teller and is active in the feminist movement.

About 90% of students said (B) was more probable. Mathematicians point out that, without needing to know anything about Linda, (A) has to be more probable than (B). Thinking otherwise is the conjunction fallacy. It’s simple arithmetic. The probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (possibility set) of the conjunction is included in the extension of its constituents. In a coin toss, the probability of heads has to exceed the probability of heads AND that it will rain today.

Putting numbers to Linda, one might guess there’s 1% probability that Linda, based on the description given, is a bank teller, but a 99% probability that she’s a feminist. Even so, 1% is still a bigger number (probability) than 1% AND 99%, which means 1% times 99% – which is a tad less than 1%.
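In code, with the same illustrative guesses (and assuming, purely for the multiplication, that the two attributes are independent given the description – the inequality itself needs no such assumption):

p_teller = 0.01      # guessed P(Linda is a bank teller | description)
p_feminist = 0.99    # guessed P(Linda is a feminist | description)
p_both = p_teller * p_feminist      # 0.0099, a tad less than 0.01
print(p_teller >= p_both)           # True -- and P(A & B) <= P(A) holds regardless of independence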

So why does it seem like (B) is more likely? Lots of psychological and semantic reasons have been proposed. For example, in normal communication we usually obey some unspoken principle of relevance; a sane person would not mention Linda’s marital status, political views and values if they were irrelevant to the question at hand – which somehow seems to have something to do with Linda’s profession. Further, humans learn pattern recognition and apply heuristics. It may be fair inductive reasoning, based on past evidence, to conclude that women active in the feminist movement are more likely than those who are not to have majored in philosophy, to be single, and to be concerned with discrimination. This may be a reasonable inference, or it may just prove you’re a sexist pig for even thinking such a thing. I attended a lecture at UC Berkeley where I was told that any statement by men connecting attributes (physical, ideological or otherwise) to any group (except white men) constituted sexism, racism or some otherism. This made me wonder how feminists are able to recognize other feminists.

In any case, there are reasons students would not give the mathematically correct answer about Linda beyond the possibility that they are mathematically illiterate. Tversky and Kahneman tried various wordings of the problem, getting much the same results. Eventually they arrived at a statement of the problem that makes it clear they were after a mathematical interpretation:

Argument 1: Linda is more likely to be a bank teller than she is to be a feminist bank teller, because every feminist bank teller is a bank teller, but some bank tellers are not feminists, and Linda could be one of them.

Argument 2: Linda is more likely to be a feminist bank teller than she is to be a bank teller, because she resembles an active feminist more than she resembles a bank teller.

In this case 65% of students chose the resemblance argument (2) over the extensional argument (1), despite its internal logical flaw. Note that argument 1 explains exactly why the conjunction must be less probable, while argument 2 doesn’t really make much sense.

Whatever the reason we tend to botch such probability challenges, there are cases in engineering that are surprisingly analogous to the Linda problem. When building a fault tree (see fig. 1), for example, your heuristics can make you miss event dependencies and common causes between related failures. If an aircraft hydraulic brake system accumulator fails by exploding instead of by leaking, and in doing so severs a hydraulic line, an “AND” relationship disappears, so that what appeared to be P(A&B) becomes simply P(A). Such logic errors can make calculated probabilities of catastrophe off by factors of thousands or millions. This is bad when lives are at stake. Fortunately, engineers apply great skill and discipline to modeling this sort of thing. We who fly owe our lives to good engineers. Linda probably does too.
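A minimal numerical sketch of that collapse, with hypothetical failure probabilities rather than real aircraft data, shows how badly a missed common cause can distort the result.

    # A = accumulator fails; B = hydraulic line is lost. Hypothetical numbers.
    p_a = 1.0e-4      # assumed probability of accumulator failure per mission
    p_b = 1.0e-4      # assumed probability of independent hydraulic line loss

    # Naive AND gate: both events required, treated as independent.
    p_naive = p_a * p_b                     # 1e-08 -- looks comfortably remote

    # With the common cause: an explosive accumulator failure also severs the line.
    frac_explosive = 0.3                    # assumed fraction of accumulator failures that explode
    p_actual = p_a * frac_explosive + p_a * (1 - frac_explosive) * p_b

    print(p_naive)    # 1e-08
    print(p_actual)   # ~3e-05, thousands of times higher

With these assumed numbers the naive gate understates the hazard by a factor of about three thousand – exactly the kind of error described above.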

– – –

Fig. 1. Segment of a fault tree for loss of braking in a hypothetical 8-wheeled aircraft using FTA software I authored in 1997. This fault tree addresses only a single Class IV hazard in aircraft braking – uncontrolled departure from the end of the runway due to loss of braking during a rejected takeoff. It calculates the probability of this “top event” as being more remote than the one-per-billion flight hours probability limit specified by the guidelines of FAA Advisory Circular 25.1309-1A, 14CFR/CS 25.1309, and SAE ARP4754. This fault tree, when simplified by standard techniques, results in about 200,000 unique cut sets – combinations of basic events leading to the catastrophic condition.

Segment of a fault tree for uncontrolled runway departure of an 8-wheeled aircraft

– – –

 “Uncertainty is an unavoidable aspect of the human condition.”

 – opening sentence of “Extensional Versus Intuitive Reasoning” by Tversky and Kahneman, Psychological Review, Oct. 1983


Intuitive Probabilities

Meet Vic. Vic enjoys a form of music that features heavily distorted guitars, slow growling vocals, atonality, frequent tempo changes, and what is called “blast beat” drumming in the music business. His favorite death metal bands are Slayer, Leviticus, Dark Tranquility, Arch Enemy, Behemoth, Kreator, Venom, and Necrophagist.

Vic has strong views on theology and cosmology. Which is more likely?

  1. Vic is a Christian
  2. Vic is a Satanist

I’ve taught courses on probabilistic risk analysis over the years, and have found that very intelligent engineers, much more experienced than I, often find probability extremely unintuitive, especially when very large (or very small) numbers are involved. Other aspects of probability and statistics are unintuitive for other interesting reasons. More on those later.

The matter of Vic’s belief system involves several possible biases and unintuitive aspects of statistics. While pondering the issue of Vic’s beliefs, you can enjoy Slayer’s Raining Blood. Then check out my take on judging Vic’s beliefs below the embedded YouTube video – which, by the way, demonstrates all of the attributes of death metal listed above.

(Embedded video: Slayer, “Raining Blood”)

Vic is almost certainly a Christian. Any other conclusion would involve the so-called base-rate fallacy, where the secondary, specific facts (affinity for death metal) somehow obscure the primary, base-rate relative frequency of Christians versus Satanists. The Vatican claims over one billion Catholics, and most US Christians are not Catholic. Even with papal exaggeration, we can guess that there are well over a billion Christians on earth. I know hundreds if not thousands of them. I don’t know any Satanists personally, and don’t know of any public figures who are (there is conflicting evidence on Marilyn Manson). A quick Google search suggests a range of numbers of Satanists in the world, the largest of which is under 100,000. Further, I don’t ever remember seeing a single Satanist meeting facility, even in San Francisco. A web search also reveals a good number of conspicuously Christian death metal bands, including Leviticus, named above. Without getting into the details of Bayes Theorem, it is probably obvious that the relative frequencies of Christians and Satanists govern the outcome. And judging Vic by his appearance is likely very unreliable.
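For the curious, the Bayes arithmetic looks roughly like this. The head counts are rough and the death-metal likelihoods are pure guesses, deliberately stacked in favor of the Satanist hypothesis, to show that base rates dominate anyway.

    n_christians = 2.0e9       # rough worldwide head count
    n_satanists = 1.0e5        # generous upper end of the web estimates

    p_metal_given_christian = 0.001   # guess: 1 in 1,000 Christians likes death metal
    p_metal_given_satanist = 0.5      # guess: half of all Satanists do

    # Unnormalized posterior weights (the numerators of Bayes' theorem)
    w_christian = n_christians * p_metal_given_christian    # 2,000,000
    w_satanist = n_satanists * p_metal_given_satanist       # 50,000

    p_christian_given_metal = w_christian / (w_christian + w_satanist)
    print(round(p_christian_given_metal, 3))    # ~0.976

Even with the deck stacked this way, the posterior still says Vic is overwhelmingly likely to be a Christian.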

South Park Community Presbyterian Church, Fairplay, Colorado



Wisdom and Madness of the Yelp Crowd

I’ve been digging deep into Yelp and other sites that collect crowd ratings lately, and I’ve discovered wondrous and fascinating things. I’ve been doing this to learn more about when and how crowds are wise. Potential inferences about “why” are alluring too. I looked at two main groups of reviews: those for doctors and medical services, and those for restaurants and entertainment.

As doctors, dentists and those in certain other service categories are painfully aware, Yelp ratings do not follow the expected distribution of values. This remains true despite Yelp’s valiant efforts to weed out shills, irate one-offs and spam.

Just how skewed are Yelp ratings when viewed in the aggregate? I took a fairly deep look and concluded that big bias lurks in the big data of Yelp. I’ll get to some hard numbers and take a crack at some analysis. First a bit of background.

Yelp data comes from a very non-random sample of a population. One likely source of this adverse selection is that those who are generally satisfied with service tend not to write reviews. Many who choose to write reviews want their ratings to be important, so they tend to avoid ratings near the mean value. Another source of selection bias stems from Yelp’s huge barrier – in polling terms anyway – to voting. Yelp users have to write a review before they can rate, and most users can’t be bothered. Further, those who vote are Yelp members who have (hopefully) already used the product or service, which means there’s a good chance they read other reviews before writing theirs. This brings up the matter of independence of members.

Plenty of tests – starting with Francis Galton’s famous ox-weighing study in 1906 – have shown that the median value of answers to quantitative questions in a large random crowd is often more accurate than answers by panels of experts. Crowds do very well at judging the number of jellybeans in the jar and reasonably well at guessing the population of Sweden, the latter if you take the median value rather than the mean. But gross misapplications of this knowledge permeate the social web. Fans of James Surowiecki’s “The Wisdom of Crowds” very often forget that independence is an essential condition of crowd wisdom. Without that essential component, crowds can do things like burn witches and run up stock prices during the dot-com craze. Surowiecki acknowledges the importance of this from the start (page 5):

There are two lessons to be drawn from the experiments. In most of them the members of the group were not talking to each other or working on a problem together.

Influence and communication love connections; but crowd wisdom relies on independence of its members, not collaboration between them. Surowiecki also admits, though rather reluctantly, that crowds do best in a subset of what he calls cognition problems – specifically, objective questions with quantitative answers. Surowiecki has great hope for use of crowds in subjective cognition problems along with coordination and cooperation problems. I appreciate his optimism, but don’t find his case for these very convincing.

In Yelp ratings, the question being answered is far from objective, despite the discrete star ratings. Subjective questions (quality of service) cannot be made objective by constraining answers to numerical values. Further, there is no agreement on what quality is really being measured. For doctors, some users rate bedside manner, some the front desk, some the outcome of ailment, and some billing and insurance handling. Combine that with self-selection bias and non-independence of users and the wisdom of the crowd – if present – can have difficulty expressing itself.

Two doctors on my block have mean Yelp ratings of 3.7 and 3.0 stars on a scale of 1 to 5. Their sample standard deviations are 1.7 and 1.9 (mean absolute deviations: 1.2 and 1.8). Since the maximum possible population standard deviation for a rating on a 1-to-5 scale is 2.0, everything about this doctor data should probably be considered next to useless; its mean and even its median aren’t reliable. The distribution of ratings isn’t merely skewed; it’s bimodal in these two cases and for half of the doctors in San Francisco. That means the rating survey yields highly conflicting results for doctors. Here are the Yelp scores of doctors in my neighborhood.

Yelp rating distribution for 9 nearby doctors
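A small sketch shows where that 2.0 ceiling comes from and how a bimodal rating set bumps against it; the ratings below are illustrative, not any particular doctor’s data.

    import math
    import statistics

    # Illustrative bimodal rating set resembling the doctor distributions above
    ratings = [1, 1, 1, 1, 3, 5, 5, 5, 5, 5]

    mu = statistics.mean(ratings)                 # 3.2
    sample_sd = statistics.stdev(ratings)         # ~1.99 (n-1 denominator)

    # On a 1-to-5 scale, the most spread-out population with mean mu puts all of
    # its mass at 1 and 5, giving sigma_max(mu) = sqrt((mu - 1) * (5 - mu)).
    # This peaks at mu = 3, where sigma_max = 2.0.
    sigma_max = math.sqrt((mu - 1) * (5 - mu))    # ~1.99

    print(mu, round(sample_sd, 2), round(sigma_max, 2))

When the sample standard deviation sits essentially on the theoretical ceiling, the average rating tells you almost nothing about the experience you’re likely to have.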

I’ve been watching the doctor ratings over the last few years. A year ago, Dr. E’s ratings looked rather like Dr. I’s ratings look today. Unlike restaurants, which experience a rating warm-start on Yelp, the 5-star ratings of doctors grow over time at a higher rate than their low ratings. Doctors, some having been in business for decades, appear to get better as Yelp gets more popular. Three possible explanations come to mind. The first deals with competition. The population of doctors, like any provider in a capitalist system, is not fixed. Those who fare poorly in ratings are likely to get fewer customers and go out of business. The crowd selects doctors for quality, so in a mature system, most doctors, restaurants, and other businesses will have above-average ratings.

The second possible explanation for the change in ratings over time deals with selection, not in the statistics sense (not adverse selection) but in the social-psychology sense (clan or community formation). This would seem more likely to apply to restaurants than to doctors, but the effect on urban doctors may still be large. People tend to select friends or communities of people like themselves – ethnic, cultural, political, or otherwise. Referrals by satisfied customers tend to bring in more customers who are more likely to be satisfied. Businesses end up catering to the preferences of a group, which pre-selects customers more likely to be satisfied and give high ratings.

A third reason for the change over time could be a social-influence effect. People may form expectations based on the dominant mood of reviews they read before their visit. So later reviews might greatly exaggerate any preferences visible in early reviews.

Automotive services don’t fare much better on Yelp than doctors and dentists. But rating distributions for music venues, hotels and restaurants, though skewed toward high ratings, aren’t bimodal like the doctor data. The reasons given above for positive skew in doctors’ ratings are likely also at work in restaurants and hotels. Yelp ratings for restaurants give clues about those who contribute them.

I examined about 10,000 of my favorite California restaurants, excluding fast food chains. I was surprised to find that the standard deviation of ratings for each restaurant increased – relative to theoretical maximum values – as average ratings increased. If that’s hard to follow in words, the scatter plot below will drive the point home. It shows average rating vs. standard deviation for each of 10,000 restaurants. Ratings are concentrated at the right side of the plot, and are clustered fairly near the theoretical maximum standard deviation (the gray elliptical arc enclosing the data points) for any given average rating. Colors indicate rough total rating counts contributing to each spot on the plot – yellow for restaurants with 5 or fewer ratings, red for those with 40 or fewer, and blue for those with more than 40 ratings. (Some points lie outside the ellipse because it represents maximum population standard deviations while the points are sample standard deviations.)

The second scatter plot shows average rating vs. standard deviation for the Yelp users who rated these restaurants, with the same color scheme. Similarly, it shows that most raters rate high on average, but each rater still tends to use the most extreme ratings that can yield his or her average value. For example, many raters whose average rating is 4 stars use far more 3 and 5-star ratings than nature would expect.

Scatter plot of standard deviation vs. average Yelp rating for about 10,000 restaurants

Scatter plot of standard deviation vs. average rating for users who rated 10,000 restaurants
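Why can a point fall outside the arc at all? The arc is a population bound, while the plotted values are sample standard deviations (n-1 denominator), which can exceed it when rating counts are tiny. A hypothetical two-rating restaurant makes the point.

    import statistics

    ratings = [1, 5]    # a hypothetical restaurant with exactly two ratings

    pop_sd = statistics.pstdev(ratings)    # 2.0 -- exactly on the arc
    samp_sd = statistics.stdev(ratings)    # ~2.83 -- outside the arc

    print(pop_sd, round(samp_sd, 2))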

Next I looked at the rating behavior of users who rate restaurants. The first thing that jumps out of Yelp user data is that the vast majority of Yelp restaurant ratings are made by users who have rated only one to five restaurants. A very small number have rated more than twenty.

Rating counts of restaurant raters by activity level

A look at the comparative distribution of the three activity levels (1 to 5, 6 to 20, and over 20 ratings) as percentages of category totals shows that those who rate least are much more likely to give extreme ratings. This is a considerable amount of bias, spread across 100,000 users making half a million ratings. In a 2009 study of Amazon users, Vassilis Kostakos found results similar to what we’re seeing here for Bay Area restaurants.

Normalized rating counts of restaurant raters by activity level
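The normalization behind that chart amounts to simple bookkeeping: bucket each rater by how many restaurants they’ve rated, then express each bucket’s star histogram as a percentage of that bucket’s total. A rough sketch, with hypothetical data standing in for the Yelp feed:

    from collections import Counter, defaultdict

    # Hypothetical input: user id -> list of star ratings that user gave
    ratings_by_user = {
        "u1": [5, 5, 1],
        "u2": [4, 3, 4, 5, 3, 4, 2, 4],
        "u3": [1],
    }

    def activity_bucket(n_ratings):
        if n_ratings <= 5:
            return "1-5"
        if n_ratings <= 20:
            return "6-20"
        return "over 20"

    histograms = defaultdict(Counter)
    for user, stars in ratings_by_user.items():
        histograms[activity_bucket(len(stars))].update(stars)

    # Express each bucket's counts as percentages of its own total so buckets of
    # very different sizes can be compared on one chart.
    for bucket, counts in sorted(histograms.items()):
        total = sum(counts.values())
        pct = {star: round(100.0 * n / total, 1) for star, n in sorted(counts.items())}
        print(bucket, pct)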

Can any practical wisdom be drawn from this observation of crowd bias? Perhaps a bit. For those choosing doctors based on reviews, we can suggest that doctors with low rating counts, having both very high and very low ratings, will likely look better a year from now. Restaurants with low rating counts (count of ratings, not value) are likely to be more average than their average rating values suggest (no negative connotation to “average” here). Yelp raters should refrain from hyperbole, especially in their early days of rating. Those putting up rating/review sites should be aware that seemingly small barriers to the process of rating may matter, since the vast majority of raters rate only a few items.

This data doesn’t really give much insight into the contribution of social influence to the crowd bias we see here. That fascinating and important topic is at the intersection of crowdsourcing and social technology. More on that next time.

