More Philosophy for Engineers

In a post on Richard Feynman and philosophy of science, I suggested that engineers would benefit from a class in philosophy of science. A student recently asked if I meant to say that a course in philosophy would make engineers better at engineering – or better philosophers. Better engineers, I said.

Here’s an example from my recent engineering work that drives the point home.

I was reviewing an FMEA (Failure Mode and Effects Analysis) prepared by a high-priced consultancy and encountered many cases where a critical failure mode had been deemed highly improbable on the grounds that the FMEA covered a mature system with no known failures.

How many hours of operation has this system actually seen, I asked. The response indicated about 10,000 hours total.

I said on that basis we could assume a failure rate of about one per 10,001 hours. The direct cost of the failure was about $1.5 million. Thus the “expected value” (or “mathematical expectation” – the probabilistic cost of the loss) of this failure mode in a 160-hour mission is $24,000, or about $300,000 per year (excluding any secondary effects such as damaged reputation). With that number in mind, I asked the client if they wanted to consider further mitigation by adding monitoring circuitry.
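The arithmetic above can be sketched in a few lines. The missions-per-year figure below is my assumption (the post implies roughly 2,000 operating hours per year, since $300,000 / $24,000 = 12.5 missions); the rate, cost, and mission length are from the post.

```python
# Expected-loss sketch for the failure mode discussed in the post.
HOURS_OBSERVED = 10_000
failure_rate = 1 / (HOURS_OBSERVED + 1)  # assumed: ~1 failure per 10,001 hours
direct_cost = 1_500_000                  # dollars per failure (from the post)
mission_hours = 160
missions_per_year = 12.5                 # my assumption: ~2,000 hours/year

per_mission = failure_rate * mission_hours * direct_cost
per_year = per_mission * missions_per_year

print(f"Expected loss per 160-hour mission: ${per_mission:,.0f}")
print(f"Expected loss per year: ${per_year:,.0f}")
```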

I was challenged on the failure rate I used. It was, after all, a mature, ten year old system with no recorded failures of this type.

Here’s where the analytic philosophy course those consultants never took would have been useful.

You simply cannot justify calling a failure mode extremely rare based on evidence that it is at least somewhat rare. All unique events – like the massive rotor failure that took out all three hydraulic systems of a DC-10 in Sioux City – were very rare before they happened.

The authors of the FMEA I was reviewing were using unjustifiable inductive reasoning. Philosopher David Hume debugged this thoroughly in his 1739 A Treatise of Human Nature.

Hume concluded that there simply is no rational or deductive basis for induction, the belief that the future will be like the past.

Hume understood that, despite the lack of justification for induction, betting against the sun rising tomorrow was not a good strategy either. But this is a matter of pragmatism, not of rationality. A bet against the sunrise would mean getting behind counter-induction; and there’s no rational justification for that either.

In the case of the failure mode not yet observed, however, there is ample justification for counter-induction. All mechanical parts and all human operations necessarily have nonzero failure or error rates. In the world of failure modeling, the knowledge that a system is “known pretty good” does not support the proposition that it is “probably extremely good,” no matter how natural the step between them feels.
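There is a standard reliability-engineering way to make this point quantitative, which I’ll sketch here as my own addition (it is not in the post): zero failures in n hours of exposure does not show the rate is near zero, it only bounds it. For a constant (exponential) failure rate, the one-sided 95% upper confidence bound with zero observed failures is −ln(0.05)/n, roughly 3/n – the so-called “rule of three.”

```python
import math

# Zero failures in n hours only BOUNDS the failure rate; it does not make
# the rate "extremely rare." 95% one-sided upper bound with zero failures:
n_hours = 10_000
upper_bound = -math.log(0.05) / n_hours  # failures per hour, ~3/n

print(f"95% upper bound on failure rate: {upper_bound:.2e} per hour")
print(f"i.e., possibly one failure every {1/upper_bound:,.0f} hours")
```

On this view the data are consistent with a failure as often as every ~3,300 hours – worse than the one-per-10,001-hours figure I used, and nowhere near “extremely rare.”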

Hume’s problem of induction, despite the efforts of Immanuel Kant and the McKinsey consulting firm, has not been solved.

A fabulously entertaining – in my view – expression of the problem of induction was given by philosopher Carl Hempel in 1965.

Hempel observed that we tend to take each new observation of a black crow as incrementally supporting the inductive conclusion that all crows are black. Deductive logic tells us that if a conditional statement is true, its contrapositive is also true, since the statement and its contrapositive are logically equivalent. Thus if all crows are black, then all non-black things are non-crows.

It then follows that if each observation of black crows is evidence that all crows are black (compare: each observation of no failure is evidence that no failure will occur), then each observation of a non-black non-crow is also evidence that all crows are black.

Following this line, my red shirt is confirming evidence for the proposition that all crows are black. It’s a hard argument to oppose, but it simply does not “feel” right to most people.

Many try to salvage the situation by suggesting that observing that my shirt is red is in fact evidence that all crows are black, but provides only unimaginably small support to that proposition.

But pushing the thing just a bit further destroys even this attempt at rescuing induction from the clutches of analysis.

If my red shirt gives a tiny bit of evidence that all crows are black, it then also gives equal support to the proposition that all crows are white. After all, my red shirt is a non-white non-crow.


  1. #1 by criticalenviro on January 16, 2015 - 2:35 pm

    I learned more from your piece than I remember learning from my engineering philosophy course in college! Thank you for the great article.

  2. #2 by SuperJesus on April 20, 2015 - 10:56 am

    It sounds like the consultants could also use a Logic 101 class, and definitely one in probability and statistics. I’m surprised they pushed so hard against your observations.

  3. #3 by disenchantedscholar on July 6, 2015 - 9:09 pm

    Reblogged this on Philosophies of a Disenchanted Scholar and commented:
    True.

  4. #4 by Anonymous on February 20, 2016 - 7:58 am

    I hope you’re not given to wearing black shirts.

  5. #5 by W Scott Dunbar on August 29, 2018 - 7:02 am

    Great piece.
    Hume debunked induction. He didn’t debug it.

  6. #6 by Matthew Squair on September 27, 2021 - 7:07 pm

    Hempel’s example is a good example of the trouble you get into in mixing up deductive steps and inductive steps in an argument. Observing the colour of your shirt is also an observation, and just as inductive for example. John Rushby in his work on safety arguments recommends explicitly reifying the inductive part as a visible assumption plus a deductive claim so that we can separately argue about whether that assumption is actually valid.

