In the 1966 song, Love Me I’m a Liberal, protest singer Phil Ochs mocked the American left for insincerely pledging support for civil rights and socialist causes. Using the voice of a liberal hypocrite, Ochs sings that he “hope[s] every colored boy becomes a star, but don’t talk about revolution; that’s going a little too far.” The refrain is, “So love me, love me, love me, I’m a liberal.” Putting Ochs in historical context, he hoped to be part of a major revolution and his anarchic expectations were deflated by moderate democrats. In Ochs’ view, limousine liberals and hippies with capitalist leanings were eroding the conceptual purity of the movement he embraced.
If Ochs were alive today, he probably wouldn’t write software; but if he did he’d feel right at home in faux-agile development situations where time-boxing is a euphemism for scheduling, the scrum master is a Project Manager who calls Agile a process, and a goal has been set for increased iteration velocity and higher story points per cycle. Agile can look a lot like the pre-Agile world these days. Scrum in the hands of an Agile imposter who interprets “incremental” to mean “sequential” makes an Agile software project look like a waterfall.
While it’s tempting to blame the abuse and dilution of Agile on half-converts who endorsed it insincerely – like Phil Ochs’ milquetoast liberals – we might also look for cracks in the foundations of Agile and Scrum (Agile is a set of principles, Scrum is a methodology based on them). After all, is it really fair to demand conformity to the rules of a philosophy that embraces adaptiveness? Specifically, I refer to item 4 in the list of values called out in the Agile Manifesto:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
A better charge against those we think have misapplied Agile might be based on consistency and internal coherence. That is, item 1 logically puts some constraints on item 4. Adapting to a business situation by deciding to value process and tools over individuals can easily be said to violate the spirit of the values. As obvious as that seems, I’ve seen a lot of schedule-driven “Agile teams” bound to rigid, arbitrary coding standards imposed by a siloed QA person, struggling against the current toward a product concept that has never been near a customer. Steve Jobs showed that a successful Product Owner can sometimes insulate himself from real customers; but I doubt that approach is a good bet on average.
It’s probably also fair to call foul on those who “do Agile” without self-organizing teams and without pushing decision-making power down through an organization. Likewise, the manifesto tells us to build projects around highly motivated individuals and give them the environment and trust they need to get the job done. This means we need motivated developers worthy of trust who actually can get the job done, i.e., first-rate developers. Scrum is based on the notion of a highly qualified, self-organizing, self-directed development team. But it’s often used by managers as an attempt to employ, organize, coordinate and direct an under-qualified team. Belief that Scrum can manage and make productive a low-skilled team is widespread. This isn’t the fault of Scrum or Agile but just the current marker of the enduring impulse to buy software developers by the pound.
But another side of this issue might yet point to a basic flaw in Agile. Excellent developers are hard to find. And with a team of excellent developers, any other methodology would work as well. Less competent and less experienced workers might find comfort in rules, thereby having little motivation or ability to respond to change (Agile value no. 4).
As a minor issue with Agile/Scrum, some of the terminology is unfortunate. Backlog traditionally has a negative connotation. Starting a project with a backlog on day one might demotivate some. Sprint surely sounds a lot like pressure is being applied; no wonder backsliding scrum masters use it to schedule. Is Sprint a euphemism for death-march? And of all the sports imagery available, the rugby scrum seems inconsistent with Scrum methodology and Agile values. Would Scrum Servant change anything?
The idea of using a Scrum burn-down chart to “plan” (euphemism for schedule) might warrant a second look too. Scheduling by extrapolation may remove the stress from the scheduling activity; but it’s still highly inductive and the future rarely resembles the past. The final steps always take the longest; and guessing how much longer than average is called “estimating.” Can we reconcile any of this with Agile’s focus on being value-driven, not plan-driven? Project planning, after all, is one of the erroneous assumptions of software project management that gave rise to Agile.
Finally, I see a disconnect between the method of Scrum and the values of Agile. Scrum creates a perverse incentive for developers to continually define sprints that show smaller and smaller bits of functionality. Then a series of highly successful sprints, each yielding a workable product, only asymptotically approaches the Product Owner’s goal.
Are Agile’s days numbered, or is it a good mare needing a better jockey?
“People who enjoy meetings should not be in charge of anything.” – Thomas Sowell
In past consulting work I’ve wrestled with subjective probability values derived from expert opinion. Subjective probability is an interpretation of probability based on a degree of belief (i.e., hypothetical willingness to bet on a position) as opposed to a value derived from measured frequencies of occurrences (related posts: Belief in Probability, More Philosophy for Engineers). Subjective probability is of interest when failure data is sparse or nonexistent, as was the data on catastrophic loss of a space shuttle due to seal failure. Bayesianism is one form of inductive logic aimed at refining subjective beliefs based on Bayes Theorem and the idea of rational coherence of beliefs. A NASA handbook explains Bayesian inference as the process of obtaining a conclusion based on evidence: “Information about a hypothesis beyond the observable empirical data about that hypothesis is included in the inference.” Easier said than done, for reasons listed below.
Bayes Theorem itself is uncontroversial. It is a mathematical expression relating the probability of A given that B is true to the probability of B given that A is true and the individual probabilities of A and B:
P(A|B) = P(B|A) x P(A) / P(B)
If we’re trying to confirm a hypothesis (H) based on evidence (E), we can substitute H and E for A and B:
P(H|E) = P(E|H) x P(H) / P(E)
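As a minimal sketch of the update rule (the prior, likelihood, and evidence probability below are made-up numbers, chosen only for illustration):

```python
def posterior(p_h, p_e_given_h, p_e):
    """P(H|E) = P(E|H) * P(H) / P(E), straight from Bayes Theorem."""
    return p_e_given_h * p_h / p_e

# Hypothetical numbers: a prior belief of 0.3 in H, evidence that is
# likely (0.8) if H is true, and an overall evidence probability of 0.31.
print(round(posterior(0.3, 0.8, 0.31), 3))  # 0.774
```

Since the posterior (0.774) exceeds the prior (0.3), a Bayesian would say E confirms H.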
To be rationally coherent, you’re not allowed to believe the probability of heads to be .6 while believing the probability of tails to be .5; the chances of all possible outcomes must sum to exactly one. Further, for Bayesians, the logical coherence just mentioned (i.e., avoidance of Dutch book arguments) must also hold across time (diachronic coherence) such that once new evidence E on a hypothesis H is found, your believed probability for H given E should equal your prior conditional probability for H given E.
Plenty of good sources explain Bayesian epistemology and practice far better than I could do here. Bayesianism is controversial in science and engineering circles, for some good reasons. Bayesianism’s critics refer to it as a religion. This is unfair. Bayesianism is, however, like most religions, a belief system. My concern for this post is the problems with Bayesianism that I personally encounter in risk analyses. Adherents might rightly claim that problems I encounter with Bayes stem from poor implementation rather than from flaws in the underlying program. Good horse, bad jockey? Perhaps.
1. Subjectively objective
Bayesianism is an interesting mix of subjectivity and objectivity. It imposes no constraints on the subject of belief and very few constraints on the prior probability values. Hypothesis confirmation, for a Bayesian, is inherently quantitative, but the initial hypothesis probabilities and the evaluation of evidence are purely subjective. For Bayesians, evidence E confirms or disconfirms hypothesis H only after we establish how probable H was in the first place. That is, we start with a prior probability for H. After the evidence, confirmation has occurred if the probability of H given E is higher than the prior probability of H, i.e., P(H|E) > P(H). Conversely, E disconfirms H when P(H|E) < P(H). These equations and their math leave business executives impressed with the rigor of objective calculation while directing their attention away from the subjectivity of both the hypothesis and its initial prior.
2. Rational formulation of the prior
Problem 2 follows from the above. Paranoid, crackpot hypotheses can still maintain perfect probabilistic coherence. Excluding crackpots, rational thinkers – more accurately, those with whom we agree – still may have an extremely difficult time distilling their beliefs, observations and observed facts of the world into a prior.
3. Conditionalization and old evidence
This is on everyone’s short list of problems with Bayes. In the simplest interpretation of Bayes, old evidence has zero confirming power. If evidence E was on the books long ago and it suddenly comes to light that H entails E, no change in the probability of H follows. This seems odd – to most outsiders anyway. This problem gives rise to the game where we are expected to pretend we never knew about E and then judge how surprising (confirming) E would have been to H had we not known about it. As with the general matter of maintaining the logical coherence required for the Bayesian program, it is extremely difficult to detach your knowledge of E from the rest of your knowledge of the world. In engineering problem solving, discovering that H implies E is very common.
4. Equating increased probability with hypothesis confirmation
My having once met Hillary Clinton arguably increases the probability that I may someday be her running mate; but few would agree that it is confirming evidence that I will do so. See Hempel’s raven paradox.
5. Stubborn stains in the priors
Bayesians, often citing success in the business of establishing and adjusting insurance premiums, report that the initial subjectivity (discussed in 1, above) fades away as evidence accumulates. They call this the washing-out of priors. The frequentist might respond that with sufficient evidence your belief becomes irrelevant. With historical data (i.e., abundant evidence) they can calculate the probability P of an unwanted event in a frequentist way: P = 1 − e^(−RT), or roughly P ≈ RT for small products of exposure time T and failure rate R (exponential distribution). When our ability to find new evidence is limited, i.e., for modeling unprecedented failures, the prior does not get washed out.
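The frequentist calculation just mentioned is easy to sketch; the rate and exposure below are hypothetical, chosen only to show how close the linear approximation is for small R×T:

```python
import math

def failure_probability(rate, hours):
    """P(at least one failure in T hours) = 1 - exp(-R*T), for constant failure rate R."""
    return 1 - math.exp(-rate * hours)

# Hypothetical: 1 failure per 100,000 hours, over a 160-hour exposure.
r, t = 1e-5, 160
print(failure_probability(r, t))  # ~0.0015987, vs the approximation R*T = 0.0016
```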
6. The catch-all hypothesis
The denominator of Bayes Theorem, P(E), in practice, must be calculated as the sum of the probability of the evidence given the hypothesis plus the probability of the evidence given not the hypothesis:
P(E) = [P(E|H) x P(H)] + [P(E|~H) x P(~H)]
But ~H (“not H”) is not itself a valid hypothesis. It is a family of hypotheses likely containing what Donald Rumsfeld famously called unknown unknowns. Thus calculating the denominator P(E) forces you to pretend you’ve considered all contributors to ~H. So Bayesians can be lured into a state of false choice. The famous example of such a false choice in the history of science is Newton’s particle theory of light vs. Huygens’ wave theory of light. Hint: they are both wrong.
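To see how this false choice plays out numerically, here is a hypothetical sketch: the same evidence E, with the posterior computed once under a guessed P(E|~H), and once assuming an unconsidered hypothesis inside ~H explains E just as well as H does:

```python
p_h = 0.5                 # hypothetical prior for H
p_e_given_h = 0.9         # E is likely if H is true
p_e_given_not_h = 0.2     # guessed likelihood over the whole ~H family

# Posterior under the forced binary choice:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
post = p_e_given_h * p_h / p_e

# If an unknown-unknown hypothesis within ~H would explain E equally well,
# the true P(E|~H) is much higher and the posterior for H collapses:
p_e_rev = p_e_given_h * p_h + 0.9 * (1 - p_h)
post_rev = p_e_given_h * p_h / p_e_rev

print(round(post, 3), round(post_rev, 3))  # 0.818 0.5
```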
7. Deference to the loudmouth
This problem is related to no. 1 above, but has a much more corporate, organizational component. It can’t be blamed on Bayesianism but nevertheless plagues Bayesian implementations within teams. In the group formulation of any subjective probability, normal corporate dynamics govern the outcome. The most senior or deepest-voiced actor in the room drives all assignments of subjective probability. Social influence rules and the wisdom of the crowd succumbs to a consensus building exercise, precisely where consensus is unwanted. Seidenfeld, Kadane and Schervish begin “On the Shared Preferences of Two Bayesian Decision Makers” with the scholarly observation that an outstanding challenge for Bayesian decision theory is to extend its norms of rationality from individuals to groups. Their paper might have been illustrated with the famous photo of the exploding Challenger space shuttle. Bayesianism’s tolerance of subjective probabilities combined with organizational dynamics and the shyness of engineers can be a recipe for disaster of the Challenger sort.
All opinions welcome.
Science, as an enterprise that acquires knowledge and justified beliefs in the form of testable predictions by systematic iterations of observation and math-based theory, started around the 17th century, somewhere between Copernicus and Newton. That, we learned in school, was the beginning of the scientific revolution. Historians of science tend to regard this great revolution as the one that never happened. That is, as Floris Cohen puts it, the scientific revolution, once an innovative and inspiring concept, has since turned into a straitjacket. Picking this revolution’s starting point, identifying any cause for it, and deciding what concepts and technological innovations belong to it are problematic.
That said, several writers have made good cases for why the pace of evolution – if not revolution – of modern science accelerated dramatically in Europe only when it did, why it has continuously gained steam rather than petering out, what its primary driving force was, and what transformations in our view of how nature works came with it. Some thought the protestant ethic and capitalism set the stage for science. Others thought science couldn’t emerge until the alliance between Christianity and Aristotelianism was dissolved. Moveable type and mass production of books can certainly claim a role, but were they really a prerequisite? Some think a critical mass of ancient Greek writings had to have been transferred to western Europe by the Muslims. The humanist literary critics who enabled repair and reconstruction of ancient texts mangled in translation from Greek to Syriac to Persian to Latin and botched by illiterate medieval scribes certainly played a part. If this sounds like a stretch, note that those critics seem to mark the first occurrence of a collective effort by a group spread across a large geographic space using shared standards to reach a peer-reviewed consensus – a process sharing much with modern science.
But those reasons given for the scientific revolution all have the feel of post hoc theorizing. Might intellectuals of the day, observing these events, have concluded that a resultant scientific revolution was on the horizon? Francis Bacon comes closest to fitting this bill, but his predictions gave little sense that he was envisioning anything like what really happened.
I’ve wondered why the burst of progress in science – as differentiated from plain know-how, nature-knowledge, art, craft, technique, or engineering knowledge – didn’t happen earlier. Why not just after the period of innovation from about 1100 to 1300 CE in Europe? In this period Jean Buridan invented calculators and almost got the concept of inertia right. Robert Grosseteste hinted at the experiment-theory model of science. Nicole Oresme debunked astrology and gave arguments for a moving earth. But he was the end of this line. After this brief awakening, which also included the invention of banking and the university, progress came to a screeching halt. Some blame the plague, but that can’t be the culprit. Literature of the time barely mentions the plague. Despite the death toll, politics and war went on as usual; but interest in resurrecting ancient Greek knowledge of all sorts tanked.
Why not in the Islamic world in the time of Ali al-Qushji and al-Birjandi? Certainly the mental capacity was there. A layman would have a hard time distinguishing al-Birjandi’s arguments and thought experiments for the earth’s rotation from those of Galileo. But Islamic civilization at the time, for all its scholars, had no institutions for making practical use of such knowledge, and its society would not have tolerated displacement of received wisdom by man-made knowledge.
The most compelling case for civilization having been on the brink of science at an earlier time seems to be the late republic or early imperial Rome. This may seem a stretch, since Rome is much better known for brute force than for finesse, despite its flying buttresses, cranes, fire engines, central heating and indoor plumbing.
Consider the writings of one Vitruvius, likely Marcus Vitruvius Pollio, in the early reign of Augustus. Vitruvius wrote De Architectura, a ten volume guide to Roman engineering knowledge. Architecture, in Latin, translates accurately into what we call engineering. Rediscovered and widely published during the European renaissance as a standard text for engineers, Vitruvius’s work contains text that seems to contradict what we were all taught about the emergence of the – or a – scientific method.
Vitruvius is full of surprises. He acknowledges that he is not a scientist (an anachronistic but fitting term) but a collator of Greek learning from several preceding centuries. He describes vanishing point perspective: “…the method of sketching a front with the sides withdrawing into the background, the lines all meeting in the center of a circle.” (See photo below of a fresco in the Oecus at Villa Poppea, Oplontis showing construction lines for vanishing point perspective.) He covers acoustic considerations for theater design, explains central heating technology, and the Archimedean water screw used to drain mines. He mentions a steam engine, likely that later described by Hero of Alexandria (aeolipile drawing at right), which turns heat into rotational energy. He describes a heliocentric model passed down from ancient Greeks. To be sure, there is also much that Vitruvius gets wrong about physics. But so does Galileo.
Most of De Architectura is not really science; it could more accurately be called know-how, technology, or engineering knowledge. Yet it’s close. Vitruvius explains the difference between mere machines, which let men do work, and engines, which derive from ingenuity and allow storing energy.
What convinces me most that Vitruvius – and he surely could not have been alone – truly had the concept of modern scientific method within his grasp is his understanding that a combination of mathematical proof (“demonstration” in his terms) plus theory, plus hands-on practice are needed for real engineering knowledge. Thus he says that what we call science – theory plus math (demonstration) plus observation (practice) – is essential to good engineering.
The engineer should be equipped with knowledge of many branches of study and varied kinds of learning, for it is by his judgement that all work done by the other arts is put to test. This knowledge is the child of practice and theory. Practice is the continuous and regular exercise of employment where manual work is done with any necessary material according to the design of a drawing. Theory, on the other hand, is the ability to demonstrate and explain the productions of dexterity on the principles of proportion.
It follows, therefore, that engineers who have aimed at acquiring manual skill without scholarship have never been able to reach a position of authority to correspond to their pains, while those who relied only upon theories and scholarship were obviously hunting the shadow, not the substance. But those who have a thorough knowledge of both, like men armed at all points, have the sooner attained their object and carried authority with them.
It appears, then, that one who professes himself an engineer should be well versed in both directions. He ought, therefore, to be both naturally gifted and amenable to instruction. Neither natural ability without instruction nor instruction without natural ability can make the perfect artist. Let him be educated, skillful with the pencil, instructed in geometry, know much history, have followed the philosophers with attention, understand music, have some knowledge of medicine, know the opinions of the jurists, and be acquainted with astronomy and the theory of the heavens. – Vitruvius – De Architectura, Book 1
Historians, please correct me if you know otherwise, but I don’t think there’s anything else remotely like this on record before Isaac Newton – anything in writing that comes this close to an understanding of modern scientific method.
So what went wrong in Rome? Many blame Christianity for the demise of knowledge in Rome, but that is not the case here. We can’t know for sure, but the later failure of science in the Islamic world seems to provide a clue. Society simply wasn’t ready. Vitruvius and his ilk may have been ready for science, but after nearly a century of civil war (starting with the Italian social wars), Augustus, the senate, and likely the plebes, had seen too much social innovation that all went bad. The vision of science, so evident during the European Enlightenment, as the primary driver of social change, may have been apparent to influential Romans as well, at a time when social change had lost its luster. As seen in writings of Cicero and the correspondence between Pliny and Trajan, Rome now regarded social innovation with suspicion if not contempt. Roman society, at least its government and aristocracy, simply couldn’t risk the main byproduct of science – progress.
History is not merely what happened: it is what happened in the context of what might have happened. – Hugh Trevor-Roper – Oxford Valedictorian Address, 1998
The affairs of the Empire of letters are in a situation in which they never were and never will be again; we are passing now from an old world into the new world, and we are working seriously on the first foundation of the sciences. – Robert Desgabets, Oeuvres complètes de Malebranche, 1676
Newton interjected historical remarks which were neither accurate nor fair. These historical lapses are a reminder that history requires every bit as much attention to detail as does science – and the history of science perhaps twice as much. – Carl Benjamin Boyer, The Rainbow: From Myth to Mathematics, 1957
Text and photos © 2015 William Storage
In a post on Richard Feynman and philosophy of science, I suggested that engineers would benefit from a class in philosophy of science. A student recently asked if I meant to say that a course in philosophy would make engineers better at engineering – or better philosophers. Better engineers, I said.
Here’s an example from my recent work as an engineer that drives the point home.
I was reviewing an FMEA (Failure Mode Effects Analysis) prepared by a high-priced consultancy and encountered many cases where a critical failure mode had been deemed highly improbable on the basis that the FMEA was for a mature system with no known failures.
How many hours of operation has this system actually seen, I asked. The response indicated about 10 thousand hours total.
I said on that basis we could assume a failure rate of about one per 10,001 hours. The direct cost of the failure was about $1.5 million. Thus the “expected value” (or “mathematical expectation” – the probabilistic cost of the loss) of this failure mode in a 160-hour mission is $24,000, or about $300,000 per year (excluding any secondary effects such as damaged reputation). With that number in mind, I asked the client if they wanted to consider further mitigation by adding monitoring circuitry.
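The arithmetic behind that number is simple enough to sketch with the figures from the review:

```python
total_hours = 10_000                  # observed operation with no failures
failure_rate = 1 / (total_hours + 1)  # assume ~1 failure per 10,001 hours
direct_cost = 1_500_000               # direct cost of the failure, dollars
mission_hours = 160

# Probabilistic cost of the loss over one mission
expected_loss = direct_cost * failure_rate * mission_hours
print(round(expected_loss))  # 23998 -- about $24,000 per 160-hour mission
```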
I was challenged on the failure rate I used. It was, after all, a mature, ten year old system with no recorded failures of this type.
Here’s where the analytic philosophy course those consultants never took would have been useful.
You simply cannot justify calling a failure mode extremely rare based on evidence that it is at least somewhat rare. All unique events – like the massive rotor failure that took out all three hydraulic systems of a DC-10 in Sioux City – were very rare before they happened.
The authors of the FMEA I was reviewing were using unjustifiable inductive reasoning. Philosopher David Hume debugged this thoroughly in his 1739 A Treatise of Human Nature.
Hume concluded that there simply is no rational or deductive basis for induction, the belief that the future will be like the past.
Hume understood that, despite the lack of justification for induction, betting against the sun rising tomorrow was not a good strategy either. But this is a matter of pragmatism, not of rationality. A bet against the sunrise would mean getting behind counter-induction; and there’s no rational justification for that either.
In the case of the failure mode not yet observed, however, there is ample justification for counter-induction. All mechanical parts and all human operations necessarily have nonzero failure or error rates. In the world of failure modeling, knowing a component is “pretty good” does not support the proposition that it is “probably extremely good,” no matter how natural the step between them feels.
Hume’s problem of induction, despite the efforts of Immanuel Kant and the McKinsey consulting firm, has not been solved.
A fabulously entertaining – in my view – expression of the problem of induction was given by philosopher Carl Hempel in 1965.
Hempel observed that we tend to take each new observation of a black crow as incrementally supporting the inductive conclusion that all crows are black. Deductive logic tells us that if a conditional statement is true, its contrapositive is also true, since the statement and its contrapositive are logically equivalent. Thus if all crows are black then all non-black things are non-crow.
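The equivalence of a conditional and its contrapositive can be verified mechanically with a quick truth-table check:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# "All crows are black" has the form crow(x) -> black(x); its contrapositive
# is not-black(x) -> not-crow(x). Check equivalence over all truth assignments:
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)
print("equivalent for every assignment")
```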
It then follows that if each observation of black crows is evidence that all crows are black (compare: each observation of no failure is evidence that no failure will occur), then each observation of a non-black non-crow is also evidence that all crows are black.
Following this line, my red shirt is confirming evidence for the proposition that all crows are black. It’s a hard argument to oppose, but it simply does not “feel” right to most people.
Many try to salvage the situation by suggesting that observing that my shirt is red is in fact evidence that all crows are black, but provides only unimaginably small support to that proposition.
But pushing the thing just a bit further destroys even this attempt at rescuing induction from the clutches of analysis.
If my red shirt gives a tiny bit of evidence that all crows are black, it then also gives equal support to the proposition that all crows are white. After all, my red shirt is a non-white non-crow.
Years ago in a meeting on design of a complex, redundant system for a commercial jet, I referred to probabilities of various component failures. In front of this group of seasoned engineers, a highly respected, senior member of the team interjected, “I don’t believe in probability.” His proclamation stopped me cold. My first thought was what kind of backward brute would say something like that, especially in the context of aircraft design. But Willie was no brute. In fact he is a legend in electro-hydro-mechanical system design circles; and he deserves that status. For decades, millions of fearless fliers have touched down on the runway, unaware that Willie’s expertise played a large part in their safe arrival. So what can we make of Willie’s stated disbelief in probability?
Friends and I have been discussing risk science a lot lately – diverse aspects of it including the Challenger disaster, pharmaceutical manufacture in China, and black swans in financial markets. I want to write a few posts on risk science, as a personal log, and for whomever else might be interested. Risk science relies on several different understandings of risk, which in turn rely on the concept of probability. So before getting to risk, I’m going to jot down some thoughts on probability. These thoughts involve no computation or equations, but they do shed some light on Willie’s mindset. First a bit of background.
Oddly, the meaning of the word probability involves philosophy much more than it does math, so Willie’s use of belief might be justified. People mean very different things when they say probability. The chance of rolling a 7 is conceptually very different from the chance of an earthquake in Missouri this year. Probability is hard to define accurately. A look at its history shows why.
Mathematical theories of probability first appeared only in the late 17th century. This is puzzling, since gambling had existed for thousands of years. Gambling was enough of a problem in the ancient world that the Egyptian pharaohs, Roman emperors and Achaemenid satraps outlawed it. Such legislation had little effect on the urge to deal the cards or roll the dice. Enforcement was sporadic and halfhearted. Yet gamblers failed to develop probability theories. Historian Ian Hacking (The Emergence of Probability) observes, “Someone with only the most modest knowledge of probability mathematics could have won himself the whole of Gaul in a week.”
Why so much interest with so little understanding? In European and middle eastern history, it seems that neither Platonism (determinism derived from ideal forms) nor the Judeo/Christian/Islamic traditions (determinism through God’s will) had much sympathy for knowledge of chance. Chance was something to which knowledge could not apply. Chance meant uncertainty, and uncertainty was the absence of knowledge. Knowledge of chance didn’t seem to make sense. Plus, chance was the tool of immoral and dishonest gamblers.
The term probability is tied to the modern understanding of evidence. In medieval times, and well into the renaissance, probability literally referred to the level of authority – typically tied to the nobility – of a witness in a court case. A probable opinion was one given by a reputable witness. So a testimony could be highly probable but very incorrect, even false.
Through empiricism, central to the scientific method, the notion of diagnosis (inference of a condition from key indicators) emerged in the 17th century. Diagnosis allowed nature to be the reputable authority, rather than a person of status. For example, the symptom of skin spots could testify, with various degrees of probability, that measles had caused it. This goes back to the notion of induction and inference to the best explanation of the evidence, which I discussed in past posts. Pascal, Fermat and Huygens brought probability into the respectable world of science.
But outside of science, probability and statistics still remained second class citizens right up to the 20th century. You used these tools when you didn’t have an exact set of accurate facts. Recognition of the predictive value of probability and statistics finally emerged when governments realized that death records had uses beyond preserving history, and when insurance companies figured out how to price premiums competitively.
Also around the turn of the 20th century, it became clear that in many realms – thermodynamics and quantum mechanics for example – probability would take center stage against determinism. Scientists began to see that some – perhaps most – aspects of reality were fundamentally probabilistic in nature, not deterministic. This was a tough pill for many to swallow, even Albert Einstein. Einstein famously argued with Niels Bohr, saying, “God does not play dice.” Einstein believed that some hidden variable would eventually emerge to explain why one of two identical atoms would decay while the other did not. A century later, Bohr is still winning that argument.
What we mean when we say probability today may seem uncontroversial – until you stake lives on it. Then it gets weird, and definitions become important. Defining probability is a wickedly contentious matter, because wildly conflicting conceptions of probability exist. They can be roughly divided into the objective and subjective interpretations. In the next post I’ll focus on the frequentist interpretation, which is objective, and the subjectivist interpretations as a group. I’ll look at the impact of accepting – or believing in – each of these on the design of things like airliners and space shuttles from the perspectives of Willie, Richard Feynman, and NASA. Then I’ll defend my own views on when and where to hold various beliefs about probability.
“Philosophy of science is about as useful to scientists as ornithology is to birds”
This post is more thoughts on the minds of interesting folk who can think from a variety of perspectives, inspired by Bruce Vojak’s Epistemology of Innovation articles. This is loosely related to systems thinking, design thinking, or – more from my perspective – the consequence of learning a few seemingly unrelated disciplines that end up being related in some surprising and useful way.
Richard Feynman ranks high on my hero list. When I was a teenager I heard a segment of an interview with him where he talked about being a young boy with a ball in a wagon. He noticed that when he abruptly pulled the wagon forward, the ball moved to the back of the wagon, and when he stopped the wagon, the ball moved forward. He asked his dad why it did that. His dad, who was a uniform salesman, put a slightly finer point on the matter. He explained that the ball didn’t really move backward; it moved forward, just not as fast as the wagon was moving. Feynman’s dad told young Richard that no one knows why a ball behaves like that. But we call it inertia. I found both points wonderfully illuminating. On the ball’s motion, there’s more than one way of looking at things. Mel Feynman’s explanation of the ball’s motion had gentle but beautiful precision, calling up thoughts about relativity in the simplest sense – motion relative to the wagon versus relative to the ground. And his statement, “we call it inertia,” got me thinking quite a lot about the difference between knowledge about a thing and the name of a thing. It also recalls Newton vs. the Cartesians in my recent post. The name of a thing holds no knowledge at all.
Feynman was almost everything a hero should be – nothing like the stereotypical nerd scientist. He cussed, pulled gags, picked locks, played drums, and hung out in bars. His thoughts on philosophy of science come to mind because of some of the philosophy-of-science issues I touched on in previous posts on Newton and Galileo. Unlike Newton, Feynman was famously hostile to philosophy of science. The ornithology quote above is attributed to him, though no one seems to have a source for it. If not his, it could be. He regularly attacked philosophy of science in equally harsh tones. “Philosophers are always on the outside making stupid remarks,“ he is quoted as saying in his biography by James Gleick.
My initial thoughts were that I can admire Feynman’s amazing work and curious mind while thinking he was terribly misinformed and hypocritical about philosophy. I’ll offer a slightly different opinion at the end of this. Feynman actually engaged in philosophy quite often. You’d think he’d at least try to do a good job of it. Instead he seems pretty reckless. I’ll give some examples.
Feynman, along with the rest of science, was assaulted by the wave of postmodernism that swept university circles in the ’60s. On its front line were Vietnam protesters who thought science was a tool of evil corporations, feminists who thought science was a male power play, and Foucault-inspired “intellectuals” who denied that science had any special epistemic status. Feynman dismissed all this as a lot of baloney. Most of it was, of course. But some postmodern criticism of science was a reaction – though a gross overreaction – to a genuine issue that Kuhn elucidated – one that had been around since Socrates debated the sophists. Here’s my best Reader’s Digest version.
All empirical science relies on affirming the consequent – a fallacy in deductive logic (if the theory is true we will observe X; we observe X; therefore the theory is true). Science is inductive, and there is no deductive justification for induction (nor is there any inductive justification for induction – a topic way too deep for a blog post). Justification actually rests on a leap of inductive faith and consensus among peers. But it certainly seems reasonable for scientists to make claims of causation using what philosophers call inference to the best explanation. It certainly seems that way to me. However, defending that reasoning – the absolute foundation of science – is a matter of philosophy, not of science.
This issue edges us toward a much more practical one, something Feynman dealt with often. What’s the difference between science and pseudoscience (the demarcation question)? Feynman had a lot of room for Darwin but no room at all for the likes of Freud or Marx. All claimed to be scientists. All had theories. Further, all had theories that explained observations. Freud’s and Marx’s theories actually had more predictive success than did Darwin’s. So how can we (or Feynman) call Darwin a scientist but Freud and Marx pseudoscientists without resorting to the epistemologically unsatisfying argument made famous by Supreme Court Justice Potter Stewart: “I can’t define pornography but I know it when I see it”? Neither Feynman nor anyone else can solve the demarcation problem in any convincing way using science alone. Science doesn’t work for that task.
It took Karl Popper, a philosopher, to come up with the counterintuitive notion that neither predictive success nor confirming observations can qualify something as science. In Popper’s view, falsifiability is the sole criterion for demarcation. For reasons that take a good philosopher to lay out, Popper can be shown to give this criterion a bit too much weight, but it has real merit. When Einstein predicted that the light from distant stars actually bends around the sun, he made a bold and solidly falsifiable claim. He staked his whole relativity claim on it. If, in an experiment during the next solar eclipse, light from stars behind the sun didn’t curve around it, he’d admit defeat. Current knowledge of physics could not support Einstein’s prediction. But they did the experiment (the Eddington expedition) and Einstein was right. In Popper’s view, this didn’t prove that Einstein’s gravitation theory was true, but it failed to prove it wrong. And because the theory was so bold and counterintuitive, it got special status: we’ll assume it true until it is proved wrong.
Marx and Freud failed this test. While they made a lot of correct predictions, they also made a lot of wrong ones. Predictions are cheap. That is, Marx and Freud could explain too many results (e.g., aggressive personality, shy personality or comedian) with the same cause (e.g., abusive mother). Worse, they were quick to tweak their theories in the face of counterevidence, resulting in their theories being immune to possible falsification. Thus Popper demoted them to pseudoscience. Feynman cites the falsification criterion often. He never names Popper.
The demarcation question has great practical importance. Should creationism be taught in public schools? Should Karmic reading be covered by your medical insurance? Should the American Parapsychological Association be admitted to the American Association for the Advancement of Science (it was in 1969)? Should cold fusion research be funded? Feynman cared deeply about such things. Science can’t decide these issues. That takes philosophy of science, something Feynman thought was useless. He was so wrong.
Finally, perhaps most importantly, there’s the matter of what activity Feynman was actually engaged in. Is quantum electrodynamics a science or is it philosophy? Why should we believe in gluons and quarks more than angels? Many of the particles and concepts of Feynman’s science are neither observable nor falsifiable. Feynman opines that there will never be any practical use for knowledge of quarks, so he can’t appeal to utility as a basis for the scientific status of quarks. So shouldn’t quantum electrodynamics (at least with the level of observability it had when Feynman gave this opinion) be classified as metaphysics, i.e., philosophy, rather than science? By Feynman’s own demarcation criteria, his work should be called philosophy. I think his work actually is science, but the basis for that subtle distinction lies in philosophy of science, not in science itself.
While denigrating philosophy, Feynman practiced quite a bit of it, perhaps unconsciously, often badly. Not Dawkins-bad, but still pretty bad. His 1966 speech to the National Science Teachers Association entitled “What Is Science?” is a case in point. He hints at the issue of whether science is explanatory or merely descriptive, but wanders rather aimlessly. I was ready to offer that he was a great scientist and a bad accidental philosopher when I stumbled on a talk where Feynman shows a different side: his 1956 address to the Engineering and Science college at the California Institute of Technology, entitled “The Relation of Science and Religion.”
He opens with an appeal to the multidisciplinarian:
“In this age of specialization men who thoroughly know one field are often incompetent to discuss another. The great problems of the relations between one and another aspect of human activity have for this reason been discussed less and less in public. When we look at the past great debates on these subjects we feel jealous of those times, for we should have liked the excitement of such argument.”
Feynman explores the topic through epistemology, metaphysics, and ethics. He talks about degrees of belief and claims of certainty, and the difference between Christian ethics and Christian dogma. He handles all this delicately and compassionately, with charity and grace. He might have delivered this address with more force and efficiency, had he cited Nietzsche, Hume, and Tillich, whom he seems to unknowingly parallel at times. But this talk was a whole different Feynman. It seems that when formally called on to do philosophy, Feynman could indeed do a respectable job of it.
I think Richard Feynman, great man that he was, could have benefited from Philosophy of Science 101; and I think all scientists and engineers could. In my engineering schooling, I took five courses in calculus, one in linear algebra, one in non-Euclidean geometry, and two in differential equations. Substituting a philosophy class for one of those Dif EQ courses would make better engineers. A philosophy class of the quantum electrodynamics variety might suffice.
“It is a great adventure to contemplate the universe beyond man, to think of what it means without man – as it was for the great part of its long history, and as it is in the great majority of places. When this objective view is finally attained, and the mystery and majesty of matter are appreciated, to then turn the objective eye back on man viewed as matter, to see life as part of the universal mystery of greatest depth, is to sense an experience which is rarely described. It usually ends in laughter, delight in the futility of trying to understand.” – Richard Feynman, The Relation of Science and Religion
Bruce Vojak’s wonderful piece on innovation and the minds of Newton and Goethe got me thinking about another 17th century innovator. Like Newton, Galileo was a superstar in his day – a status he still holds. He was the consummate innovator and iconoclast. I want to take a quick look at two of Galileo’s errors, one technical and one ethical, not to try to knock the great man down a peg, but to see what lessons they can bring to the innovation, engineering and business of this era.
Less well known than his work with telescopes and astronomy was Galileo’s work in mechanics of solids. He seems to have been the first to explicitly identify that the tensile strength of a beam is proportional to its cross-sectional area, but his theory of bending stress was way off the mark. He applied similar logic to cantilever beam loading, getting very incorrect results. Galileo’s bending stress illustration is shown below (you can skip over the physics details, but they’re not all that heavy).
For bending, Galileo concluded that the whole cross section was subjected to tension at the time of failure. He judged that point B in the diagram at right served as a hinge point, and that everything above it along the line A-B was uniformly in horizontal tension. Thus he missed what would be elementary to any mechanical engineering sophomore; this view of the situation’s physics results in an unresolved moment (tendency to twist, in engineer-speak). Since the cantilever is at rest and not spinning, we know that this model of reality cannot be right. In Galileo’s defense, Newton’s 3rd law (equal and opposite reaction) had not yet been formulated; Newton was born a year after Galileo died. But Newton’s law was an assumption derived from common sense, not from testing.
It took more than a hundred years (see Bernoulli and Euler) to finally get the full model of beam bending right. But laboratory testing in Galileo’s day could have shown that his theory grossly overpredicted bending strength. And long before Bernoulli and Euler, Edme Mariotte published an article in which he got the bending stress distribution mostly right, identifying that the neutral axis should lie at mid-height of the cross section. A few decades later Antoine Parent polished up Mariotte’s work, arriving at a modern conception of bending stress.
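To put a number on the error – a sketch in modern notation, which Galileo of course did not have – consider a rectangular section of width b and depth h that fails when the tensile stress reaches some ultimate value σ_u. Galileo’s model (uniform tension over the whole section, pivoting about the bottom fiber) and the modern elastic flexure formula give very different failure moments:

```latex
% Galileo: uniform tension \sigma_u over area bh, lever arm h/2 about the bottom fiber
M_{\text{Galileo}} = \sigma_u \,(bh)\,\frac{h}{2} = \frac{\sigma_u b h^2}{2}

% Modern elastic flexure: \sigma = Mc/I, with I = bh^3/12 and c = h/2,
% failure when the extreme fiber reaches \sigma_u
M_{\text{elastic}} = \frac{\sigma_u I}{c} = \frac{\sigma_u b h^2}{6}

% Ratio of the two predictions
\frac{M_{\text{Galileo}}}{M_{\text{elastic}}} = 3
```

By this reckoning Galileo’s formula credits a beam with three times its actual elastic strength – exactly the kind of discrepancy a simple load test could have exposed.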
But Mariotte and Parent weren’t superstars. Manuals of structural design continued to publish Galileo’s equation, and trusting builders continued to use it. Beams broke and people died. Universal deference to Galileo’s authority not only led to needless deaths but also to an endless, fruitless search for other causes of reality’s disagreement with theory.
So the problem with Galileo’s error in beam bending was not so much the fact that he made this error, but the fact that for a century it was missed largely for social reasons. The second fault I find with Galileo’s method is intimately tied to his large ego, but that too has a social component. This fault is evident in Galileo’s writing of Dialogue on the Two Chief World Systems, the book that got him condemned for heresy.
Galileo did not invent the sun-centered model of our solar system; Copernicus did. Galileo pointed his telescope to the sky, discovered four moons of Jupiter, and named them after influential members of the Medici family, landing himself a job as the world’s highest paid scholar. No problem there; we all need to make a living. He then published Dialogue arguing for Copernican heliocentrism against the earth-centered Ptolemaic model favored by the church. That is, Galileo for the first time claimed that Copernicanism was not only an accurate predictive model, but was true. This was tough for 17th century Italians to swallow, not only their clergy.
For heliocentrism to be true, the earth’s surface would have to be moving at about 1000 miles per hour. Galileo had no good answer for why we don’t all fly off into space. He couldn’t explain why birds aren’t shredded by supersonic winds. He was at a loss to provide a rationale for why balls dropped from towers appeared to fall vertically instead of at an angle, as would seem natural if the earth were spinning. And finally, if the earth is in a very different place in June than in December, why do the stars remain in the same pattern year round (why no parallax)? As UC Berkeley philosopher of science Paul Feyerabend so provocatively stated, “The church at the time of Galileo was much more faithful to reason than Galileo himself.”
At that time, Tycho Brahe’s modified geocentric theory of the planetary system (Mercury and Venus go around the sun, which goes around the earth), may have been a better bet given the evidence. Brahe’s theory is empirically indistinguishable from Copernicus’s. Venus goes through phases, like the moon, in Brahe’s model just as it does in Copernicus’s. No experiment or observation of Galileo could refute Brahe.
Here’s the rub. Galileo never once mentions Brahe’s model in Dialogue on the Two Chief World Systems. Galileo knew about Brahe. His title, Two Systems, seems simply a polemical device – at best a rhetorical ploy to eliminate his most worthy opponent by sleight of hand. He’d rather fight Ptolemy than Brahe.
Likewise, Galileo ignored Johannes Kepler in Dialogue. Kepler’s work (Astronomia Nova) was long established at the time Galileo wrote Dialogue. Kepler correctly identified that the planetary orbits were elliptical rather than circular, as Galileo thought. Kepler also modeled the tides correctly where Galileo got them wrong. Kepler wrote congratulatory letters to Galileo; Galileo’s responses were more reserved.
Galileo was probably a better man (or should have been) than his behavior toward Kepler and Brahe reveals. His fans fed his ego liberally, and he got carried away. Galileo, Brahe, Kepler and everyone else would have been better served by less aggrandizing and more humility. The tech press and the venture capital worlds that fuel what Vivek Wadhwa calls the myth of the 20-year-old white male genius CEO should take note.