
Science vs Philosophy Again

Scientists, for the most part, make lousy philosophers.

Yesterday I made a brief post on the hostility to philosophy expressed by scientists and engineers. A thoughtful reply by philosopher of science Tom Hickey left me thinking more about the topic.

Scientists are known for being hostile to philosophy and for being lousy at philosophy when they practice it inadvertently. Scientists tend to do a lousy job even at analytic philosophy, the realm most applicable to science (what counts as good thinking, evidence and proof), not merely lousy when they rhapsodize on ethics.

But science vs. philosophy is a late 20th-century phenomenon. Bohr, Einstein, and Ramsey were philosophy-friendly. This doesn’t mean they did philosophy well. Many scientists, before the rift between science (“natural philosophy,” as it was known) and philosophy, were deeply interested in logic, ethics and metaphysics. The most influential scientists have poor track records in philosophy – Pythagoras (if he existed), Kepler, Leibniz and Newton, for example. Einstein’s naïve socioeconomic philosophy might be excused for being far from his core competency, but the charge of ultracrepidarianism might still apply. More importantly, Einstein’s dogged refusal to budge on causality (“I find the idea quite intolerable that an electron exposed to radiation should choose of its own free will…”) showed methodological – if not epistemic – flaws. Still, Einstein took interest in conventionalism, positivism and the nuances of theory choice. He believed that his interest in philosophy enabled his scientific creativity:

“I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.” – (Einstein letter to Robert Thornton, Dec. 1944)

So why the current hostility? Hawking pronounced philosophy dead in his recent book. He then went on to do a good deal of philosophizing about string theory, apparently unaware that he was reenacting philosophical work done long ago. Some of Hawking’s philosophy, at least, is well reasoned.

Not all philosophy done by scientists fares so well. Richard Dawkins makes analytic philosophers cringe; and his excursions into the intersection of science and religion are dripping with self-refutation.

The philosophy of David Deutsch is more perplexing. I recommend his The Beginning of Infinity for its breadth of ideas, some novel outlooks, for some captivating views on ethics and esthetics, and – out of the blue – for giving Jared Diamond the thrashing I think he deserves. That said, Deutsch’s dogmatism is infuriating. He invents a straw man he names inductivism. He observes that “since inductivism is false, empiricism is as well.” Deutsch misses the point that empiricism (which he calls a misconception) is something scientists lean slightly more or slightly less toward. He thinks there are card-carrying empiricists who need to be outed. Odd as the notion of scientists subscribing to a named philosophical position might appear, Deutsch does seem to be a true Popperian. He ignores the problem of choosing between alternative non-falsified theories and the matter of theory-ladenness of negative observations. Despite this, and despite Kuhn’s arguments, Popper remains on a pedestal for Deutsch. (Don’t get me wrong; there is much good in Popper.) He goes on to dismiss relativism, justificationism and instrumentalism (“a project for preventing progress in understanding the entities beyond our direct experience”) as “misconceptions.” Boom. Case closed. Read the book anyway.

So much for philosophy-hostile scientists and philosophy-friendly scientists who do bad philosophy. What about friendly scientists who do philosophy proud? For this I’ll nominate Sean Carroll. In addition to treating the common ground between physics and philosophy with great finesse in The Big Picture, Carroll, in interviews and on his blog, tries to set things right. He says that “shut up and calculate” isn’t good enough, characterizing lazy critiques of philosophy as either totally dopey, frustratingly annoying, or deeply depressing. Carroll says the universe is a strange place, and that he welcomes all the help he can get in figuring it out.



Rμν – (1/2)Rgμν = 8πGTμν. This is the equation that a physicist would think of if you said “Einstein’s equation”; that E = mc² business is a minor thing – Sean Carroll, From Eternity to Here

Up until early 20th century philosophers had material contributions to make to the physical sciences – Neil deGrasse Tyson



The P Word

Philosophy can get you into trouble.

I don’t get many responses to blog posts; and for some reason, most of those I get come as email. A good number of those I have received fall into two categories – proclamations and condemnations of philosophy.

The former consist of a final word offered on a matter I had written about as having two sides and warranting some investigation. The respondents, whose signatures always include a three-letter suffix, set me straight, apparently discounting the possibility of an opposing PhD. Regarding argumentum ad verecundiam, John Locke’s 1689 Essay Concerning Human Understanding is apparently passé in an era when nonscientists feel no shame for their science illiteracy and “my scientist can beat up your scientist.” For one blog post where I questioned whether fault tree analysis was, as commonly claimed, a deductive process, I received two emails in perfect opposition, both suitably credentialed but unimpressively defended.

More surprising is hostility to endorsement of philosophy in general or philosophy of science (as in my last post). It seems that for most scientists, engineers and Silicon Valley tech folk, “philosophy” conjures up guys in wool sportscoats with elbow patches wondering what to doubt next, or French neoliberals congratulating themselves on having simultaneously confuted Freud, Marx, Mao, Hamilton, Rawls and Cato the Elder.

When I invoke philosophy here I’m talking about how to think well, not how to live right. And philosophy of science is a thing (hint: Google); I didn’t make it up. Philosophy of science is not about ethics. It has to do with the fact that most of us agree that science yields useful knowledge, but we don’t all agree about what makes good scientific thinking – what counts as evidence, what truth and proof mean, and being honest about what questions science can’t answer.

Philosophy is not, as some still maintain, a framework or ground on which science rests. The failure of logical positivism in the 1960s ended that notion. But the failure of positivism did not render science immune to philosophy. Willard Van Orman Quine is known for having put the nail in the coffin of logical positivism. Quine introduced a phrase I discussed in my last post – underdetermination of theory by data – in his 1951 “Two Dogmas of Empiricism,” often called the most important philosophical article of the 20th century. Quine’s article isn’t about ethics; it’s about scientific method. As Quine later said in Ontological Relativity and Other Essays (1969):

I see philosophy not as groundwork for science, but as continuous with science. I see philosophy and science as in the same boat – a boat which we can rebuild only at sea while staying afloat in it. There is no external vantage point, no first philosophy. All scientific findings, all scientific conjectures that are at present plausible, are therefore in my view as welcome for use in philosophy as elsewhere.

Philosophy helps us to know what science is. But then, what is philosophy, you might ask. If so, you’re halfway there.


Philosophy is the art of asking questions that come naturally to children, using methods that come naturally to lawyers. – David Hills in Jeffrey Kasser’s The Philosophy of Science lectures

The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. – Wilfrid Sellars, “Philosophy and the Scientific Image of Man,” 1962

This familiar desk manifests its presence by resisting my pressures and by deflecting light to my eyes. – WVO Quine, Word and Object, 1960



Andrei’s Anthropic Abduction

No space aliens here. This post deals with the question of whether Stanford physicist Andrei Linde’s work deserves to be called science or whether it is in the realm of pseudoscience some call “not even wrong.” While debated among scientists, this question isn’t really in the domain of science, but of philosophy. If use of the term abduction to describe a form of reasoning isn’t familiar, please read my previous post.

Linde is a key figure in the family of theories of cosmic origins called inflation. Inflation holds that in a period lasting roughly 10⁻³⁰ seconds the cosmos expanded by at least 100 orders of magnitude. Quantum fluctuations in the then-tiny inflationary region became the gravitational seeds that formed the galaxies and galaxy clusters we now observe. Proponents of the theory hold that it is the best explanation for the universal homogeneity and isotropy of matter, the minuscule temperature anisotropies of the cosmic microwave background radiation, the geometrical flatness of the universe, and the absence of magnetic monopoles. Inflation requires that the universe be incredibly homogeneous, isotropic and conformant to Euclidean geometry – but not completely. Its perturbations should be Gaussian and adiabatic, and it requires a nonzero vacuum energy that is, however, extremely close to zero.

In Linde’s model of inflation, rapidly expanding regions branch off from other expanding regions and occasionally enter a non-inflating phase. But the generation rate of new inflationary regions is much higher than the rate at which inflation terminates within regions. Therefore, the volume of the inflating part of space is always much larger than the part where inflation has stopped. Terminology varies; for sake of clarity I’ll use multiverse to describe all of space and universe (sometimes Linde uses bubble universe) to describe each separate inflating region or region where inflation has stopped, as is the case where we live. Each universe in this multiverse can have radically different laws of physics (more accurately, different physical constants and properties). Finally, note that this multiverse scenario has nothing to do with the more popular parallel-universe consequences of the Many Worlds interpretation of quantum mechanics. Linde’s work is particularly interesting for looking at the scientific-realist and empiricist leanings of living scientists. Chaotic inflation is unappealing to empiricists, but less maligned than string theory.

Linde gave a fun, engaging hour-long intro to his version of inflation in a talk to the SETI Institute in 2012. He presents the theory briefly, using the imagery of fractals, and then gives a long defense of anthropic reasoning. Anthropic arguments may, at first glance, appear to be mere tautologies, but scrutiny shows something more subtle and complex. Linde once said, “those who dislike anthropic principles are simply in denial.” In the SETI talk he jocularly explains the anthropic response to the apparent fine tuning of the universe. Finally, he gives a philosophical justification for his theory, explicitly rejecting empiricists’ demands that all predictions be falsifiable, and making inference to the best explanation primary with a justification of “best” by process of elimination.

Curiously, anthropic reasoning, often reviled when applied to universe-sized entities, is readily accepted on smaller scales. Roger Penrose is dubious, saying such reasoning “tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts.” But the earth and life on it in some ways seem a bag of unlikely coincidences. Life requires water, and the earth is just the right distance from the sun to allow liquid water. It’s no surprise that we don’t find ourselves on Venus, because it has no water. If the overwhelming majority of planets in the universe are uninhabitable, the apparent coincidence that we find ourselves on one that is habitable evaporates. If the overwhelming majority of universes don’t support star formation because of incompatible vacuum energy, the apparent fine tuning of that value here is demystified.
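The observer-selection effect above can be sketched as a toy Monte Carlo simulation. Everything here is invented for illustration – the uniform distribution, the habitability threshold – but it shows how conditioning on the existence of observers makes an apparently fine-tuned value unsurprising:

```python
import random

random.seed(0)  # deterministic toy run

# Toy model: each "universe" draws a vacuum energy uniformly from [0, 1).
# Assume (purely for illustration) that observers can arise only when the
# vacuum energy falls below a tiny threshold.
THRESHOLD = 0.001

universes = [random.random() for _ in range(1_000_000)]
inhabited = [v for v in universes if v < THRESHOLD]

# Unconditionally, a tiny vacuum energy is rare...
print(len(inhabited) / len(universes))  # roughly 0.001
# ...yet every observer, by construction, measures a tiny value.
print(all(v < THRESHOLD for v in inhabited))  # True
```

No observer in this toy multiverse ever measures a large vacuum energy, so the smallness of the measured value carries no information beyond “observers exist” – which is roughly the shape of Linde’s argument.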

On vacuum energy, Linde says in the SETI talk, “that’s why the energy of the universe is so tiny, because if non-tiny, we would not be talking about it.”

Addressing his empiricist critics he says:

“Is it physics or metaphysics? Can it be experimentally tested? … This theory provides the only known explanation of numerous anthropic coincidences (extremely small vacuum energy, strange masses of elementary particles, etc.). In this sense it was already tested… When you have eliminated the impossible, whatever remains, however improbable, must be the truth.

“Mass of the neutron is just slightly larger than mass of the proton. Neutrons decay. If protons were just slightly heavier than neutrons, the protons would decay and you’d have a totally different universe where we would not be able to live… Protons are 2000 times heavier than electrons. If electrons were twice as heavy as we find them to be, we wouldn’t be able to live here… What is so special about it? What is so special about it is us. We would be unable to exist in the part of the universe where the electron has a different mass.”

Noting that the American judicial system is based on inference to the best explanation, Linde then uses humor, pointing to the name of a philosopher on a screen that we don’t see. Presumably the name is either Charles Peirce or Gilbert Harman. He offers a justification that, while not completely watertight, is pretty good:

“The multiverse is as of now the only existing explanation of experimental fact [mass of electron]. So when people say we cannot travel that far [beyond the observable universe] and therefore the multiverse theory cannot be tested, it’s already tested experimentally by our own existence. But you may say, ‘what we want is to make a prediction and then check it experimentally.’ My answer to that is that this is not how the American court system works. For example a person killed his wife. They do not repeat the experiment. They do not give him a new wife and a knife, etc. What they do is they use the method suggested by this philosopher. They just try to eliminate impossible options. And once they eliminate them either the guy goes free or the guy goes dead, or a mistake – sorry… So everything is possible. It is not necessary to repeat the experiment and check what is going to happen with the universe if it’s cooked up differently. If what we have provides the only explanation of what we see, that’s already something.”

Linde then goes farther down the road of anthropic reasoning than I’ve seen others do, responding to famous quotes by Einstein and Wigner and following with a much less famous retort to Wigner by Israel Gelfand. The Gelfand quote, echoing Kant, gives a hint as to where Linde is heading:

The most incomprehensible thing about the universe is its comprehensibility – Albert Einstein

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift, which we neither understand nor deserve. (“The Unreasonable Effectiveness of Mathematics”) – Eugene Wigner

There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology. – Israel Gelfand

Linde says Einstein and Wigner’s puzzles are easily explained. If a universe obeys discoverable laws, it can be considered an undeserved gift of God to physicists and mathematicians. But elsewhere, in a universe that is a mess, you cannot make any predictions, and your mathematicians and physicists would be totally useless. Linde emphasizes, “Universes that do not produce observers do not produce physicists.” No one in such a universe would contemplate the effectiveness of mathematics. In a universe of high density the interactions would be so swift and strong that once you recorded anything, a millisecond later it would be gone. Your calculations would be instantly negated. In these universes mathematics and physics are ineffective. But we can only live in universes, says Linde, where natural selection is possible and where predictions are possible. As humans, Linde says, we need to make predictions at every step of our lives. He then jokes, “If we would be in a universe where predictions are impossible, we wouldn’t be there.” Einstein can only live in the kind of universe where Einstein can ask why the universe is so comprehensible.

Nobel Laureate Steven Weinberg finds the multiverse an intriguing idea with some good theoretical support, but on reading that Andrei Linde was willing to bet his life on it and that Martin Rees was willing to bet the life of his dog, Weinberg offered, “I have just enough confidence about the multiverse to bet the lives of both Andrei Linde and Martin Rees’s dog.”

Neutrinos and the Higgs field were predicted decades before they were observed. Most would agree that the detection of neutrinos is direct enough that the term “observation” is justified. The Higgs boson, in comparison, was only inferred as the best explanation of the decay patterns of high energy hadron collisions. Could the interpretation of disturbances in the cosmic background radiation as “bruises” caused by collisions between adjacent bubble universes and our own count as confirming evidence of Linde’s model? Could anything else – in practice or in principle – confirm or falsify chaotic inflation? Particle physics isn’t my day job. Let me know if my understanding of the science is wrong or if you have a different view of Linde’s philosophical stance.

– – –


There will never be a Newton of the blade of grass – Immanuel Kant

Physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover. – Bertrand Russell

As we look out into the universe and identify the many accidents of physics and astronomy that have worked together to our benefit, it almost seems as if the universe must in some sense have known we were coming. – Freeman Dyson



Of Mice, Maids and Explanatory Theories

One night in early 1981, theoretical physicist Andrei Linde woke his wife in the middle of the night and said, “I think that I know how the universe was born.”

That summer he wrote a paper on the topic, rushing to get it published in an international journal. But in cold war Russia, it took months for government censors to approve everything that crossed the border. That October Linde was able to give a talk on his theory, which he called new inflation, at a conference on quantum gravity in Moscow attended by Stephen Hawking and similar luminaries. The next day Hawking gave a talk, in English, on Alan Guth’s earlier independent work on cosmic inflation. Linde was given the task of translating Hawking’s talk into Russian in real time, without knowing in advance what Hawking would say. Hawking’s talk explained Guth’s theory and then went on to explain why Linde’s theory was incorrect. So Linde had the painful experience of unfolding several arguments against his own work to an audience of Russian scientists who were in control of his budget and career. At the end of Hawking’s talk, Linde offered to explain why Hawking was wrong. Hawking agreed to listen, then agreed that Linde was right. They became friends and, as Linde explains it, “we were off to the races.”


Linde’s idea was a new twist on a family of theories about cosmic inflation, a family that encompasses big bang theory. In Linde’s refinement, cosmic inflation continued while the scalar field slowly rolled down. If that isn’t familiar science, don’t worry; I’ll post some links. Linde later reworked his new inflation theory, arriving at what is now known as chaotic inflation. The theory has, as a necessary consequence, a virtually infinite number of parallel universes. To clarify, parallel universes are not a theory of Linde’s, per se. They fall out of a theory designed to explain phenomena we observe, such as the cosmic background radiation.

So the theory involves entities (universes) that are not only unobservable in practice, but unobservable in principle too. Can a theory that makes untestable claims and posits unobservable entities fairly be called scientific? Even if some of its consequences are observable and falsifiable? And how can we justify the judgment we reach on that question? To answer these questions we need some background from the philosophy of science.

The Scientific Method
A simplistic view of scientific method involves theories that make predictions about the world, testing the theories by experimentation and observation, and then discarding or refining theories that fail to predict the outcomes of experiments or make wrong predictions. At some point in the life of a theory or family of theories, one might judge that a law of nature has been uncovered. Such a law might be, for example, that all copper conducts electricity or that force equals mass times acceleration. Another use for theories is to explain things. For example, the patient complains of shortness of breath only on cold days and the doctor judges the cause to be episodic bronchial constriction rather than asthma. Here we’re relying on the tight link between explanation and cause. More on that below.

Notice that science, as characterized like this, doesn’t prove anything, but merely gives evidence for something. Proof uses deduction. It works for geometry, syllogisms and affirming the antecedent, but not for science. You remember the rules. All rocks are mortal. Socrates is a rock. Therefore, Socrates is mortal. The conclusion about Socrates, in this case, follows from his membership in the set of all rocks, about which it is given that they are mortal. Substitute men or any other set, class, or category for rocks and the conclusion remains valid.

Science relies on inferences that are inductive. They typically take the following form: All observed X have been Y; the next observed X will be Y. Or a simpler version: All observed X have been Y; therefore all X are Y. Real science does a better job. It eliminates many claims about future observations of X even before any non-Y instances of X have been found. It was unreasonable to claim that all swans were white even before Australian black swans were discovered; we know how fickle color is in birds. Another example of scientific induction is the conductive power of copper mentioned above: all observed copper conducts electricity, so the next piece of copper found will also conduct.

In the mid-1700s philosopher David Hume penned a challenge to inductive thought that is still debated. Hume noted that induction assumes the uniformity of nature, something for which there can be no proof. Says Hume, we can easily imagine a universe that is not uniform – one where everything is haphazard and unpredictable. Such universes are of particular interest in Andrei Linde’s theory. Proving universal uniformity of nature would vindicate induction, said Hume, but no such proof is possible. One might be tempted to argue that nature has always been uniform until now so it is reasonable that it will continue to be so. Using induction to demonstrate the uniformity of nature – in order to vindicate induction in the first place – is obviously circular.

Despite the logical weakness of inductive reasoning, science relies on it. We beef up our induction with scientific explanations. This brings up the matter of what makes a scientific explanation good. It’s tempting to jump to the conclusion that a good explanation is one that reveals the cause of an observed effect. But, as troublemaker David Hume also showed, causality is never really observed directly – only chronology is. Analytic philosophers, logicians, and many quantum physicists are in fact very leery of causality. Carl Hempel, in the 1950s, worked hard on an alternative account of scientific explanation, the covering-law model. It ultimately proved flawed. I’ll spare you the details.

Hempel also noted a symmetry between explanation and prediction. He claimed that the very laws of nature and experimental observations used to explain a phenomenon could also have been used to predict that phenomenon, had it not already been observed. The symmetry holds in many cases, but in the years following Hempel’s valiant efforts it became clear that significant exceptions existed for all of his claims of symmetry in scientific explanation. So in most cases we’re really left with no option for explanation other than causality.

Beyond deduction and the simple more-of-the-same type of induction, we’ve been circling around another form of reasoning thought by some to derive from induction but argued by others to be more fundamental than the above-described induction. This is abductive reasoning or inference to the best explanation (synonymous for our purposes though differentiated by some). Inference to the best explanation requires that a theory not merely necessitate the observations but explain them. For this I’ll use an example from Samir Okasha of the University of Bristol.

Who moved the cheese?
The cheese disappeared from the cupboard last night, except for a few crumbs. The family were woken by scratching noises coming from the kitchen. How do we explain the phenomenon of missing cheese? Sherlock Holmes would likely claim he deduced that a mouse had crawled up the cupboard and taken the cheese. But no deduction is involved. Nor is induction as described above. Sherlock would actually be inferring, from the available evidence, that among the possible explanations for these observations, the mouse was the best theory. The cheese could have vanished through a non-uniformity of nature, or the maid may have stolen it; but Holmes judged the mouse explanation best.
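Okasha’s example can be caricatured in code. This is a hedged sketch – the candidate hypotheses and all the probabilities are invented, not drawn from Okasha – but it shows the shape of inference to the best explanation: rank rival hypotheses by how well each accounts for all of the evidence at once.

```python
# Clues from the kitchen, and how strongly each hypothesis predicts each clue.
# All likelihood values are invented for illustration.
evidence = ["cheese_gone", "crumbs_left", "scratching_noises"]

likelihoods = {
    "mouse":         {"cheese_gone": 0.9, "crumbs_left": 0.8, "scratching_noises": 0.7},
    "thieving_maid": {"cheese_gone": 0.9, "crumbs_left": 0.1, "scratching_noises": 0.05},
    "nature_lapsed": {"cheese_gone": 0.5, "crumbs_left": 0.5, "scratching_noises": 0.5},
}

def support(hypothesis: str) -> float:
    """Joint likelihood of all the evidence under one hypothesis."""
    p = 1.0
    for clue in evidence:
        p *= likelihoods[hypothesis][clue]
    return p

best = max(likelihoods, key=support)
print(best)  # mouse
```

Real accounts of abduction dispute what “best” should mean – simplicity, loveliness, probability – a can of worms the post returns to shortly.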

Inference to the best explanation is particularly important when science deals with unobservable entities. Electrons are the poster children for unobservable entities that most scientists describe as real. Other entities useful in scientific theories are given less credence. Scientists postulate such entities as components of a theory; and many such theories enjoy great predictive success. The best explanation of their predictive success is often that the postulated unobservables are in fact real. Likewise, the theory’s explanatory success, while relying on unobservables, argues that the theory is valid. The no miracles argument maintains that if the unobservable entities are actually not present in the world, then the successfully predicted phenomena would be unexplained miracles. Neutrinos were once unobservable, as were quarks and Higgs bosons. Note that “observable” here is used loosely. Some might prefer “detectable”; but that distinction opens another can of philosophical worms.

More worms emerge when we attempt to define “best” in this usage. Experimenters will have different criteria as to what makes an explanation good. For some simplicity is best, for others loveliness or probability. This can of worms might be called the problem of theory choice. For another time, maybe.

For a more current example, consider the Higgs boson. Before its recent discovery, physicists didn’t infer that all Higgs particles would have a mass of 126 GeV from prior observations of other Higgs particles having that mass, since there had been no observations of the Higgs at all. Nor did they use any other form of simple induction. They inferred that the Higgs must exist as the best explanation of other observations, and that if the Higgs did exist, it would have a mass in that range. Bingo – and it did.

The school of thought most suspicious of unobservable entities is called empiricism. In contrast, those at peace with deep use of inference to the best explanation are dubbed scientific realists. Those leaning toward empiricism (few would identify fully with either label) cite two classic epistemological complaints against scientific realism: underdetermination of theory by data and pessimistic meta-induction. All theories are, to some degree, vulnerable to competing theories that explain the same observations – perhaps equally well (underdetermination). Empiricists feel that the degree of explanatory inference entailed in string theory and some of Andrei Linde’s work is dangerously underdetermined. The pessimistic meta-induction argument, in simplest form, says that science has been wrong about unobservables many times in the past and therefore, by induction, is probably wrong this time. In summary, empiricists assert that inference to the best explanation wanders too far beyond solid evidential grounds and leads to metaphysical speculation. Andrei Linde, though he doesn’t say so explicitly, sees inference to the best explanation as scientifically rational and essential to a mature theory of universal inflation.
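Underdetermination can be shown in miniature with a contrived numerical example (mine, not the empiricists’): two “theories” that agree on every data point gathered so far yet disagree about the very next observation.

```python
# Evidence collected so far: x values where both theories have been checked.
data_x = [0, 1, 2, 3]

def theory_a(x: int) -> int:
    # Theory A: y = x
    return x

def theory_b(x: int) -> int:
    # Theory B: y = x + x(x-1)(x-2)(x-3) -- built to vanish on the data
    return x + x * (x - 1) * (x - 2) * (x - 3)

# Both theories fit all existing evidence exactly...
print([theory_a(x) == theory_b(x) for x in data_x])  # [True, True, True, True]
# ...but they diverge on the next unobserved case.
print(theory_a(4), theory_b(4))  # 4 28
```

The data alone cannot decide between them; any preference for theory A rests on extra-evidential criteria like simplicity, which is exactly where the realist and the empiricist part ways.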

With that background, painful as it might be, I’ll be able to explain my thoughts on Andrei Linde’s view of the world, and to analyze his defense of his theory and its unobservables in my next post.

– – –

Biographical material on Andrei Linde from the essay, “A balloon producing balloons producing balloons,” in The Universe, edited by John Brockman, and
Autobiography of Andrei Linde for the Kavli foundation




Can Science Survive?

In my last post I ended with the question of whether science in the pure sense can withstand science in the corporate, institutional, and academic senses. Here’s a bit more on the matter.

Ronald Reagan, pandering to a church group in Dallas, famously said about evolution, “Well, it is a theory. It is a scientific theory only.” (George Bush, often “quoted” as saying this, did not.) Reagan was likely ignorant of the distinction between two uses of the word theory. On the street, “theory” means an unsettled conjecture. In science, a theory – gravitation, for example – is a body of ideas that explains observations and makes predictions. Reagan’s statement fueled years of appeals to teach creationism in public schools, under titles like creation science and intelligent design. While the push for creation science is usually pinned on southern evangelicals, it was UC Berkeley law professor Phillip E. Johnson who brought us intelligent design.

Arkansas was a forerunner in mandating equal time for creation science. But its Act 590 of 1981 (Balanced Treatment for Creation-Science and Evolution-Science Act) was shut down a year later by McLean v. Arkansas Board of Education. Judge William Overton made philosophy of science proud with his set of demarcation criteria. Science, said Overton:

  • is guided by natural law
  • is explanatory by reference to natural law
  • is testable against the empirical world
  • holds tentative conclusions
  • is falsifiable

For earlier thoughts on each of Overton’s five points, see, respectively, Isaac Newton, Adelard of Bath, Francis Bacon, Thomas Huxley, and Karl Popper.

In the late 20th century, religious fundamentalists were just one facet of hostility toward science. Science was also under attack on the political and social fronts, as well as on an intellectual, or epistemic, front.

President Eisenhower, on leaving office in 1961, gave his famous “military industrial complex” speech warning of the “danger that public policy could itself become the captive of a scientific-technological elite.” At about the same time the growing anti-establishment movements – perhaps centered around Vietnam war protests – vilified science for selling out to corrupt politicians, military leaders and corporations. The ethics of science and scientists were under attack.

Also at the same time, independently, an intellectual critique of science emerged claiming that scientific knowledge necessarily contained hidden values and judgments not based in either objective observation (see Francis Bacon) or logical deduction (see René Descartes). French philosophers and literary critics Michel Foucault and Jacques Derrida argued – nontrivially in my view – that objectivity and value-neutrality simply cannot exist; all knowledge has embedded ideology and cultural bias. Sociologists of science (the “strong program”) were quick to agree.

This intellectual opposition to the methodological validity of science, spurred by the political hostility to the content of science, ultimately erupted as the science wars of the 1990s. To many observers, two battles yielded a decisive victory for science against its critics. The first was publication of Higher Superstition by Gross and Levitt in 1994. The second was a hoax in which Alan Sokal submitted a paper claiming that quantum gravity was a social construct along with other postmodern nonsense to a journal of cultural studies. After it was accepted and published, Sokal revealed the hoax and wrote a book denouncing sociology of science and postmodernism.

Sadly, Sokal’s book, while full of entertaining examples of the worst of postmodern critique of science, really defeats only the most feeble of science’s enemies, revealing a poor grasp of some of the subtler and more valid criticism of science. For example, the postmodernists’ point that experimentation is not exactly the same thing as observation has real consequences, something that many earlier scientists themselves – like Robert Boyle and John Herschel – had wrestled with. Likewise, Higher Superstition, in my view, falls far below what we expect from Gross and Levitt. They deal Bruno Latour a well-deserved thrashing for claiming that science is a completely irrational process, and for the metaphysical conceit of holding that his own ideas on scientific behavior are fact while scientists’ claims about nature are not. But beyond that, Gross and Levitt reveal surprisingly poor knowledge of history and philosophy of science. They think Feyerabend is anti-science, they grossly misread Rorty, and waste time on a lot of strawmen.

Following closely  on the postmodern critique of science were the sociologists pursuing the social science of science. Their findings: it is not objectivity or method that delivers the outcome of science. In fact it is the interests of all scientists except social scientists that govern the output of scientific inquiry. This branch of Science and Technology Studies (STS), led by David Bloor at Edinburgh in the late 70s, overplayed both the underdetermination of theory by evidence and the concept of value-laden theories. These scientists also failed to see the irony of claiming a privileged position on the untenability of privileged positions in science. I.e., it is an absolute truth that there are no absolute truths.

While postmodern critique of science and facile politics in STS seem to be having a minor revival, the threats to real science from sociology, literary criticism and anthropology (I don’t mean that all sociology and anthropology are non-scientific) are small. But more subtle and possibly more ruinous threats to science may exist; and they come partly from within.

Modern threats to science seem more related to Eisenhower’s concerns than to the postmodernists. While Ike worried about the influence the US military had over corporations and universities (see the highly nuanced history of James Conant, Harvard President and chair of the National Defense Research Committee), Eisenhower’s concern dealt not with the validity of scientific knowledge but with the influence of values and biases on both the subjects of research and on the conclusions reached therein. Science, when biased enough, becomes bad science, even when scientists don’t fudge the data.

Pharmaceutical research is the present poster child of biased science. Accusations take the form of claims that GlaxoSmithKline knew that Helicobacter pylori caused ulcers – not stress and spicy food – but concealed that knowledge to preserve sales of the blockbuster drugs, Zantac and Tagamet. Analysis of those claims over the past twenty years shows them to be largely unsupported. But it seems naïve to deny that years of pharmaceutical companies’ mailings may have contributed to the premature dismissal by MDs and researchers of the possibility that bacteria could in fact thrive in the stomach’s acid environment. But while Big Pharma may have some tidying up to do, its opponents need to learn what a virus is and how vaccines work.

Pharmaceutical firms generally admit that bias, unconscious and of the selection and confirmation sort – motivated reasoning – is a problem. Amgen scientists recently tried to reproduce results considered landmarks in basic cancer research to study why clinical trials in oncology have such a high failure rate. They reported in Nature that they were able to reproduce the original results in only six of 53 studies. A similar team at Bayer reported that only about 25% of published preclinical studies could be reproduced. That the big players publish analyses of bias in their own field suggests that the concept of self-correction in science is at least somewhat valid, even in cut-throat corporate science.

Some see another source of bad pharmaceutical science in the almost religious adherence to the 5% (±1.96 sigma) definition of statistical significance, probably traceable to R. A. Fisher’s 1926 The Arrangement of Field Experiments. The 5% false-positive probability criterion is arbitrary, but is institutionalized. It can be seen as a classic case of subjectivity being perceived as objectivity because of arbitrary precision. Repeat any experiment often enough and you’ll eventually get statistically significant results within that experiment. Pharma firms now aim to prevent such bias by participating in a registration process that requires researchers to publish findings, good, bad or inconclusive.
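The arbitrariness of the 5% criterion is easy to demonstrate with a quick simulation – a sketch in Python, not tied to any real study. Run many experiments where no effect exists at all, and roughly one in twenty will cross the significance threshold by chance:

```python
import random

random.seed(0)

def experiment(n=100):
    """Flip a fair coin n times; declare 'significance' at the 5%
    level using a normal approximation (|z| > 1.96)."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    mean, sd = n * 0.5, (n * 0.25) ** 0.5
    return abs((heads - mean) / sd) > 1.96

trials = 10_000
rate = sum(experiment() for _ in range(trials)) / trials
print(f"'Significant' results with no real effect: {rate:.1%}")
```

The false-positive rate hovers near 5% by construction: that is the point. A lab that quietly reruns a null experiment until significance appears will, on average, need only about twenty tries.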

Academic research should take note. As is often reported, the dependence of tenure and academic prestige on publishing has taken a toll (“publish or perish”). Publishers like dramatic and conclusive findings, so there’s a strong incentive to publish impressive results – too strong. Competitive pressure on second-tier publishers leads them to publish poor or even fraudulent study results. Those publishers select lax reviewers, incapable of or unwilling to dispute authors. Karl Popper’s falsification model of scientific behavior, in this scenario, is a poor match for actual behavior in science. The situation has led to hoaxes like Sokal’s, but within – rather than across – disciplines. Publication of the nonsensical “Fuzzy, Homogeneous Configurations” by Marge Simpson and Edna Krabappel (cartoon character names) in the Journal of Computational Intelligence and Electronic Systems in 2014 is a popular example. Following Alan Sokal’s line of argument, should we declare the discipline of computational intelligence to be pseudoscience on this evidence?

Note that here we’re really using Bruno Latour’s definition of science – what scientists and related parties do with a body of knowledge in a network, rather than simply the body of knowledge. Should scientists be held responsible for what corporations and politicians do with their knowledge? It’s complicated. When does flawed science become bad science? It’s hard to draw the line; but does that mean no line needs to be drawn?

Environmental science, I would argue, is some of the worst science passing for genuine these days. Most of it exists to fill political and ideological roles. The Bush administration pressured scientists to suppress communications on climate change and to remove the terms “global warming” and “climate change” from publications. In 2005 Rick Piltz resigned from the U.S. Climate Change Science Program claiming that Bush appointee Philip Cooney had personally altered US climate change documents to lessen the strength of their conclusions. In a later congressional hearing, Cooney confirmed having done this. Was this bad science, or just bad politics? Was it bad science for those whose conclusions had been altered not to blow the whistle?

The science of climate advocacy looks equally bad. Lack of scientific rigor in the IPCC is appalling – for reasons far deeper than the hockey stick debate. Given that the IPCC started with the assertion that climate change is anthropogenic and then sought confirming evidence, it is not surprising that the evidence it has accumulated supports the assertion. Compelling climate models, like that of Rick Muller at UC Berkeley, have since given strong support for anthropogenic warming. That gives great support for the anthropogenic warming hypothesis; but gives no support for the IPCC’s scientific practices. Unjustified belief, true or false, is not science.

Climate change advocates, many of whom are credentialed scientists, are particularly prone to mixing bad science with bad philosophy, as when evidence for anthropogenic warming is presented as confirming the hypothesis that wind and solar power will reverse global warming. Stanford’s Mark Jacobson, a pernicious proponent of such activism, does immeasurable damage to his own stated cause with his descent into the renewables fantasy.

Finally, both major climate factions stoop to tying their entire positions to the proposition that climate change has been measured (or not). That is, both sides are in implicit agreement that if no climate change has occurred, then the whole matter of anthropogenic climate-change risk can be put to bed. As a risk man observing the risk vector’s probability/severity axes – and as someone who buys fire insurance though he has a brick house – I think our science dollars might be better spent on mitigation efforts that stand a chance of being effective rather than on 1) winning a debate about temperature change in recent years, or 2) appeasing romantic ideologues with “alternative” energy schemes.

Science survived Abe Lincoln (rain follows the plow), Ronald Reagan (evolution just a theory) and George Bush (coercion of scientists). It will survive Barack Obama (persecution of deniers) and Jerry Brown and Al Gore (science vs. pronouncements). It will survive big pharma, cold fusion, superluminal neutrinos, Mark Jacobson, Brian Greene, and the Stanford propaganda machine. Science will survive bad science because bad science is part of science, and always has been. As Paul Feyerabend noted, Galileo routinely used propaganda, unfair rhetoric, and arguments he knew were invalid to advance his worldview.

Theory on which no evidence can bear is religion. Theory that is indifferent to evidence is often politics. Granting Bloor, for sake of argument, that all theory is value-laden, and granting Kuhn, for sake of argument, that all observation is theory-laden, science still seems to have an uncanny knack for getting the world right. Planes fly, quantum tunneling makes DVD players work, and vaccines prevent polio. The self-corrective nature of science appears to withstand cranks, frauds, presidents, CEOs, generals and professors. As Carl Sagan often said, science should withstand vigorous skepticism. Further, science requires skepticism and should welcome it, both from within and from irksome sociologists.









The Trouble with Strings

Theoretical physicist Brian Greene is brilliant, charming, and silver-tongued. I’m guessing he’s the only Foundational Questions Institute grant awardee who also appears on the Pinterest Gorgeous Freaking Men page. Greene is the reigning spokesman for string theory, a theoretical framework proposing that one-dimensional objects (joined by higher-dimensional objects, e.g., “branes,” in later variants) manifest different vibrational modes to make up all the particles and forces of physics’ standard model. Though its proponents now discourage such usage, many call string theory the grand unification, the theory of everything. Since this includes gravity, string theorists also hold that string theory entails the elusive theory of quantum gravity. String theory has gotten a lot of press over the past few decades in theoretical physics and, through academic celebrities like Greene, in popular media.


Several critics, some of whom once spent time in string theory research, regard it as not a theory at all. They see it as a mere formalism – a potential theory, or a family (a very, very large family) of potential theories, all of which lack confirmable or falsifiable predictions. Lee Smolin, also brilliant, lacks some of Greene’s other attractions. Smolin is best known for his work in loop quantum gravity – roughly speaking, string theory’s main competitor. Smolin also had the admirable nerve to publicly state that, despite the Sokal hoax affair, sociologists have the right and duty to examine the practice of science. His sensibilities on that issue bear directly on the practice of string theory.

Columbia University’s Peter Woit, like Smolin, is a highly vocal critic of string theory. Like Greene and Smolin, Woit is wicked sharp, but Woit’s tongue is more venom than silver. His bare-fisted blog, Not Even Wrong, takes its name from a statement Rudolf Peierls claimed Wolfgang Pauli had made about some grossly flawed theory that made no testable predictions.

The technical details of whether string theory is in fact a theory, and of whether string theorists have made testable predictions – or can, even in principle, ever make them – are great material that one could spend a few years reading full time. Start with the above-mentioned authors and follow their references. Though my qualifications to comment are thin, it seems to me that string theory is at least in principle falsifiable – at least if you accept that repeated failure to detect supersymmetry (required by string theory) at the LHC and future accelerators would count as disconfirmation.

But for this post I’m more interested in a related topic that Woit often covers – not the content of string theory but its practice and its relationship to society.

Regardless of whether it is a proper theory, through successful evangelism by the likes of Greene, string theory has gotten a grossly disproportionate amount of research funding. Is it the spoiled, attention-grabbing child of physics research? A spoiled child for several decades, says Woit – one that deliberately narrowed the research agenda to exclude rivals. What possibly better theory has never seen the light of day because its creator can’t get a university research position? Does string theory coerce and persuade by irrational methods and sleight of hand, as Feyerabend argued was Galileo’s style? Galileo happened to be right of course – at least on some major points.

Since Galileo’s time, the practice of science and its relationship to government, industry, and academic institutions has changed greatly. Gentleman scientists like Priestley, Boyle, Dalton and Darwin have been replaced by foundation-funded university research and narrowly focused corporate science. After Kuhn – or misusing Kuhn – sociologists of science in the 1980s and 90s tried to knock science from its privileged position on the grounds that all science is tainted with cultural values and prejudices. These attacks included claims of white male bias and echoes of Eisenhower’s warnings about the “military industrial complex.” String theory, since it holds no foreseeable military or industrial promise, would seem immune to such charges of bias. I doubt Democrats like string more than Republicans.

Yet, as seen by Smolin and Woit, in string theory Kuhn’s “relevant community” became the mob (see Lakatos on Kuhn/mob) – or perhaps a religion not separated from the state. Smolin and Woit point to several cult aspects of the string theory community. They find it cohesive, monolithic and high-walled – hard both to enter and to leave. It is hierarchical; a few leaders control the direction of the field while its initiates aim to protect the leaders from dissenting views. There is an uncommon uniformity of views on open questions; and evidence is interpreted optimistically. On this view, string theorists yield to Bacon’s idols of the tribe, the cave, and the marketplace. Smolin cites how rarely particle physicists outside string theory are invited to its conferences.

In The Trouble with Physics, Smolin details a particular example of community cohesiveness unbecoming to science. Smolin says even he was, for much of two decades, sucked into the belief that string theory had been proved finite. Only when seeking citations for a historical comparison of approaches in particle physics that he was writing did he find that what he and everyone else assumed to have been proved long ago had no basis. He questioned peers, finding that they too had ignored vigorous skepticism and merely gone with the flow. As Smolin tells it, everyone “knew” that Stanley Mandelstam (UC Berkeley) had proved string theory finite in its early days. Yet Mandelstam himself says he did not. I’m aware that there are other takes on the issue of finiteness that may soften Smolin’s blow; but, in my view, his point about the group’s cohesiveness and its indignation at being challenged still stands.

A telling example of the tendency for string theory to exclude rivals comes from a 2004 exchange on the sci.physics.strings Google group between Luboš Motl and Wolfgang Lerche of CERN, who does a lot of work on strings and branes. Motl pointed to Leonard Susskind’s then recent embrace of “landscapes,” a concept Susskind had dismissed before it became useful to string theory. To this Lerche replied:

“what I find irritating is that these ideas are out since the mid-80s… this work had been ignored (because it didn’t fit into the philosophy at the time) by the same people who now re-“invent” the landscape, appear in journals in this context and even seem to write books about it.  There had always been proponents of this idea, which is not new by any means.. . . the whole discussion could (and in fact should) have been taken place in 1986/87. The main thing what has changed since then is the mind of certain people, and what you now see is the Stanford propaganda machine working at its fullest.”

Can a science department in a respected institution like Stanford in fairness be called a propaganda machine? See my take on Mark Jacobson’s science for my vote. We now have evidence that science can withstand religion. The question for this century might be whether science, in the pure sense, can withstand science in the corporate, institutional, and academic senses.


String theory cartoon courtesy of XKCD.


I just discovered on Woit’s Not Even Wrong a mention of John Horgan’s coverage of Bayesian belief (previous post) applied to string theory. Horgan notes:

“In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that–unlike cancer—does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe.”



My Trouble with Bayes

In past consulting work I’ve wrestled with subjective probability values derived from expert opinion. Subjective probability is an interpretation of probability based on a degree of belief (i.e., hypothetical willingness to bet on a position) as opposed to a value derived from measured frequencies of occurrences (related posts: Belief in Probability, More Philosophy for Engineers). Subjective probability is of interest when failure data is sparse or nonexistent, as was the data on catastrophic loss of a space shuttle due to seal failure. Bayesianism is one form of inductive logic aimed at refining subjective beliefs based on Bayes Theorem and the idea of rational coherence of beliefs. A NASA handbook explains Bayesian inference as the process of obtaining a conclusion based on evidence: “Information about a hypothesis beyond the observable empirical data about that hypothesis is included in the inference.” Easier said than done, for reasons listed below.

Bayes Theorem itself is uncontroversial. It is a mathematical expression relating the probability of A given that B is true to the probability of B given that A is true and the individual probabilities of A and B:

P(A|B) = P(B|A) x P(A) / P(B)

If we’re trying to confirm a hypothesis (H) based on evidence (E), we can substitute H and E for A and B:

P(H|E) = P(E|H) x P(H) / P(E)

To be rationally coherent, you’re not allowed to believe the probability of heads to be .6 while believing the probability of tails to be .5; the chances of all possible outcomes must sum to exactly one. Further, for Bayesians, the logical coherence just mentioned (i.e., avoidance of Dutch book arguments) must also hold across time (diachronic coherence), such that once new evidence E on a hypothesis H is found, your new probability for H should equal your prior conditional probability for H given E.
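The heads/tails example is exactly the setup for a Dutch book. A minimal sketch (a hypothetical bettor, not any real analysis): an agent who prices bets at those incoherent probabilities can be sold a pair of tickets that guarantees her a loss no matter how the coin lands.

```python
# An agent incoherently believes P(heads) = 0.6 and P(tails) = 0.5.
# She will pay up to her believed probability for a ticket that
# pays 1 unit if that outcome occurs. A bookie sells her both:
beliefs = {"heads": 0.6, "tails": 0.5}
cost = sum(beliefs.values())               # she pays 1.1 in total

# Exactly one ticket pays off, whatever the coin does.
nets = {outcome: 1.0 - cost for outcome in beliefs}
for outcome, net in nets.items():
    print(f"{outcome}: net = {net:+.2f}")  # a sure loss either way
```

Because her believed chances sum to more than one, she is guaranteed to lose the excess (0.1) on every flip; that guaranteed loss is what the coherence requirement rules out.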

Plenty of good sources explain Bayesian epistemology and practice far better than I could do here. Bayesianism is controversial in science and engineering circles, for some good reasons. Bayesianism’s critics refer to it as a religion. This is unfair. Bayesianism is, however, like most religions, a belief system. My concern for this post is the problems with Bayesianism that I personally encounter in risk analyses. Adherents might rightly claim that problems I encounter with Bayes stem from poor implementation rather than from flaws in the underlying program. Good horse, bad jockey? Perhaps.

Problem 1. Subjectively objective
Bayesianism is an interesting mix of subjectivity and objectivity. It imposes no constraints on the subject of belief and very few constraints on the prior probability values. Hypothesis confirmation, for a Bayesian, is inherently quantitative, but initial hypothesis probabilities and the evaluation of evidence are purely subjective. For Bayesians, evidence E confirms or disconfirms hypothesis H only after we establish how probable H was in the first place. That is, we start with a prior probability for H. After the evidence, confirmation has occurred if the probability of H given E is higher than the prior probability of H, i.e., P(H|E) > P(H). Conversely, E disconfirms H when P(H|E) < P(H). These equations and their math leave business executives impressed with the rigor of objective calculation while directing their attention away from the subjectivity of both the hypothesis and its initial prior.
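A toy illustration of that confirmation rule, with numbers invented for the example (no real risk analysis behind them): a hypothesis with a subjective prior of 0.1, where the evidence is three times likelier under H than under not-H.

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes Theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

p_h = 0.10                               # subjective prior for H
p_h_given_e = posterior(p_h, p_e_given_h=0.9, p_e_given_not_h=0.3)
print(f"P(H|E) = {p_h_given_e:.2f}")     # 0.25 > 0.10, so E confirms H
```

The arithmetic is objective once the three inputs exist; the point of Problem 1 is that all three inputs were subjective guesses.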

2. Rational formulation of the prior
Problem 2 follows from the above. Paranoid, crackpot hypotheses can still maintain perfect probabilistic coherence. Excluding crackpots, rational thinkers – more accurately, those with whom we agree – still may have an extremely difficult time distilling their beliefs, observations and observed facts of the world into a prior.

3. Conditionalization and old evidence
This is on everyone’s short list of problems with Bayes. In the simplest interpretation of Bayes, old evidence has zero confirming power. If evidence E was on the books long ago and it suddenly comes to light that H entails E, no change in the probability of H follows. This seems odd – to most outsiders anyway. This problem gives rise to the game where we are expected to pretend we never knew about E and then judge how surprising (confirming) E would have been to H had we not known about it. As with the general matter of maintaining the logical coherence required for the Bayesian program, it is extremely difficult to detach your knowledge of E from the rest of your knowledge about the world. In engineering problem solving, discovering that H implies E is very common.

4. Equating increased probability with hypothesis confirmation.
My having once met Hillary Clinton arguably increases the probability that I may someday be her running mate; but few would agree that it is confirming evidence that I will do so. See Hempel’s raven paradox.

5. Stubborn stains in the priors
Bayesians, often citing success in the business of establishing and adjusting insurance premiums, report that the initial subjectivity (discussed in 1, above) fades away as evidence accumulates. They call this the washing-out of priors. The frequentist might respond that with sufficient evidence your belief becomes irrelevant. With historical data (i.e., abundant evidence) they can calculate the probability of an unwanted event in a frequentist way: P = 1 − e^(−RT) (exponential distribution), which is roughly P = RT for small products of failure rate R and exposure time T. When our ability to find new evidence is limited, i.e., for modeling unprecedented failures, the prior does not get washed out.
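Both claims can be sketched numerically (a standard beta-binomial toy model with made-up priors, not insurance data): with sparse evidence two analysts’ priors dominate and they disagree badly; with abundant evidence the priors wash out; and the frequentist exponential formula reduces to P ≈ RT for small RT.

```python
import math

# Beta(a, b) prior on a failure probability; the posterior mean
# after observing k failures in n trials is (a + k) / (a + b + n).
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

optimist = (1, 9)    # prior mean 0.1
pessimist = (9, 1)   # prior mean 0.9

# Sparse evidence (1 failure in 2 trials): priors dominate.
sparse = [posterior_mean(*p, k=1, n=2) for p in (optimist, pessimist)]

# Abundant evidence (300 failures in 1000 trials): priors wash out.
rich = [posterior_mean(*p, k=300, n=1000) for p in (optimist, pessimist)]
print("sparse:", sparse, "rich:", rich)

# Frequentist failure probability over exposure time T at rate R:
R, T = 1e-4, 10.0
p_exact = 1 - math.exp(-R * T)
print(p_exact, R * T)   # nearly equal for small R*T
```

With only two trials the two analysts report roughly 0.17 and 0.83; after a thousand trials both land near 0.30, regardless of where they started. That is exactly why unprecedented failures – where n never grows – leave the prior in charge.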

6. The catch-all hypothesis
The denominator of Bayes Theorem, P(E), in practice, must be calculated as the sum of the probability of the evidence given the hypothesis plus the probability of the evidence given not the hypothesis:

P(E) = [P(E|H) x P(H)] + [P(E|~H) x P(~H)]

But ~H (“not H”) is not itself a valid hypothesis. It is a family of hypotheses likely containing what Donald Rumsfeld famously called unknown unknowns. Thus calculating the denominator P(E) forces you to pretend you’ve considered all contributors to ~H. So Bayesians can be lured into a state of false choice. The famous example of such a false choice in the history of science is Newton’s particle theory of light vs. Huygens’ wave theory of light. Hint: they are both wrong.
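The sensitivity to that catch-all term is easy to see numerically (all numbers invented for illustration): hold the prior and the likelihood under H fixed, and the posterior swings widely with the guess for P(E|~H) – the one quantity that hides the unknown unknowns.

```python
# Invented numbers: prior P(H) = 0.3, likelihood P(E|H) = 0.8.
# Only the guessed catch-all likelihood P(E|~H) varies.
p_h, p_e_h = 0.3, 0.8

posteriors = {}
for p_e_not_h in (0.05, 0.2, 0.6):
    p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)   # total probability
    posteriors[p_e_not_h] = p_e_h * p_h / p_e

for guess, post in posteriors.items():
    print(f"P(E|~H) = {guess}: P(H|E) = {post:.3f}")
```

The same evidence takes H from near-certain to doubtful depending on an input nobody can actually enumerate, which is the false-choice trap.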

7. Deference to the loudmouth
This problem is related to no. 1 above, but has a much more corporate, organizational component. It can’t be blamed on Bayesianism but nevertheless plagues Bayesian implementations within teams. In the group formulation of any subjective probability, normal corporate dynamics govern the outcome. The most senior or deepest-voiced actor in the room drives all assignments of subjective probability. Social influence rules and the wisdom of the crowd succumbs to a consensus building exercise, precisely where consensus is unwanted. Seidenfeld, Kadane and Schervish begin “On the Shared Preferences of Two Bayesian Decision Makers” with the scholarly observation that an outstanding challenge for Bayesian decision theory is to extend its norms of rationality from individuals to groups. Their paper might have been illustrated with the famous photo of the exploding Challenger space shuttle. Bayesianism’s tolerance of subjective probabilities combined with organizational dynamics and the shyness of engineers can be a recipe for disaster of the Challenger sort.

All opinions welcome.
