Archive for category History of Science

Of Mice, Maids and Explanatory Theories

One night in early 1981, theoretical physicist Andrei Linde woke his wife and said, “I think that I know how the universe was born.”

That summer he wrote a paper on the idea, rushing to get it published in an international journal. But in cold-war Russia, it took months for government censors to approve everything that crossed the border. That October Linde was able to give a talk on his theory, which he called new inflation, at a conference on quantum gravity in Moscow attended by Stephen Hawking and similar luminaries. The next day Hawking gave a talk, in English, on Alan Guth’s earlier independent work on cosmic inflation, and Linde was given the task of translating it into Russian in real time, without knowing in advance what Hawking would say. Hawking’s talk explained Guth’s theory and then went on to explain why Linde’s theory was incorrect. So Linde had the painful experience of unfolding several arguments against his own work to an audience of Russian scientists who controlled his budget and career. At the end of the talk, Linde offered to explain why Hawking was wrong. Hawking agreed to listen, then agreed that Linde was right. They became friends and, as Linde puts it, “we were off to the races.”


Linde’s idea was a new twist on a family of theories about cosmic inflation, a framework that encompasses big bang theory. In Linde’s refinement, cosmic inflation continued while the scalar field slowly rolled down. If that isn’t familiar science, don’t worry; I’ll post some links. Linde later reworked his new inflation theory, arriving at what is now known as chaotic inflation. The theory has, as a necessary consequence, a nearly infinite number of parallel universes. To clarify, parallel universes are not a theory of Linde’s, per se. They fall out of a theory designed to explain phenomena we observe, such as the cosmic background radiation.

So the theory involves entities (universes) that are not only unobservable in practice, but unobservable in principle too. Can a theory that makes untestable claims and posits unobservable entities fairly be called scientific? Even if some of its consequences are observable and falsifiable? And how can we justify the judgment we reach on that question? To answer these questions we need some background from the philosophy of science.

The Scientific Method
A simplistic view of the scientific method involves theories that make predictions about the world, testing those theories by experimentation and observation, and then discarding or refining the theories whose predictions fail. At some point in the life of a theory or family of theories, one might judge that a law of nature has been uncovered. Such a law might be, for example, that all copper conducts electricity or that force equals mass times acceleration. Another use for theories is to explain things: the patient complains of shortness of breath only on cold days, and the doctor judges the cause to be episodic bronchial constriction rather than asthma. Here we’re relying on the tight link between explanation and cause. More on that below.

Notice that science, so characterized, doesn’t prove anything; it merely gives evidence for things. Proof uses deduction. It works for geometry, syllogisms and affirming the antecedent, but not for science. You remember the rules: all rocks are mortal; Socrates is a rock; therefore, Socrates is mortal. The conclusion about Socrates follows from his membership in the set of all rocks, about which it is given that they are mortal. Substitute men or any other set, class, or category for rocks and the conclusion remains valid.

Science relies on inferences that are inductive. They typically take the following form: all observed X have been Y; therefore the next observed X will be Y. Or a bolder version: all observed X have been Y; therefore all X are Y. Real science does a better job. It eliminates many claims about future observations of X even before any non-Y instances of X have been found. It was unreasonable to claim that all swans were white even before Australian black swans were discovered; we know how fickle color is in birds. Another example of scientific induction is the conductive power of copper mentioned above: all observed copper conducts electricity, so the next piece of copper found will also conduct.

In the mid-1700s the philosopher David Hume penned a challenge to inductive thought that is still debated. Hume noted that induction assumes the uniformity of nature, something for which there can be no proof. We can easily imagine, says Hume, a universe that is not uniform – one where everything is haphazard and unpredictable. Such universes are of particular interest in Andrei Linde’s theory. Proving the universal uniformity of nature would vindicate induction, said Hume, but no such proof is possible. One might be tempted to argue that nature has always been uniform until now, so it is reasonable to expect it will continue to be so. But using induction to demonstrate the uniformity of nature – in order to vindicate induction in the first place – is obviously circular.

Despite the logical weakness of inductive reasoning, science relies on it, and we beef up our induction with scientific explanations. This raises the question of what makes a scientific explanation good. It’s tempting to conclude that a good explanation is one that reveals the cause of an observed effect. But, as the troublemaker David Hume also showed, causality is never observed directly – only chronology is. Analytic philosophers, logicians, and many quantum physicists are in fact very leery of causality. Carl Hempel, in the 1950s, worked hard on an alternative account of scientific explanation, the covering-law model. It ultimately proved flawed; I’ll spare you the details.

Hempel also noted a symmetry between explanation and prediction: the laws of nature and experimental observations used to explain a phenomenon could also have been used to predict that phenomenon, had it not already been observed. While valid in many cases, in the years following Hempel’s valiant efforts it became clear that there are significant exceptions to all of his claims of symmetry in scientific explanation. So in most cases we’re left with no option for explanation other than causality.

Beyond deduction and the simple more-of-the-same type of induction, we’ve been circling around another form of reasoning – thought by some to derive from induction, argued by others to be more fundamental than induction itself. This is abductive reasoning, or inference to the best explanation (synonymous for our purposes, though differentiated by some). Inference to the best explanation requires that a theory not merely necessitate the observations but explain them. For this I’ll use an example from Samir Okasha of the University of Bristol.

Who moved the cheese?
The cheese disappeared from the cupboard last night, except for a few crumbs, and the family were woken by scratching noises coming from the kitchen. How do we explain the phenomenon of the missing cheese? Sherlock Holmes would likely claim he deduced that a mouse had crawled up the cupboard and taken the cheese. But no deduction is involved, nor induction as described above. Sherlock would actually be inferring, from the available evidence, that among the possible explanations for these observations, the mouse was the best. The cheese could have vanished through a non-uniformity of nature, or the maid may have stolen it; but Holmes judged the mouse explanation best.

Inference to the best explanation is particularly important when science deals with unobservable entities. Electrons are the poster children for unobservable entities that most scientists describe as real. Other entities useful in scientific theories are given less credence. Scientists postulate such entities as components of a theory; and many such theories enjoy great predictive success. The best explanation of their predictive success is often that the postulated unobservables are in fact real. Likewise, the theory’s explanatory success, while relying on unobservables, argues that the theory is valid. The no miracles argument maintains that if the unobservable entities are actually not present in the world, then the successfully predicted phenomena would be unexplained miracles. Neutrinos were once unobservable, as were quarks and Higgs bosons. Note that “observable” here is used loosely. Some might prefer “detectable”; but that distinction opens another can of philosophical worms.

More worms emerge when we attempt to define “best” in this usage. Experimenters will have different criteria as to what makes an explanation good. For some simplicity is best, for others loveliness or probability. This can of worms might be called the problem of theory choice. For another time, maybe.

For a more current example, consider the Higgs boson. Before its recent discovery, physicists didn’t infer that all Higgs particles would have a mass of 126 GeV from prior observations of other Higgs particles having that mass – there had been no observations of the Higgs at all. Nor did they use any other form of simple induction. They inferred that the Higgs must exist as the best explanation of other observations, and that if it did exist, it would have a mass in that range. Bingo – and it did.

The school of thought most suspicious of unobservable entities is called empiricism. In contrast, those at peace with deep use of inference to the best explanation are dubbed scientific realists. Those leaning toward empiricism (few would identify fully with either label) cite two classic epistemological complaints against scientific realism: the underdetermination of theory by data, and pessimistic meta-induction. All theories are, to some degree, vulnerable to competing theories that explain the same observations – perhaps equally well (underdetermination). Empiricists feel that string theory and some of Andrei Linde’s work entail a degree of explanatory inference that leaves them dangerously underdetermined. The pessimistic meta-induction argument, in simplest form, says that science has been wrong about unobservables many times in the past and therefore, by induction, is probably wrong this time. In summary, empiricists assert that inference to the best explanation wanders too far beyond solid evidential grounds and leads to metaphysical speculation. Andrei Linde, though he doesn’t say so explicitly, sees inference to the best explanation as scientifically rational and essential to a mature theory of universal inflation.

With that background, painful as it might be, I’ll be able to explain my thoughts on Andrei Linde’s view of the world, and to analyze his defense of his theory and its unobservables in my next post.

– – –

Biographical material on Andrei Linde is from the essay “A balloon producing balloons producing balloons” in The Universe, edited by John Brockman, and from the autobiography of Andrei Linde for the Kavli Foundation.

My Grandfather’s Science

Velociraptor by Ben Townsend

The pace of technology is breathtaking. For that reason we’re tempted to believe our own time to be the best of times, the worst, the most wise and most foolish, most hopeful and most desperate, etc. And so we insist that our own science and technology be received, for better or worse, in the superlative degree of comparison only. For technology this may be valid. For science, technology’s foundation, perhaps not. Some perspective is humbling.

This may not be your grandfather’s Buick – or his science. This post contemplates my grandfather’s science: the mind-blowing range of scientific progress during his life, which may dwarf the scientific progress of the next century. In terms of altering the way we view ourselves and our relationship to the world, the first half of the 20th century dramatically outpaced the second half.

My grandfather was born in 1898 and lived through nine decades of the 20th century. That is, he saw the first manned airplane flight and the first man on the moon. He also witnessed scientific discoveries that literally changed worldviews.

My grandfather was fascinated by the Mount Wilson observatory because of the role it had played in one of the several scientific discoveries of his youth that rocked not only scientists’ view of nature but everyone’s view of themselves and of reality. These were cosmological blockbusters with metaphysical side effects.

When my grandfather was a teen, the universe was the Milky Way. The Milky Way was all the stars we could see, and it included some cloudy areas called nebulae. Edwin Hubble studied these nebulae when he arrived at Mount Wilson in 1919. Using the brand-new Hooker Telescope, Hubble located Cepheid variables in several nebulae. Cepheids are the “standard candle” stars that let astronomers measure the stars’ distances from Earth. Hubble turned to the Andromeda Nebula, as it was then known, and concluded that it was not glowing gas within the Milky Way but a separate galaxy far away. Really far.
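If you want the arithmetic behind standard candles (a standard textbook relation, not something this post spells out): a Cepheid’s pulsation period reveals its intrinsic brightness, and comparing intrinsic to apparent brightness yields the distance through the distance modulus,

$$m - M = 5 \log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right),$$

where $m$ is the apparent magnitude, $M$ the absolute magnitude inferred from the period, and $d$ the distance in parsecs. A Cepheid with $M = -4$ observed at $m = 21$ must lie at $d = 10^{(21+4+5)/5} = 10^6$ parsecs – over three million light years, far outside any plausible Milky Way.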

In one leap, the universe grew from our little galaxy to about 100,000,000 light years across. That huge figure had been argued before, but it was ruled out in the “Great Debate” between Shapley and Curtis in April 1920. Against earlier arguments that Andromeda was a separate galaxy, Harvard’s Harlow Shapley had convinced most scientists that it was just glowing gas. Assuming galaxies of roughly the same size, Shapley noted that Andromeda would have to be 100 million light years away to subtend the small angle we observe. Most scientists simply could not fathom a universe that big. By 1925 Hubble and his telescope on Mt. Wilson had fixed all that.

Over the next few decades Hubble’s observations showed galaxies far more distant than Andromeda – millions of them. Stranger yet, they showed that the universe was expanding, something that even Albert Einstein did not want to accept.

The big expanding universe so impressed my grandfather that he put Mt. Wilson on his bucket list. His first trip to California in 1981 included a visit there. Nothing known to us today comes close to the cosmological, philosophical and psychological weight of learning, as a steady-state Milky Way believer, that there was a beginning of time and that space is stretching. Well, nothing except the chaotic inflation theory also proposed during my grandfather’s life. The Hubble-era universe grew by three orders of magnitude. Inflation theory asks us to accept hundreds of orders of magnitude more. Popular media doesn’t push chaotic inflation, despite its mind-blowing implications. This could stem from our lacking the high school math necessary to grasp inflation theory’s staggering numbers. The Big Bang and Cosmic Inflation will be tough acts for the 21st century to follow.

Another conceptual hurdle for the early 20th century was evolution. Yes, everyone knows that Darwin wrote in the mid-1800s; but many are unaware of the low status the theory of evolution had in biology at the turn of the century. Biologists accepted that life derived from a common origin, but the mechanism Darwin proposed seemed impossible. In the late 1800s the thermodynamic calculations of Lord Kelvin (William Thomson, an old-earth creationist) conflicted with Darwin’s model of the emergence of biological diversity: Thomson’s 50-million-year-old earth couldn’t begin to accommodate prokaryotes, velociraptors and hominids. Additionally, Darwin didn’t have a discrete (Mendelian) theory of inheritance to allow retention of advantageous traits. The “blending theory of inheritance” then in vogue let such features regress toward the previous mean.

Darwinian evolution was rescued in the early 1900s by the discovery of radioactive decay. In 1913 Arthur Holmes, using radioactive decay as a marker, showed that certain rocks on earth were two billion years old. Evolution now had time to work. At about the same time, Mendel’s 1865 paper was rediscovered. Following Mendel, William Bateson proposed the term genetics in 1903 and the word gene in 1909 to describe the mechanism of inheritance. By 1920, Darwinian evolution and the genetic theory were two sides of the same coin. In just over a decade, 20th century thinkers let scientific knowledge change their self-image and their relationship to the world. The universe was big, the earth was old, and apes were our cousins.

Another “quantum leap” our recent ancestors had to make was quantum physics. It’s odd that we say “quantum leap” to mean a big jump; quanta are extremely small, as are the quantum jumps of electrons. Max Planck kicked off the concept of quanta in 1900, and it got a big boost in 1905 from Einstein. Everyone knows that Einstein revolutionized science with the idea of relativity in 1905. But that same year – in his spare time – he also published papers on Brownian motion and the photoelectric effect (illuminated metals give off electrons). In explaining Brownian motion, Einstein argued that atoms are real, not just a convenient model for chemistry calculations, as was commonly held. In some ways the last topic, the photoelectric effect, was the most profound. As many had done with atoms, Planck considered quanta a convenient fiction. Einstein’s work on the photoelectric effect, for which he later received the Nobel Prize, made quanta real. This was the start of quantum physics.

Relativity told us that light bends and that matter warps space. This was weird stuff, but at least it spared most of the previous century’s theories – things like the atomic theory of matter and electromagnetism. Quantum physics uprooted everything. It overturned the conceptual framework of previous science and even took a bite out of basic rationality. It told us that reality at small scales is nothing like what we perceive. It said that everything – including light, perhaps even time and space – is ultimately discrete, not continuous; nature is digital. Future events can affect the past, and the ball can pass through the wall. Beyond the weird stuff, quantum physics makes accurate and practical predictions. It also makes your iPhone work. My grandfather didn’t have one, but his transistor radio was quantum-powered.

Technology’s current heyday is built on the science breakthroughs of a century earlier. If that seems like a stretch, consider the following. Planck invented the quantum in 1900, Einstein the photon in 1905, and von Lieben the vacuum tube in 1906. Schwarzschild predicted black holes in 1916, a few years before Hubble found foreign galaxies. Georges Lemaitre proposed a big bang in 1927, Dirac antimatter in 1928, and Chadwick the neutron in 1932. Ruska invented the electron microscope the following year, two years before nylon was invented. In 1942 Fermi tested controlled nuclear reactions. Avery identified DNA as the carrier of genes in 1944; Crick and Watson found the double helix in 1953. In 1958 Kilby invented the integrated circuit. Two years later Maiman had a working laser, just before the Soviets put a man in orbit. Gell-Mann invented quarks in 1964. Recombinant DNA, neutron stars, and interplanetary probes soon followed. My grandfather, born in the 1800s, lived to see all of this, along with personal computers, cell phones and GPS. He liked science, and so should you, your kids and your school board.

While recent decades have seen marvelous inventions and cool gadgets, conceptual breakthroughs like those my grandfather witnessed are increasingly rare. It’s time to pay the fiddler. Science education is in crisis. Fewer than half of New York City’s high schools offer a class in physics, and only a third of US high school students take one. Women, African Americans and Latinos are grossly underrepresented in the hard sciences.

Political and social science don’t count. Learn physics, kids. Then teach it to your parents.


Stop Orbit Change Denial Now

April 1, 2016.

Just like you, I grew up knowing that, unless we destroy it, the earth will be around for another five billion years. At least I thought I knew we had a comfortable window to find a new home. That’s what the astronomical establishment led us to believe. Well, it’s not true. There is a very real possibility that long before the sun goes red giant on us, the instability of the multi-body gravitational dynamics at work in the solar system will wreak havoc. Some computer models show such deadly dynamism in as little as a few hundred million years.

One outcome is that Jupiter will pull Mercury off course so that it will cross Venus’s orbit and collide with the earth. “To call this catastrophic is a gross understatement,” says Berkeley astronomer Ken Croswell. Gravitational instability might also hurl Mars from the solar system, thereby warping Earth’s orbit so badly that our planet will be ripped to shreds. If you can imagine nothing worse, hang on to your helmet. In another model, the earth itself is heaved out of orbit and we’re on a cosmic one-way journey into the blackness of interstellar space for eternity. Hasta la vista, baby.

Knowledge of the risk of orbit change isn’t new; awareness is another story. The knowledge goes right back to Isaac Newton. In 1687 Newton concluded that in a two-body system, each body attracts the other with a force (which we do not understand, but call gravity) proportional to the product of their masses and inversely proportional to the square of the distance between them. That is, he gave a mathematical justification for what Kepler had merely inferred from observing the movement of planets. Newton then proposed that every body in the universe attracts every other body according to the same rule, calling it the universal law of gravitation. Newton’s law predicts how bodies would behave if only gravitational forces acted upon them. This cannot be tested in the real world, as there are no such bodies; bodies in the universe are also affected by electromagnetism and the nuclear forces. Thus no one can test Newton’s theory precisely.
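In modern notation (the standard formulation, not Newton’s own), the law reads

$$F = G\,\frac{m_1 m_2}{r^2},$$

where $m_1$ and $m_2$ are the two masses, $r$ the distance between them, and $G$ the gravitational constant that sets the strength of the force.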

Ignoring the other forces of nature, Newton’s law plus simple math lets us predict the future positions of a two-body system given its properties at a specific time. Newton also noted, in Book 3 of his Principia, that predicting the future of a three-body system was an entirely different problem. Many set out to solve the so-called three-body (or generalized n-body) problem. Finally, over two hundred years later, Henri Poincaré – after first wrongly believing he had worked it out, and forfeiting the prize offered by King Oscar of Sweden for a solution – gave mathematical evidence that there can be no analytical solution to the n-body problem. The problem is in the realm of what today is called chaos theory. Even with powerful computers, rounding errors in the numbers used to calculate future paths of planets prevent conclusive results; the butterfly effect takes hold. In a computer planetary model, changing the mass of Mercury by a billionth of a percent might mean the difference between its ultimately being pulled into the sun and its colliding with Venus.
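To make the butterfly effect concrete, here’s a minimal sketch in Python (a toy model with made-up masses, units, and step sizes – not a real planetary ephemeris) that integrates the same three-body system twice, with one mass perturbed by a part in a hundred billion, and reports how far apart the two futures drift:

```python
import numpy as np

G = 1.0  # toy gravitational constant (arbitrary units)

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def final_positions(masses, pos, vel, dt=1e-3, steps=100_000):
    """Leapfrog (kick-drift-kick) integration; returns final positions."""
    pos, vel = pos.copy(), vel.copy()
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos

# A sun-like body and two light planets, in arbitrary toy units.
pos0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
vel0 = np.array([[0.0, 0.0], [0.0, 1.0], [-0.8, 0.0]])
m_a = np.array([1.0, 1e-3, 1e-3])
m_b = m_a.copy()
m_b[1] *= 1.0 + 1e-11  # one part in a hundred billion

drift = np.linalg.norm(final_positions(m_a, pos0, vel0)
                       - final_positions(m_b, pos0, vel0))
print(f"separation between the two runs: {drift:.3e}")
```

Lengthen the run and the separation keeps growing – in a chaotic configuration it grows roughly exponentially – which is why no finite-precision computation can settle whether Mercury eventually hits Venus.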

Too many mainstream astronomers are utterly silent on the issue of potential earth orbit change. Given that the issue of instability has been known since Poincaré, why is academia silent on the matter? Even Carl Sagan, whom I trusted in my youth, seems party to the conspiracy. In Episode 9 of Cosmos, he told us:

“Some 5 billion years from now, there will be a last perfect day on Earth. Then the sun will slowly change and the earth will die. There is only so much hydrogen fuel in the sun, and when it’s almost all converted to helium the solar interior will continue its original collapse… life will be extinguished, the oceans will evaporate and boil, and gush away to space. The sun will become a bloated red giant star filling the sky, enveloping and devouring the planets Mercury and Venus, and probably the earth as well. The inner planets will be inside the sun. But perhaps by then our descendants will have ventured somewhere else.”

He goes on to explain that we are built of star stuff, dodging the whole matter of orbital instability. But there is simply no mechanistic predictability in the solar system to ensure the earth will still be orbiting when the sun goes red giant. As astronomer Caleb Scharf says, “the notion of the clockwork nature of the heavens now counts as one of the greatest illusions of science.” Scharf is one of the bold scientists who have broken with the military-industrial-astronomical complex to spread the truth about earth orbit change.

But for most astronomers, there is a clear denial of the potential of earth orbit change and the resulting doomsday; and this has to stop. Let’s stand with science. It’s time to expose orbit change deniers. Add your name to the list, and join the team to call them out, one by one.


Can Science Survive?

In my last post I ended with the question of whether science in the pure sense can withstand science in the corporate, institutional, and academic senses. Here’s a bit more on the matter.

Ronald Reagan, pandering to a church group in Dallas, famously said about evolution, “Well, it is a theory. It is a scientific theory only.” (George Bush, often “quoted” as saying this, did not.) Reagan was likely ignorant of the distinction between two uses of the word “theory.” On the street, a theory is an unsettled conjecture. In science a theory – gravitation for example – is a body of ideas that explains observations and makes predictions. Reagan’s statement fueled years of appeals to teach creationism in public schools, under titles like creation science and intelligent design. While the push for creation science is usually pinned on southern evangelicals, it was UC Berkeley law professor Phillip E. Johnson who brought us intelligent design.

Arkansas was a forerunner in mandating equal time for creation science. But its Act 590 of 1981 (Balanced Treatment for Creation-Science and Evolution-Science Act) was shut down a year later by McLean v. Arkansas Board of Education. Judge William Overton made philosophy of science proud with his set of demarcation criteria. Science, said Overton:

  • is guided by natural law
  • is explanatory by reference to natural law
  • is testable against the empirical world
  • holds tentative conclusions
  • is falsifiable

For earlier thoughts on each of Overton’s five points, see, respectively, Isaac Newton, Adelard of Bath, Francis Bacon, Thomas Huxley, and Karl Popper.

In the late 20th century, religious fundamentalists were just one facet of hostility toward science. Science was also under attack on the political and social fronts, as well as on an intellectual, or epistemic, front.

President Eisenhower, on leaving office in 1961, gave his famous “military industrial complex” speech, warning of the “danger that public policy could itself become the captive of a scientific technological elite.” At about the same time, the growing anti-establishment movements – perhaps centered around Vietnam war protests – vilified science for selling out to corrupt politicians, military leaders and corporations. The ethics of science and scientists were under attack.

Also at the same time, independently, an intellectual critique of science emerged claiming that scientific knowledge necessarily contained hidden values and judgments not based in either objective observation (see Francis Bacon) or logical deduction (see René Descartes). French philosophers and literary critics Michel Foucault and Jacques Derrida argued – nontrivially in my view – that objectivity and value-neutrality simply cannot exist; all knowledge has embedded ideology and cultural bias. Sociologists of science (the “strong program”) were quick to agree.

This intellectual opposition to the methodological validity of science, spurred by the political hostility to the content of science, ultimately erupted as the science wars of the 1990s. To many observers, two battles yielded a decisive victory for science against its critics. The first was the publication of Higher Superstition by Gross and Levitt in 1994. The second was a hoax in which Alan Sokal submitted a paper full of postmodern nonsense, claiming quantum gravity to be a social construct, to a journal of cultural studies. After it was accepted and published, Sokal revealed the hoax and wrote a book denouncing sociology of science and postmodernism.

Sadly, Sokal’s book, while full of entertaining examples of the worst postmodern critiques of science, really defeats only the most feeble of science’s enemies, revealing a poor grasp of the subtler and more valid criticisms of science. For example, the postmodernists’ point that experimentation is not exactly the same thing as observation has real consequences – something many earlier scientists, like Robert Boyle and John Herschel, had themselves wrestled with. Likewise, Higher Superstition, in my view, falls far below what we expect from Gross and Levitt. They deal Bruno Latour a well-deserved thrashing for claiming that science is a completely irrational process, and for the metaphysical conceit of holding that his own ideas on scientific behavior are fact while scientists’ claims about nature are not. But beyond that, Gross and Levitt reveal surprisingly poor knowledge of the history and philosophy of science. They think Feyerabend is anti-science, they grossly misread Rorty, and they waste time on a lot of strawmen.

Following closely on the postmodern critique of science were the sociologists pursuing the social science of science. Their finding: it is not objectivity or method that delivers the outcome of science; in fact it is the interests of all scientists – except social scientists – that govern the output of scientific inquiry. This branch of Science and Technology Studies (STS), led by David Bloor at Edinburgh in the late 70s, overplayed both the underdetermination of theory by evidence and the concept of value-laden theories. These scientists also failed to see the irony of claiming a privileged position on the untenability of privileged positions in science – i.e., it is an absolute truth that there are no absolute truths.

While the postmodern critique of science and facile politics in STS seem to be having a minor revival, the threats to real science from sociology, literary criticism and anthropology (I don’t mean that all sociology and anthropology are non-scientific) are small. But more subtle and possibly more ruinous threats to science may exist; and they come partly from within.

Modern threats to science seem more related to Eisenhower’s concerns than to the postmodernists’. Ike worried about the influence the US military had over corporations and universities (see the highly nuanced history of James Conant, Harvard president and chair of the National Defense Research Committee). His concern dealt not with the validity of scientific knowledge but with the influence of values and biases on both the subjects of research and on the conclusions reached therein. Science, when biased enough, becomes bad science, even when scientists don’t fudge the data.

Pharmaceutical research is the present poster child of biased science. Accusations take the form of claims that GlaxoSmithKline knew that Helicobacter pylori caused ulcers – not stress and spicy food – but concealed that knowledge to preserve sales of the blockbuster drugs, Zantac and Tagamet. Analysis of those claims over the past twenty years shows them to be largely unsupported. But it seems naïve to deny that years of pharmaceutical companies’ mailings may have contributed to the premature dismissal by MDs and researchers of the possibility that bacteria could in fact thrive in the stomach’s acid environment. But while Big Pharma may have some tidying up to do, its opponents need to learn what a virus is and how vaccines work.

Pharmaceutical firms generally admit that bias – unconscious, of the selection and confirmation sort, i.e., motivated reasoning – is a problem. Amgen scientists recently tried to reproduce results considered landmarks in basic cancer research, to study why clinical trials in oncology have such a high failure rate. They reported in Nature that they were able to reproduce the original results in only six of 53 studies. A similar team at Bayer reported that only about 25% of published preclinical studies could be reproduced. That the big players publish analyses of bias in their own field suggests that the concept of self-correction in science is at least somewhat valid, even in cut-throat corporate science.

Some see another source of bad pharmaceutical science in the almost religious adherence to the 5% (±1.96 sigma) definition of statistical significance, probably traceable to R.A. Fisher’s 1926 The Arrangement of Field Experiments. The 5% false-positive probability criterion is arbitrary, but it is institutionalized – a classic case of subjectivity being perceived as objectivity because of arbitrary precision. Repeat any experiment often enough and you’ll eventually get statistically significant results within some run of it. Pharma firms now aim to prevent such bias by participating in a registration process that requires researchers to publish findings, good, bad or inconclusive.
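A minimal simulation makes the arbitrariness concrete (a toy sketch, not any firm’s actual protocol): run a thousand two-group studies in which the true effect is exactly zero and count how many clear the institutionalized 1.96-sigma bar anyway.

```python
import random

random.seed(1)

def null_experiment(n=50):
    """Simulate a two-group study with zero true effect; return the
    |z| statistic for the difference in group means (known sd = 1)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # standard error of the difference
    return abs(mean_diff) / se

trials = 1000
hits = sum(null_experiment() > 1.96 for _ in range(trials))
print(f"{hits} of {trials} null experiments were 'significant'")
```

Roughly fifty of the thousand null experiments come out “significant.” The 5% false-positive rate is baked into the definition, so enough repetition guarantees publishable-looking results.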

Academic research should take note. As is often reported, the dependence of tenure and academic prestige on publishing has taken a toll (“publish or perish”). Publishers like dramatic and conclusive findings, so there’s a strong incentive to publish impressive results – too strong. Competitive pressure on second-tier publishers leads to their publishing poor or even fraudulent study results. Those publishers select lax reviewers, incapable of or unwilling to dispute authors. Karl Popper’s falsification model of scientific behavior is, in this scenario, a poor match for actual behavior in science. The situation has led to hoaxes like Sokal’s, but within – rather than across – disciplines. The publication of the nonsensical “Fuzzy”, Homogeneous Configurations by Marge Simpson and Edna Krabappel (cartoon character names) in the Journal of Computational Intelligence and Electronic Systems in 2014 is a popular example. Following Alan Sokal’s line of argument, should we declare the discipline of computational intelligence to be pseudoscience on this evidence?

Note that here we’re really using Bruno Latour’s definition of science – what scientists and related parties do with a body of knowledge in a network, rather than simply the body of knowledge itself. Should scientists be held responsible for what corporations and politicians do with their knowledge? It’s complicated. When does flawed science become bad science? It’s hard to draw the line; but does that mean no line needs to be drawn?

Environmental science, I would argue, is some of the worst science passing for genuine these days. Most of it exists to fill political and ideological roles. The Bush administration pressured scientists to suppress communications on climate change and to remove the terms “global warming” and “climate change” from publications. In 2005 Rick Piltz resigned from the U.S. Climate Change Science Program, claiming that Bush appointee Philip Cooney had personally altered US climate change documents to lessen the strength of their conclusions. In a later congressional hearing, Cooney confirmed having done so. Was this bad science, or just bad politics? Was it bad science for those whose conclusions had been altered not to blow the whistle?

The science of climate advocacy looks equally bad. The lack of scientific rigor in the IPCC is appalling – for reasons far deeper than the hockey stick debate. Given that the IPCC started with the assertion that climate change is anthropogenic and then sought confirming evidence, it is not surprising that the evidence it has accumulated supports the assertion. Compelling climate models, like that of Richard Muller at UC Berkeley, have since given strong support to the anthropogenic warming hypothesis; but they give no support to the IPCC’s scientific practices. Unjustified belief, true or false, is not science.

Climate change advocates, many of whom are credentialed scientists, are particularly prone to mixing bad science with bad philosophy, as when evidence for anthropogenic warming is presented as confirming the hypothesis that wind and solar power will reverse global warming. Stanford’s Mark Jacobson, a pernicious proponent of such activism, does immeasurable damage to his own stated cause with his descent into the renewables fantasy.

Finally, both major climate factions stoop to tying their entire positions to the proposition that climate change has been measured (or not). That is, both sides are in implicit agreement that if no climate change has occurred, then the whole matter of anthropogenic climate-change risk can be put to bed. As a risk man observing the risk vector’s probability/severity axes – and as someone who buys fire insurance though he has a brick house – I think our science dollars might be better spent on mitigation efforts that stand a chance of being effective rather than on 1) winning a debate about temperature change in recent years, or 2) appeasing romantic ideologues with “alternative” energy schemes.

Science survived Abe Lincoln (rain follows the plow), Ronald Reagan (evolution just a theory) and George Bush (coercion of scientists). It will survive Barack Obama (persecution of deniers) and Jerry Brown and Al Gore (science vs. pronouncements). It will survive big pharma, cold fusion, superluminal neutrinos, Mark Jacobson, Brian Greene, and the Stanford propaganda machine. Science will survive bad science because bad science is part of science, and always has been. As Paul Feyerabend noted, Galileo routinely used propaganda, unfair rhetoric, and arguments he knew were invalid to advance his worldview.

Theory on which no evidence can bear is religion. Theory that is indifferent to evidence is often politics. Granting Bloor, for sake of argument, that all theory is value-laden, and granting Kuhn, for sake of argument, that all observation is theory-laden, science still seems to have an uncanny knack for getting the world right. Planes fly, quantum tunneling makes DVD players work, and vaccines prevent polio. The self-corrective nature of science appears to withstand cranks, frauds, presidents, CEOs, generals and professors. As Carl Sagan often said, science should withstand vigorous skepticism. Further, science requires skepticism and should welcome it, both from within and from irksome sociologists.


XKCD cartoon courtesy of xkcd.com

 


The Trouble with Strings

Theoretical physicist Brian Greene is brilliant, charming, and silver-tongued. I’m guessing he’s the only Foundational Questions Institute grant awardee who also appears on the Pinterest Gorgeous Freaking Men page. Greene is the reigning spokesman for string theory, a theoretical framework proposing that one-dimensional objects (higher-dimensional in later variants, e.g., “branes”) manifest different vibrational modes to make up all the particles and forces of physics’ standard model. Though its proponents now discourage such usage, many call string theory the grand unification, the theory of everything. Since this includes gravity, string theorists also hold that string theory entails the elusive theory of quantum gravity. String theory has gotten a lot of press over the past few decades in theoretical physics and, through academic celebrities like Greene, in popular media.


Several critics, some of whom once spent time in string theory research, regard it as not a theory at all. They see it as a mere formalism – a potential theory, or a very, very large family of potential theories, all of which lack confirmable or falsifiable predictions. Lee Smolin, also brilliant, lacks some of Greene’s other attractions. Smolin is best known for his work in loop quantum gravity – roughly speaking, string theory’s main competitor. Smolin also had the admirable nerve to state publicly that, despite the Sokal hoax affair, sociologists have the right and duty to examine the practice of science. His sensibilities on that issue bear on the practice of string theory.

Columbia University’s Peter Woit, like Smolin, is a highly vocal critic of string theory. Like Greene and Smolin, Woit is wicked sharp, but Woit’s tongue is more venom than silver. His barefisted blog, Not Even Wrong, takes its name from a statement Rudolf Peierls claimed Wolfgang Pauli had made about some grossly flawed theory that made no testable predictions.

The technical details of whether string theory is in fact a theory, and whether string theorists have made testable predictions or can, in principle, ever make such predictions, are great material on which one could spend a few years reading full time. Start with the above-mentioned authors and follow their references. Though my qualifications to comment are thin, it seems to me that string theory is at least in principle falsifiable – at least if you accept that repeated failure to detect supersymmetry (required for strings) at the LHC or future accelerators would count against it.

But for this post I’m more interested in a related topic that Woit often covers – not the content of string theory but its practice and its relationship to society.

Regardless of whether it is a proper theory, through successful evangelism by the likes of Greene, string theory has gotten a grossly disproportionate amount of research funding. Is it the spoiled, attention-grabbing child of physics research? A spoiled child for several decades, says Woit – one that deliberately narrowed the research agenda to exclude rivals. What possibly better theory has never seen the light of day because its creator can’t get a university research position? Does string theory coerce and persuade by irrational methods and sleight of hand, as Feyerabend argued was Galileo’s style? Galileo happened to be right of course – at least on some major points.

Since Galileo’s time, the practice of science and its relationship to government, industry, and academic institutions have changed greatly. Gentleman scientists like Priestley, Boyle, Dalton and Darwin have been replaced by foundation-funded university research and narrowly focused corporate science. After Kuhn – or misusing Kuhn – sociologists of science in the 1980s and 90s tried to knock science from its privileged position on the grounds that all science is tainted with cultural values and prejudices. These attacks included claims of white male bias and echoes of Eisenhower’s warnings about the “military industrial complex.” String theory, since it holds no foreseeable military or industrial promise, would seem immune to such charges of bias. I doubt Democrats like string theory more than Republicans do.

Yet, as seen by Smolin and Woit, in string theory Kuhn’s “relevant community” became the mob (see Lakatos on Kuhn and the mob) – or perhaps a religion not separated from the state. Smolin and Woit point to several cult-like aspects of the string theory community. They find it cohesive, monolithic and high-walled – hard both to enter and to leave. It is hierarchical: a few leaders control the direction of the field while its initiates aim to protect the leaders from dissenting views. There is an uncommon uniformity of views on open questions, and evidence is interpreted optimistically. On this view, string theorists yield to Bacon’s idols of the tribe, the cave, and the marketplace. Smolin cites how rarely particle physicists outside string theory are invited to its conferences.

In The Trouble with Physics, Smolin details a particular example of community cohesiveness unbecoming to science. Smolin says even he was, for much of two decades, sucked into the belief that string theory had been proved finite. Only when he sought citations for a historical comparison of approaches in particle physics that he was writing did he find that what he and everyone else assumed to have been proved long ago had no basis. He questioned peers, finding that they too had ignored vigorous skepticism and merely gone with the flow. As Smolin tells it, everyone “knew” that Stanley Mandelstam (UC Berkeley) had proved string theory finite in its early days. Yet Mandelstam himself says he did not. I’m aware that there are other takes on the issue of finiteness that may soften Smolin’s blow; but, in my view, his point about group cohesiveness and indignation at being challenged still stands.

A telling example of string theory’s tendency to exclude rivals comes from a 2004 exchange on the sci.physics.strings Google group between Luboš Motl and Wolfgang Lerche of CERN, who does a lot of work on strings and branes. Motl pointed to Leonard Susskind’s then-recent embrace of “landscapes,” a concept Susskind had dismissed before it became useful to string theory. To this Lerche replied:

“what I find irritating is that these ideas are out since the mid-80s… this work had been ignored (because it didn’t fit into the philosophy at the time) by the same people who now re-“invent” the landscape, appear in journals in this context and even seem to write books about it. There had always been proponents of this idea, which is not new by any means… the whole discussion could (and in fact should) have been taken place in 1986/87. The main thing what has changed since then is the mind of certain people, and what you now see is the Stanford propaganda machine working at its fullest.”

Can a science department at a respected institution like Stanford fairly be called a propaganda machine? See my take on Mark Jacobson’s science for my vote. We now have evidence that science can withstand religion. The question for this century might be whether science, in the pure sense, can withstand science in the corporate, institutional, and academic sense.

______________________________

String theory cartoon courtesy of XKCD.

______________________________

I just discovered on Woit’s Not Even Wrong a mention of John Horgan’s coverage of Bayesian belief (previous post) applied to string theory. Horgan notes:

“In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that–unlike cancer—does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe.”
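Horgan’s point is easy to put in numbers (my illustration, not his). Bayes’ theorem says

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}.$$

Suppose some evidence E is three times likelier if hypothesis H is true than if it is false: $P(E \mid H) = 0.9$, $P(E \mid \neg H) = 0.3$. A believer who starts at a prior of $P(H) = 0.5$ ends up at $P(H \mid E) = 0.75$; a skeptic who starts at $P(H) = 0.01$ ends up at about $0.03$. Same evidence, wildly different conclusions – the guessed prior does most of the work.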
