Posts Tagged History of Science

Was Thomas Kuhn Right about Anything?

William Storage – 9/1/2016
Visiting Scholar, UC Berkeley History of Science

Fifty years ago Thomas Kuhn’s The Structure of Scientific Revolutions armed sociologists of science, constructionists, and truth-relativists with five decades of cliché about the political and social dimensions of theory choice and the inherent irrationality of scientific progress. Science has bias, cries the social-justice warrior. Despite actually being a scientist – or at least holding a PhD in physics from Harvard – Kuhn isn’t well received by scientists and science writers. They generally venture into history and philosophy of science as conceived by Karl Popper, the champion of the falsification model of scientific progress.

Kuhn saw Popper’s description of science as a self-congratulatory idealization for researchers. That is, no scientific theory is ever discarded on the first observation conflicting with the theory’s predictions. All theories have anomalous data. Dropping Newtonian mechanics because of anomalies in Mercury’s orbit was unthinkable, especially when, as Kuhn stressed, no better model was available at the time. Einstein said that if Eddington’s experiment had not shown bending of light rays around the sun, “I would have had to pity our dear Lord. The theory is correct all the same.”

Kuhn was wrong about a great many details. Despite the exaggeration of scientific detachment by Popper and the proponents of rational-reconstruction, Kuhn’s model of scientists’ dogmatic commitment to their theories is valid only in novel cases. Even the Copernican revolution is overstated. Once the telescope was in common use and the phases of Venus were confirmed, the philosophical edifices of geocentrism crumbled rapidly in natural philosophy. As Joachim Vadianus observed, seemingly predicting the scientific revolution, sometimes experience really can be demonstrative.

Kuhn seems to have cherry-picked historical cases of the gap between normal and revolutionary science. Some revolutions – DNA and the expanding universe for example – proceeded with no crisis and no battle to the death between the stalwarts and the upstarts. Kuhn’s concept of incommensurability also can’t withstand scrutiny. It is true that Einstein and Newton meant very different things when they used the word “mass.” But Einstein understood exactly what Newton meant by mass, because Einstein had grown up a Newtonian. And Newton, if brought forward in time, while he never could have conceived of Einsteinian mass, would have had no trouble understanding Einstein’s concept of mass from the perspective of general relativity, had Einstein explained it to him.

Likewise, Kuhn’s language about how scientists working in different paradigms truly, not merely metaphorically, “live in different worlds” should go the way of mood rings and lava lamps. Most charitably, we might chalk this up to Kuhn’s terminological sloppiness. He uses “success terms” like “live” and “see,” where he likely means “experience visually” or “perceive.” Kuhn describes two observers, both witnessing the same phenomenon, but “one sees oxygen, where another sees dephlogisticated air” (emphasis mine). That is, Kuhn confuses the descriptions of visual experiences with the actual experiences of observation – to the delight of Steven Shapin, Bruno Latour, and the cultural relativists.

Finally, Kuhn’s notion that theories completely control observation is just as wrong as scientists’ belief that their experimental observations are free of theoretical influence and that their theories are independent of their values.

Despite these flaws, I think Kuhn was on to something. He was right, at least partly, about the indoctrination of scientists into a paradigm discouraging skepticism about their research program. What Wolfgang Lerche of CERN called “the Stanford propaganda machine” for string theory is a great example. Kuhn was especially right in describing science education as presenting science as a cumulative enterprise, relegating failed hypotheses to the footnotes. Einstein built on Newton in the sense that he added more explanations about the same phenomena; but in no way was Newton preserved within Einstein. Failing to see an Einsteinian revolution in any sense just seems akin to a proclamation of the infallibility not of science but of scientists. I was surprised to see this attitude in Steven Weinberg’s recent To Explain the World. Despite excellent and accessible coverage of the emergence of science, he presents a strictly cumulative model of science. While Weinberg only ever mentions Kuhn in footnotes, he seems to be denying that Kuhn was ever right about anything.

For example, in describing general relativity, Weinberg says in 1919 the Times of London reported that Newton had been shown to be wrong. Weinberg says, “This was a mistake. Newton’s theory can be regarded as an approximation to Einstein’s – one that becomes increasingly valid for objects moving at velocities much less than that of light. Not only does Einstein’s theory not disprove Newton’s, relativity explains why Newton’s theory works when it does work.”

This seems a very cagey way of saying that Einstein disproved Newton’s theory. Newtonian dynamics is not an approximation of general relativity, despite their making similar predictions for mid-sized objects at small relative speeds. Kuhn’s point that Einstein and Newton had fundamentally different conceptions of mass is relevant here. Newton’s explanation of his Rule III clearly stresses universality. Newton emphasized the universal applicability of his theory because he could imagine no reason for its being limited by anything in nature. Given that, Einstein should, in terms of explanatory power, be seen as overturning – not extending – Newton, despite the accuracy of Newton for worldly physics.

Weinberg insists that Einstein is continuous with Newton in all respects. But when Eddington showed that light waves from distant stars bent around the sun during the eclipse of 1919, Einstein disproved Newtonian mechanics. Newton’s law of gravitation predicts that gravity would have no effect on light because photons do not have mass. When Einstein showed otherwise he disproved Newton outright, despite the retained utility of Newton for small values of v/c. This is no insult to Newton. Einstein certainly can be viewed as continuous with Newton in the sense of getting scientific work done. But Einsteinian mechanics do not extend Newton’s; they contradict them. This isn’t merely a metaphysical consideration; it has powerful explanatory consequences. In principle, Newton’s understanding of nature was wrong and it gave wrong predictions. Einstein’s appears to be wrong as well; but we don’t yet have a viable alternative. And that – retaining a known-flawed theory when nothing better is on the table – is, by the way, another thing Kuhn was right about.
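As an aside on what “small values of v/c” buys you, here is the standard special-relativistic expansion – my own illustration of the kinematic side, not Weinberg’s light-bending case. The relativistic energy of a moving mass reduces to the rest energy plus the Newtonian kinetic energy, with corrections suppressed by powers of v²/c²:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
       \approx 1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \cdots
\qquad\Longrightarrow\qquad
E = \gamma m c^2 \approx m c^2 + \frac{1}{2} m v^2 + \frac{3}{8}\,\frac{m v^4}{c^2} + \cdots
```

For a jetliner, v/c is about 10⁻⁶, so the correction terms are parts per trillion – which is why Newton keeps his practical utility even while the two theories disagree about what mass and gravity are.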

 



“A few years ago I happened to meet Kuhn at a scientific meeting and complained to him about the nonsense that had been attached to his name. He reacted angrily. In a voice loud enough to be heard by everyone in the hall, he shouted, ‘One thing you have to understand. I am not a Kuhnian.’” – Freeman Dyson, The Sun, The Genome, and The Internet: Tools of Scientific Revolutions

 


Feynman as Philosopher

When a scientist is accused of scientism, the common response is a rant against philosophy charging that philosophers of science don’t know how science works.  For color, you can appeal to the authority of Richard Feynman:

“Philosophy of science is about as useful to scientists as ornithology is to birds.” – Richard Feynman

But Feynman never said that. If you have evidence, please post it here. Evidence. We’re scientists, right?

Feynman’s hostility to philosophy is often reported, but without historical basis. His comment about Spinoza’s propositions not being confirmable or falsifiable deals specifically with Spinoza and metaphysics, not epistemology. Feynman actually seems to have had a keen interest in epistemology and philosophy of science.

People cite a handful of other Feynman moments to show his hostility to philosophy of science. In his 1966 National Science Teachers Association lecture, he uses the term “philosophy of science” when he points out how Francis Bacon’s empiricism does not capture the nature of science. Nor do textbooks about scientific method, he says. Beyond this sort of thing I find little evidence of Feynman’s anti-philosophy stance.

But I find substantial evidence of Feynman as philosopher of science. For example, his thoughts on multiple derivability of natural laws and his discussion of robustness of theory show him to be a philosophical methodologist. In “The Character of Physical Law”, Feynman is in line with philosophers of science of his day:

“So the first thing we have to accept is that even in mathematics you can start in different places. If all these various theorems are interconnected by reasoning there is no real way to say ‘these are the most fundamental axioms’, because if you were told something different instead you could also run the reasoning the other way.”

Further, much of his 1966 NSTA lecture deals with the relationship between theory, observation and making explanations. A tape of that talk was my first exposure to Feynman, by the way. I’ll never forget the story of him asking his father why the ball rolled to the back of the wagon as the wagon lurched forward. His dad’s answer: “That, nobody knows… It’s called inertia.”

Via a twitter post, I just learned of a video clip of Feynman discussing theory choice – a staple of philosophy of science – and theory revision. Now he doesn’t use the language you’d find in Kuhn, Popper, or Lakatos; but he covers a bit of the same ground. In it, he describes two theories with deeply different ideas behind them, both of which give equally valid predictions. He says,

“Suppose we have two such theories. How are we going to describe which one is right? No way. Not by science. Because they both agree with experiment to the same extent…

“However, for psychological reasons, in order to get new theories, these two theories are very far from equivalent, because one gives a man different ideas than the other. By putting the theory in a certain kind of framework you get an idea what to change.”

Not by science alone can theory choice be made, says the scientist Feynman. Philosopher of science Thomas Kuhn caught hell for saying the same. Feynman clearly weighs explanatory power higher than predictive success in the various criteria for theory choice. He then alludes to the shut-up-and-calculate practitioners of quantum mechanics, indicating that this position makes for weak science. He does this with a tale of competing Mayan astronomy theories.

He imagines a Mayan astronomer who had a mathematical model that perfectly predicted full moons and eclipses, but with no concept of space, spheres or orbits. Feynman then supposes that a young man says to the astronomer, “I have an idea – maybe those things are going around and they’re balls of rock out there, and we can calculate how they move.” The astronomer asks the young man how accurately can his theory predict eclipses. The young man said his theory wasn’t developed sufficiently to predict that yet. The astronomer boasts, “we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better.”

Feynman again shows he values a theory’s explanatory power over predictive success. He concludes:

“So it is a problem as to whether or not to worry about philosophies behind ideas.”

So much for Feynman’s aversion to philosophy of science.

 

– – –

Thanks to Ardian Tola @rdntola for finding the Feynman lecture video.


My Grandfather’s Science

Velociraptor by Ben Townsend

The pace of technology is breathtaking. For that reason we’re tempted to believe our own time to be the best of times, the worst, the most wise and most foolish, most hopeful and most desperate, etc. And so we insist that our own science and technology be received, for better or worse, in the superlative degree of comparison only. For technology this may be valid. For science, technology’s foundation, perhaps not. Some perspective is humbling.

This may not be your grandfather’s Buick – or his science. This post contemplates my grandfather’s science – the mind-blowing range of scientific progress during his life. It may dwarf the scientific progress of the next century. In terms of altering the way we view ourselves and our relationship to the world, the first half of the 20th century dramatically outpaced the second half.

My grandfather was born in 1898 and lived through nine decades of the 20th century. That is, he saw the first manned airplane flight and the first man on the moon. He also witnessed scientific discoveries that literally changed worldviews.

My grandfather was fascinated by the Mount Wilson observatory. The reason was the role it had played in one of the several scientific discoveries of his youth that rocked not only scientists’ view of nature but everyone’s view of themselves and of reality. These were cosmological blockbusters with metaphysical side effects.

When my grandfather was a teen, the universe was the Milky Way. The Milky Way was all the stars we could see; and it included some cloudy areas called nebulae. Edwin Hubble studied these nebulae when he arrived at Mount Wilson in 1919. Using the brand new Hooker Telescope at Mt. Wilson, Hubble located Cepheid variables in several nebulae. Cepheids are the “standard candle” stars that allow astronomers to measure their distance from earth. Hubble studied the Andromeda Nebula, as it was then known. He concluded that this nebula was not glowing gas in the Milky Way, but was a separate galaxy far away. Really far.

In one leap, the universe grew from our little galaxy to about 100,000,000 light years across. That huge number had been previously argued but was ruled out in the “Great Debate” between Shapley and Curtis in April 1920. Against earlier arguments that Andromeda was a galaxy, Harvard University’s Harlow Shapley had convinced most scientists that it was just some glowing gas. Assuming galaxies of the same size, Shapley noted that Andromeda would have to be 100 million light years away to occupy the angular distance we observe. Most scientists simply could not fathom a universe that big. By 1925 Hubble and his telescope on Mt. Wilson had fixed all that.
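For the curious, here is a rough sketch of the arithmetic behind the Cepheid method described above – my own illustration, not Hubble’s actual calculation. The period-luminosity (Leavitt law) coefficients are rough modern values, Hubble’s 1920s calibration differed, and the example star is hypothetical.

```python
import math

# A minimal sketch of the Cepheid "standard candle" arithmetic -- my illustration,
# not Hubble's actual calculation. Coefficients are approximate modern values.

def absolute_magnitude(period_days):
    """Approximate Leavitt law for classical Cepheids (V band)."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus: m - M = 5*log10(d_pc) - 5."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical Cepheid: 31-day period, appearing at magnitude 19.5
M = absolute_magnitude(31.0)
d_pc = distance_parsecs(19.5, M)
print(f"M = {M:.2f}, distance = {d_pc:,.0f} parsecs "
      f"(~{d_pc * 3.26 / 1e6:.1f} million light years)")
```

The punchline is that a pulsation period you can time with a clock tells you a star’s true brightness, and comparing that to its apparent brightness tells you how far away it is – far enough, in Andromeda’s case, to put it well outside the Milky Way.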

Over the next few decades Hubble’s observations showed galaxies far more distant than Andromeda – millions of them. Stranger yet, they showed that the universe was expanding, something that even Albert Einstein did not want to accept.

The big expanding universe so impressed my grandfather that he put Mt. Wilson on his bucket list. His first trip to California in 1981 included a visit there. Nothing known to us today comes close to the cosmological, philosophical and psychological weight of learning, as a steady-state Milky Way believer, that there was a beginning of time and that space is stretching. Well, nothing except the chaotic inflation theory also proposed during my grandfather’s life. The Hubble-era universe grew by three orders of magnitude. Inflation theory asks us to accept hundreds of orders of magnitude more. Popular media doesn’t push chaotic inflation, despite its mind-blowing implications. This could stem from our lacking the high school math necessary to grasp inflation theory’s staggering numbers. The Big Bang and Cosmic Inflation will be tough acts for the 21st century to follow.

Another conceptual hurdle for the early 20th century was evolution. Yes, everyone knows that Darwin wrote in the mid-1800s; but many are unaware of the low status the theory of evolution had in biology at the turn of the century. Biologists accepted that life derived from a common origin, but the mechanism Darwin proposed seemed impossible. In the late 1800s the thermodynamic calculations of Lord Kelvin (William Thomson, an old-earth creationist) conflicted with Darwin’s model of the emergence of biological diversity. Thomson’s 50-million year old earth couldn’t begin to accommodate prokaryotes, velociraptors and hominids. Additionally, Darwin didn’t have a discrete (Mendelian) theory of inheritance to allow retention of advantageous traits. The “blending theory of inheritance” then in vogue would let such features regress toward the previous mean.

Darwinian evolution was rescued in the early 1900s by the discovery of radioactive decay. In 1913 Arthur Holmes, using radioactive decay as a marker, showed that certain rocks on earth were two billion years old. Evolution now had time to work. At about the same time, Mendel’s 1865 paper was rediscovered. Following Mendel, William Bateson proposed the term genetics in 1903, and Wilhelm Johannsen coined the word gene in 1909, to describe the mechanism of inheritance. By 1920, Darwinian evolution and the genetic theory were two sides of the same coin. In just over a decade, 20th century thinkers let scientific knowledge change their self-image and their relationship to the world. The universe was big, the earth was old, and apes were our cousins.
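The arithmetic behind such radiometric ages is, in outline, simple – this is a generic sketch, not Holmes’s specific uranium-lead procedure. If a parent isotope decays with constant λ (half-life t₁/₂), and a rock starts with none of the daughter product, then

```latex
N(t) = N_0 e^{-\lambda t}
\quad\Longrightarrow\quad
t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right)
  = \frac{t_{1/2}}{\ln 2}\,\ln\!\left(1 + \frac{D}{P}\right)
```

where P is the parent remaining and D the daughter accumulated. Measuring an isotope ratio in a mineral becomes a way of measuring deep time.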

Another “quantum leap” our recent ancestors had to make was quantum physics. It’s odd that we say “quantum leap” to mean a big jump. Quanta are extremely small, as are the quantum jumps of electrons. Max Planck kicked off the concept of quanta in 1900. It got a big boost in 1905 from Einstein. Everyone knows that Einstein revolutionized science with the idea of relativity in 1905. But that same year – in his spare time – he also published papers on Brownian motion and the photoelectric effect (illuminated metals give off electrons). In explaining Brownian motion, Einstein argued that atoms are real, not just a convenient model for chemistry calculations as was commonly held. In some ways the last topic, the photoelectric effect, was the most profound. Like many had done with atoms, Planck considered quanta a convenient fiction. Einstein’s work on the photoelectric effect, for which he later got the Nobel Prize, made quanta real. This was the start of quantum physics.

Relativity told us that light bends and that matter warps space. This was weird stuff, but at least it spared most of the previous century’s theories – things like the atomic theory of matter and electromagnetism. Quantum physics uprooted everything. It overturned the conceptual framework of previous science and even took a bite out of basic rationality. It told us that reality at small scales is nothing like what we perceive. It said that everything – light, perhaps even time and space – is ultimately discrete, not continuous; nature is digital. Future events can affect the past and the ball can pass through the wall. Beyond the weird stuff, quantum physics makes accurate and practical predictions. It also makes your iPhone work. My grandfather didn’t have one, but his transistor radio was quantum-powered.

Technology’s current heyday is built on the science breakthroughs of a century earlier. If that seems like a stretch consider the following. Planck invented the quantum in 1900, Einstein the photon in 1905, and Von Lieben the vacuum tube in 1906. Schwarzschild predicted black holes in 1916, a few years before Hubble found foreign galaxies. Georges Lemaitre proposed a Big Bang in 1927, Dirac antimatter in 1928, and Chadwick the neutron in 1932. Ruska invented the electron microscope the following year, two years before plastic was invented. In 1942 Fermi demonstrated a controlled nuclear chain reaction. Avery identified DNA as the carrier of genes in 1944; Crick and Watson found the double helix in 1953. In 1958 Kilby invented the integrated circuit. Two years later Maiman had a working laser, just before the Soviets put a man in orbit. Gell-Mann invented quarks in 1964. Recombinant DNA, neutron stars, and interplanetary probes soon followed. My grandfather, born in the 1800s, lived to see all of this, along with personal computers, cell phones and GPS. He liked science and so should you, your kids and your school board.

While recent decades have seen marvelous inventions and cool gadgets, conceptual breakthroughs like those my grandfather witnessed are increasingly rare. It’s time to pay the fiddler. Science education is in crisis. Less than half of New York City’s high schools offer a class in physics and only a third of US high school students take a physics class. Women, African Americans and Latinos are grossly underrepresented in the hard sciences.

Political and social science don’t count. Learn physics, kids. Then teach it to your parents.


Multidisciplinary

In college, fellow cave explorer Ron Simmons found that the harnesses made for rock climbing performed very poorly underground. The cave environment shredded the seams of the harnesses from which we hung hundreds of feet off the ground in the underworld of remote southern Mexico. The conflicting goals of minimizing equipment expenses and avoiding death from equipment failure awakened our innovative spirit.

Bill Storage

We wondered if we could build a better caving harness ourselves. Having access to UVA’s Instron testing machine, Ron hand-stitched some webbing junctions to compare the tensile characteristics of nylon and polyester topstitching thread. His experiments showed too much variation from irregularities in his stitching, so he bought a Singer industrial sewing machine. At that time Ron had no idea how to sew. But he mastered the machine and built fabulous caving harnesses. Ron later developed and manufactured hardware for ropework and specialized gear for cave diving. Curiosity about earth’s last great exploration frontier propelled our cross-disciplinary innovation. Curiosity, imagination and restlessness drive multidisciplinarity.

Soon we all owned sewing machines, making not only harnesses but wetsuits and nylon clothing. We wrote mapping programs to reduce our survey data and invented loop-closure algorithms to optimally distribute errors across a 40-mile cave survey. We learned geomorphology to predict the locations of yet undiscovered caves. Ron was unhappy with the flimsy commercial photo strobe equipment we used underground so he learned metalworking and the electrical circuitry needed to develop the indestructible strobe equipment with which he shot the above photo of me.
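Here is a minimal sketch of the loop-closure idea – not our original code, and simplified to a single loop. Survey legs measured around a closed loop should sum to zero; whatever misclosure remains gets pushed back across the legs, here in proportion to leg length, as in the classic compass-rule (Bowditch) adjustment.

```python
# A minimal sketch of loop-closure adjustment for a single closed survey loop.
# Not our original code; error is distributed in proportion to leg length.

def close_loop(legs):
    """legs: list of (dx, dy, dz) displacement vectors, in meters, around a loop."""
    mis = [sum(component) for component in zip(*legs)]      # misclosure vector
    lengths = [sum(c * c for c in leg) ** 0.5 for leg in legs]
    total = sum(lengths)
    adjusted = []
    for leg, length in zip(legs, lengths):
        # subtract this leg's length-weighted share of the misclosure
        adjusted.append(tuple(c - m * length / total for c, m in zip(leg, mis)))
    return adjusted

loop = [(10.0, 0.2, -1.0), (0.3, 12.0, 0.5), (-10.5, 0.1, 0.8), (0.4, -12.1, -0.2)]
for leg in close_loop(loop):
    print(tuple(round(c, 3) for c in leg))
# The adjusted legs now sum (essentially) to zero around the loop.
```

A real survey network has many interconnected loops, so the production version solves the whole network at once; the principle, though, is just this.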

Fellow caver Bill Stone pushed multidisciplinarity further. Unhappy with conventional scuba gear for underwater caving, Bill invented a multiple-redundant-processor, gas-scrubbing rebreather apparatus that allowed 12-hour dives on a tiny “pony tank” oxygen cylinder. This device evolved into the Cis-Lunar Primary Life Support System later praised by the Apollo 11 crew. Bill’s firm, Stone Aerospace, later developed autonomous underwater vehicles under NASA Astrobiology contracts, for which I conducted probabilistic risk analyses. If there is life beneath the ice of Jupiter’s moon Europa, we’ll need robots like this to find it.

Artemis

My years as a cave explorer and a decade as a systems engineer in aerospace left me comfortable crossing disciplinary boundaries. I enjoy testing the tools of one domain on the problems of another. The Multidisciplinarian is a hobby blog where I experiment with that approach. I’ve tried to use the perspective of History of Science on current issues in Technology (e.g.) and the tools of Science and Philosophy on Business Management and Politics (e.g.).

Terms like interdisciplinary and multidisciplinary get a fair bit of press in tech circles. Their usage speaks to the realization that while intense specialization and deep expertise are essential for research, they are the wrong tools for product design, knowledge transfer, addressing customer needs, and everything else related to society’s consumption of the fruits of research and invention.

These terms are generally shunned by academia for several reasons. One reason is the abuse of the terms in fringe social sciences of the 80s and 90s. Another is that the university system, since the time of Aristotle’s Lyceum, has consisted of silos in which specialists compete for top position. Academic status derives from research, and research usually means specialization. Academic turf protection and the research grant system also contribute. As Gina Kolata noted in a recent NY Times piece, the reward system of funding agencies discourages dialog between disciplines. Disappointing results in cancer research are often cited as an example of sectoral research silos impeding integrative problem solving.

Besides the many examples of silo inefficiencies, we have a long history of breakthroughs made possible by individuals who mastered several skills and integrated them. Galileo, Gutenberg, Franklin and Watt were not mere polymaths. They were polymaths who did something more powerful than putting specialists together in a room. They put ideas together in a mind.

On this view, specialization may be necessary to implement a solution but is insufficient for conceiving of that solution. Lockheed Martin does not design aircraft by putting aerodynamicists, propulsion experts, and stress analysts together in a think tank. It puts them together, along with countless other specialists, and a cadre of integrators, i.e., systems engineers, for whom excessive disciplinary specialization would be an obstacle. Bill Stone has deep knowledge in several sciences, but his ARTEMIS project, a prototype of a vehicle that could one day discover life beneath an ice-covered moon of Jupiter, succeeded because of his having learned to integrate and synthesize.

A famous example from another field is the case of the derivation of the double-helix model of DNA by Watson and Crick. Their advantage in the field, mostly regarded as a weakness before their discovery, was their failure – unlike all their rivals – to specialize in a discipline. This lack of specialization allowed them to move conceptually between disciplines, fusing separate ideas from Avery, Chargaff and Wilkins, thereby scooping front runner Linus Pauling.

Dev Patnaik, leader of Jump Associates, is a strong advocate of the conscious blending of different domains to discover opportunities that can’t be seen through a single lens. When I spoke with Dev at a recent innovation competition our conversation somehow drifted from refrigeration in Nairobi to Ludwig Wittgenstein. Realizing that, we shared a good laugh. Dev expresses pride for having hired MBA-sculptors, psychologist-filmmakers and the like. In a Fast Company piece, Dev suggested that beyond multidisciplinary teams, we need multidisciplinary people.

The silos that stifle innovation come in many forms, including company departments, academic disciplines, government agencies, and social institutions. The smarts needed to solve a problem are often at a great distance from the problem itself. Successful integration requires breaking down both institutional and epistemological barriers.

I recently overheard professor Olaf Groth speaking to a group of MBA students at Hult International Business School. Discussing the Internet of Things, Olaf told the group, “remember – innovation doesn’t go up, it goes across.” I’m not sure what context he had in mind, but it’s a great point regardless. The statement applies equally well to cognitive divides, academic disciplinary boundaries, and corporate silos.

Olaf’s statement reminded me of a very concrete example of a missed opportunity for cross-discipline, cross-division action at Gillette. Gillette acquired both Oral-B, the old-school toothbrush maker, and Braun, the electric appliance maker, in 1984. Gillette then acquired Duracell in 1996. But five years later, Gillette had not found a way into the lucrative battery-powered electric toothbrush market – despite having all the relevant technologies in house, but in different silos. They finally released the CrossAction (ironic name) brush in 2002; but it was inferior to well-established Colgate and P&G products. Innovation initiatives at Gillette were stymied by the usual suspects – principal-agent problems, misuse of financial tools in evaluating new product lines, misuse of platform-based planning, and holding new products to the same metrics as established ones. All that plus the fact that the divisions weren’t encouraged to look across. The three units were adjacent in a list of divisions and product lines in Gillette’s Strategic Report.

Multidisciplinarity (or interdisciplinarity, if you prefer) clearly requires more than a simple combination of academic knowledge and professional skills. Innovation and solving new problems require integrating and synthesizing different repositories of knowledge to frame problems in a real-world context rather than through the lens of a single discipline. This shouldn’t be so hard. After all, we entered the world free of disciplinary boundaries, and we know that fervent curiosity can dissolve them.

……

The average student emerges at the end of the Ph.D. program, already middle-aged, overspecialized, poorly prepared for the world outside, and almost unemployable except in a narrow area of specialization. Large numbers of students for whom the program is inappropriate are trapped in it, because the Ph.D. has become a union card required for entry into the scientific job market. – Freeman Dyson

Science is the organized skepticism in the reliability of expert opinion. – Richard Feynman

Curiosity is one of the permanent and certain characteristics of a vigorous intellect. – Samuel Johnson

The exhortation to defer to experts is underpinned by the premise that their specialist knowledge entitles them to a higher moral status than the rest of us. – Frank Furedi

It is a miracle that curiosity survives formal education. – Albert Einstein

An expert is one who knows more and more about less and less until he knows absolutely everything about nothing. – Nicholas Murray Butler

A specialist is someone who does everything else worse. – Ruggiero Ricci

 

Ron Simmons
Ron Simmons, 1954-2007

 



Stop Orbit Change Denial Now

April 1, 2016.

Just like you, I grew up knowing that, unless we destroy it, the earth would be around for another five billion years. At least I thought I knew we had a comfortable window to find a new home. That’s what the astronomical establishment led us to believe. Well it’s not true. There is a very real possibility that long before the sun goes red giant on us, instability of the multi-body gravitational dynamics at work in the solar system will wreak havoc. Some computer models show such deadly dynamism in as little as a few hundred million years.

One outcome is that Jupiter will pull Mercury off course so that it will cross Venus’s orbit and collide with the earth. “To call this catastrophic is a gross understatement,” says Berkeley astronomer Ken Croswell. Gravitational instability might also hurl Mars from the solar system, thereby warping Earth’s orbit so badly that our planet will be ripped to shreds. If you can imagine nothing worse, hang on to your helmet. In another model, the earth itself is heaved out of orbit and we’re on a cosmic one-way journey into the blackness of interstellar space for eternity. Hasta la vista, baby.

Knowledge of the risk of orbit change isn’t new; awareness is another story. The knowledge goes right back to Isaac Newton. In 1687 Newton concluded that in a two-body system, each body attracts the other with a force (which we do not understand, but call gravity) that is proportional to the product of their masses and inversely proportional to the square of the distance between them. That is, he gave a mathematical justification for what Kepler had merely inferred from observing the movement of planets. Newton then proposed that every body in the universe attracts every other body according to the same rule. He called it the universal law of gravitation. Newton’s law predicted how bodies would behave if only gravitational forces acted upon them. This cannot be tested in the real world, as there are no such bodies. Bodies in the universe are also affected by electromagnetism and the nuclear forces. Thus no one can test Newton’s theory precisely.
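In modern notation – added here only for reference – the law Newton proposed reads:

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

where m₁ and m₂ are the two masses, r the distance between them, and G the constant of proportionality later measured by Cavendish.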

Ignoring the other forces of nature, Newton’s law plus simple math allows us to predict the future position of a two-body system given their properties at a specific time. Newton also noted, in Book 3 of his Principia, that predicting the future of a three-body system was an entirely different problem. Many set out to solve the so-called three-body (or generalized n-body) problem. Finally, over two hundred years later, Henri Poincaré, after first wrongly believing he had worked it out – and forfeiting the prize offered by King Oscar of Sweden for a solution – gave mathematical evidence that there can be no analytical solution to the n-body problem. The problem is in the realm of what today is called chaos theory. Even with powerful computers, rounding errors in the numbers used to calculate future paths of planets prevent conclusive results. The butterfly effect takes hold. In a computer planetary model, changing the mass of Mercury by a billionth of a percent might mean the difference between it ultimately being pulled into the sun and its colliding with Venus.
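To see the butterfly effect in miniature, here is a toy numerical experiment of my own – arbitrary units, a crude fixed-step integrator, and Burrau’s classic “Pythagorean” three-body configuration, nothing like a real ephemeris code. Two copies of the same system are integrated, with one starting coordinate nudged by a part in a billion, and the trajectories part company once close encounters begin.

```python
import math

# Toy demonstration of sensitive dependence in the gravitational n-body problem.
# G = 1, arbitrary units, crude fixed-step kick-drift integration -- illustrative only.

def accelerations(pos, masses):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def integrate(pos, vel, masses, dt, steps):
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):        # kick, then drift
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [3.0, 4.0, 5.0]                          # Burrau's Pythagorean problem
start = [[1.0, 3.0], [-2.0, -1.0], [1.0, -1.0]]   # bodies at rest at triangle corners
rest = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]

a = integrate(start, rest, masses, dt=1e-4, steps=50_000)
nudged = [row[:] for row in start]
nudged[0][0] += 1e-9                              # one part in a billion
b = integrate(nudged, rest, masses, dt=1e-4, steps=50_000)

print(f"initial offset 1e-9; separation of body 1 after integration: {math.dist(a[0], b[0]):.3e}")
# The separation typically grows by many orders of magnitude once close
# encounters begin -- the point, not the precise number, is what matters.
```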

Too many mainstream astronomers are utterly silent on the issue of potential earth orbit change. Given that the issue of instability has been known since Poincaré, why is academia silent on the matter? Even Carl Sagan, whom I trusted in my youth, seems party to the conspiracy. In Episode 9 of Cosmos, he told us:

“Some 5 billion years from now, there will be a last perfect day on Earth. Then the sun will slowly change and the earth will die. There is only so much hydrogen fuel in the sun, and when it’s almost all converted to helium the solar interior will continue its original collapse… life will be extinguished, the oceans will evaporate and boil, and gush away to space. The sun will become a bloated red giant star filling the sky, enveloping and devouring the planets Mercury and Venus, and probably the earth as well. The inner planets will be inside the sun. But perhaps by then our descendants will have ventured somewhere else.”

He goes on to explain that we are built of star stuff, dodging the whole matter of orbital instability. But there is simply no mechanistic predictability in the solar system to ensure the earth will still be orbiting when the sun goes red-giant. As astronomer Caleb Scharf says, “the notion of the clockwork nature of the heavens now counts as one of the greatest illusions of science.” Scharf is one of the bold scientists who’s broken with the military-industrial-astronomical complex to spread the truth about earth orbit change.

But for most astronomers, there is a clear denial of the potential of earth orbit change and the resulting doomsday; and this has to stop. Let’s stand with science. It’s time to expose orbit change deniers. Add your name to the list, and join the team to call them out, one by one.



Can Science Survive?

In my last post I ended with the question of whether science in the pure sense can withstand science in the corporate, institutional, and academic senses. Here’s a bit more on the matter.

Ronald Reagan, pandering to a church group in Dallas, famously said about evolution, “Well, it is a theory. It is a scientific theory only.” (George Bush, often “quoted” as saying this, did not.) Reagan was likely ignorant of the distinction between two uses of the word, theory. On the street, “theory” means an unsettled conjecture. In science a theory – gravitation for example – is a body of ideas that explains observations and makes predictions. Reagan’s statement fueled years of appeals to teach creationism in public schools, using titles like creation science and intelligent design. While the push for creation science is usually pinned on southern evangelicals, it was UC Berkeley law professor Phillip E Johnson who brought us intelligent design.

Arkansas was a forerunner in mandating equal time for creation science. But its Act 590 of 1981 (Balanced Treatment for Creation-Science and Evolution-Science Act) was shut down a year later by McLean v. Arkansas Board of Education. Judge William Overton made philosophy of science proud with his set of demarcation criteria. Science, said Overton:

  • is guided by natural law
  • is explanatory by reference to natural law
  • is testable against the empirical world
  • holds tentative conclusions
  • is falsifiable

For earlier thoughts on each of Overton’s five points, see, respectively, Isaac Newton, Adelard of Bath, Francis Bacon, Thomas Huxley, and Karl Popper.

In the late 20th century, religious fundamentalists were just one facet of hostility toward science. Science was also under attack on the political and social fronts, as well as on an intellectual or epistemic front.

President Eisenhower, on leaving office in 1961, gave his famous “military industrial complex” speech warning of the “danger that public policy could itself become the captive of a scientific technological elite.” At about the same time the growing anti-establishment movements – perhaps centered around Vietnam war protests – vilified science for selling out to corrupt politicians, military leaders and corporations. The ethics of science and scientists were under attack.

Also at the same time, independently, an intellectual critique of science emerged claiming that scientific knowledge necessarily contained hidden values and judgments not based in either objective observation (see Francis Bacon) or logical deduction (See Rene Descartes). French philosophers and literary critics Michel Foucault and Jacques Derrida argued – nontrivially in my view – that objectivity and value-neutrality simply cannot exist; all knowledge has embedded ideology and cultural bias. Sociologists of science ( the “strong program”) were quick to agree.

This intellectual opposition to the methodological validity of science, spurred by the political hostility to the content of science, ultimately erupted as the science wars of the 1990s. To many observers, two battles yielded a decisive victory for science against its critics. The first was publication of Higher Superstition by Gross and Levitt in 1994. The second was a hoax in which Alan Sokal submitted to a journal of cultural studies a paper claiming that quantum gravity is a social construct, padded with other postmodern nonsense. After it was accepted and published, Sokal revealed the hoax and wrote a book denouncing sociology of science and postmodernism.

Sadly, Sokal’s book, while full of entertaining examples of the worst of postmodern critique of science, really defeats only the most feeble of science’s enemies, revealing a poor grasp of some of the subtler and more valid criticism of science. For example, the postmodernists’ point that experimentation is not exactly the same thing as observation has real consequences, something that many earlier scientists themselves – like Robert Boyle and John Herschel – had wrestled with. Likewise, Higher Superstition, in my view, falls far below what we expect from Gross and Levitt. They deal Bruno Latour a well-deserved thrashing for claiming that science is a completely irrational process, and for the metaphysical conceit of holding that his own ideas on scientific behavior are fact while scientists’ claims about nature are not. But beyond that, Gross and Levitt reveal surprisingly poor knowledge of history and philosophy of science. They think Feyerabend is anti-science, they grossly misread Rorty, and waste time on a lot of strawmen.

Following closely  on the postmodern critique of science were the sociologists pursuing the social science of science. Their findings: it is not objectivity or method that delivers the outcome of science. In fact it is the interests of all scientists except social scientists that govern the output of scientific inquiry. This branch of Science and Technology Studies (STS), led by David Bloor at Edinburgh in the late 70s, overplayed both the underdetermination of theory by evidence and the concept of value-laden theories. These scientists also failed to see the irony of claiming a privileged position on the untenability of privileged positions in science. I.e., it is an absolute truth that there are no absolute truths.

While postmodern critique of science and facile politics in STS seem to be having a minor revival, the threats to real science from sociology, literary criticism and anthropology (I don’t mean that all sociology and anthropology are non-scientific) are small. But more subtle and possibly more ruinous threats to science may exist; and they come partly from within.

Modern threats to science seem more related to Eisenhower’s concerns than to the postmodernists. While Ike worried about the influence the US military had over corporations and universities (see the highly nuanced history of James Conant, Harvard President and chair of the National Defense Research Committee), Eisenhower’s concern dealt not with the validity of scientific knowledge but with the influence of values and biases both on the subjects of research and on the conclusions reached therein. Science, when biased enough, becomes bad science, even when scientists don’t fudge the data.

Pharmaceutical research is the present poster child of biased science. Accusations take the form of claims that GlaxoSmithKline knew that Helicobacter pylori caused ulcers – not stress and spicy food – but concealed that knowledge to preserve sales of the blockbuster drugs, Zantac and Tagamet. Analysis of those claims over the past twenty years shows them to be largely unsupported. But it seems naïve to deny that years of pharmaceutical companies’ mailings may have contributed to the premature dismissal by MDs and researchers of the possibility that bacteria could in fact thrive in the stomach’s acid environment. But while Big Pharma may have some tidying up to do, its opponents need to learn what a virus is and how vaccines work.

Pharmaceutical firms generally admit that bias, unconscious and of the selection and confirmation sort – motivated reasoning – is a problem. Amgen scientists recently tried to reproduce results considered landmarks in basic cancer research to study why clinical trials in oncology have such a high failure rate. They reported in Nature that they were able to reproduce the original results in only six of 53 studies. A similar team at Bayer reported that only about 25% of published preclinical studies could be reproduced. That the big players publish analyses of bias in their own field suggests that the concept of self-correction in science is at least somewhat valid, even in cut-throat corporate science.

Some see another source of bad pharmaceutical science as the almost religious adherence to the 5% (±1.96 sigma) definition of statistical significance, probably traceable to RA Fisher’s 1926 The Arrangement of Field Experiments. The 5% false-positive probability criterion is arbitrary, but is institutionalized. It can be seen as a classic case of subjectivity being perceived as objectivity because of arbitrary precision. Repeat an experiment often enough and, by chance alone, some runs will cross the significance threshold. Pharma firms now aim to prevent such bias by participating in a registration process that requires researchers to publish findings, good, bad or inconclusive.
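A quick simulation makes the point concrete – a minimal sketch of my own, not drawn from any of the studies above. When the null hypothesis is true, roughly one comparison in twenty clears the 1.96-sigma bar by luck alone:

```python
import random
import statistics

# Simulate repeated experiments in which the null hypothesis is TRUE, and count
# how often a crude two-sample comparison looks "significant" at the 5% level.
# The z-test below is a simplification; sample size and seed are arbitrary.

def null_experiment(n=30):
    """Two groups drawn from the SAME distribution, so any 'effect' is noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96          # "significant" at roughly the 5% level

random.seed(1)
trials = 10_000
false_positives = sum(null_experiment() for _ in range(trials))
print(f"{false_positives / trials:.1%} of null experiments look 'significant'")
# Expect roughly 5% -- run enough experiments and some will "succeed" by chance.
```

Pre-registration helps precisely because it forces the misses to be reported alongside the lucky hits.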

Academic research should take note. As is often reported, the dependence of publishing on tenure and academic prestige has taken a toll (“publish or perish”). Publishers like dramatic and conclusive findings, so there’s a strong incentive to publish impressive results – too strong. Competitive pressure on 2nd tier publishers leads to their publishing poor or even fraudulent study results. Those publishers select lax reviewers, incapable of or unwilling to dispute authors. Karl Popper’s falsification model of scientific behavior, in this scenario, is a poor match for actual behavior in science. The situation has led to hoaxes like Sokal’s, but within – rather than across – disciplines. Publication of the nonsensical “Fuzzy”, Homogeneous Configurations by Marge Simpson and Edna Krabappel (cartoon character names) by the Journal of Computational Intelligence and Electronic Systems in 2014 is a popular example. Following Alan Sokal’s line of argument, should we declare the discipline of computational intelligence to be pseudoscience on this evidence?

Note that here we’re really using Bruno Latour’s definition of science – what scientists and related parties do with a body of knowledge in a network, rather than simply the body of knowledge. Should scientists be held responsible for what corporations and politicians do with their knowledge? It’s complicated. When does flawed science become bad science? It’s hard to draw the line; but does that mean no line needs to be drawn?

Environmental science, I would argue, is some of the worst science passing for genuine these days. Most of it exists to fill political and ideological roles. The Bush administration pressured scientists to suppress communications on climate change and to remove the terms “global warming” and “climate change” from publications. In 2005 Rick Piltz resigned from the  U.S. Climate Change Science Program claiming that Bush appointee Philip Cooney had personally altered US climate change documents to lessen the strength of their conclusions. In a later congressional hearing, Cooney confirmed having done this. Was this bad science, or just bad politics? Was it bad science for those whose conclusions had been altered not to blow the whistle?

The science of climate advocacy looks equally bad. Lack of scientific rigor in the IPCC is appalling – for reasons far deeper than the hockey stick debate. Given that the IPCC started with the assertion that climate change is anthropogenic and then sought confirming evidence, it is not surprising that the evidence it has accumulated supports the assertion. Compelling climate models, like that of Rick Muller at UC Berkeley, have since given strong support for anthropogenic warming. That gives great support for the anthropogenic warming hypothesis; but gives no support for the IPCC’s scientific practices. Unjustified belief, true or false, is not science.

Climate change advocates, many of whom are credentialed scientists, are particularly prone to mixing bad science with bad philosophy, as when evidence for anthropogenic warming is presented as confirming the hypothesis that wind and solar power will reverse global warming. Stanford’s Mark Jacobson, a pernicious proponent of such activism, does immeasurable damage to his own stated cause with his descent into the renewables fantasy.

Finally, both major climate factions stoop to tying their entire positions to the proposition that climate change has been measured (or not). That is, both sides are in implicit agreement that if no climate change has occurred, then the whole matter of anthropogenic climate-change risk can be put to bed. As a risk man observing the risk vector’s probability/severity axes – and as someone who buys fire insurance though he has a brick house – I think our science dollars might be better spent on mitigation efforts that stand a chance of being effective rather than on 1) winning a debate about temperature change in recent years, or 2) appeasing romantic ideologues with “alternative” energy schemes.

Science survived Abe Lincoln (rain follows the plow), Ronald Reagan (evolution just a theory) and George Bush (coercion of scientists). It will survive Barack Obama (persecution of deniers) and Jerry Brown and Al Gore (science vs. pronouncements). It will survive big pharma, cold fusion, superluminal neutrinos, Mark Jacobson, Brian Greene, and the Stanford propaganda machine. Science will survive bad science because bad science is part of science, and always has been. As Paul Feyerabend noted, Galileo routinely used propaganda, unfair rhetoric, and arguments he knew were invalid to advance his worldview.

Theory on which no evidence can bear is religion. Theory that is indifferent to evidence is often politics. Granting Bloor, for sake of argument, that all theory is value-laden, and granting Kuhn, for sake of argument, that all observation is theory-laden, science still seems to have an uncanny knack for getting the world right. Planes fly, quantum tunneling makes DVD players work, and vaccines prevent polio. The self-corrective nature of science appears to withstand cranks, frauds, presidents, CEOs, generals and professors. As Carl Sagan often said, science should withstand vigorous skepticism. Further, science requires skepticism and should welcome it, both from within and from irksome sociologists.


XKCD cartoon courtesy of xkcd.com

 



The Trouble with Strings

Theoretical physicist Brian Greene is brilliant, charming, and silver-tongued. I’m guessing he’s the only Foundational Questions Institute grant awardee who also appears on the Pinterest Gorgeous Freaking Men page. Greene is the reigning spokesman for string theory, a theoretical framework proposing that one-dimensional objects (higher-dimensional ones too, e.g., “branes,” in later variants) manifest different vibrational modes to make up all the particles and forces of physics’ standard model. Though its proponents now discourage such usage, many call string theory the grand unification, the theory of everything. Since this includes gravity, string theorists also hold that string theory entails the elusive theory of quantum gravity. String theory has gotten a lot of press over the past few decades in theoretical physics and, through academic celebrities like Greene, in popular media.


Several critics, some of whom once spent time in string theory research, regard it as not a theory at all. They see it as a mere formalism – a potential theory, or rather a family (a very, very large family) of potential theories, all of which lack confirmable or falsifiable predictions. Lee Smolin, also brilliant, lacks some of Greene’s other attractions. Smolin is best known for his work in loop quantum gravity – roughly speaking, string theory’s main competitor. Smolin also had the admirable nerve to publicly state that, despite the Sokal hoax affair, sociologists have the right and duty to examine the practice of science. His sensibilities on that issue bear on the practice of string theory.

Columbia University’s Peter Woit, like Smolin, is a highly vocal critic of string theory. Like Greene and Smolin, Woit is wicked sharp, but Woit’s tongue is more venom than silver. His barefisted blog, Not Even Wrong, takes its name from a statement Rudolf Peierls claimed Wolfgang Pauli had made about some grossly flawed theory that made no testable predictions.

The technical details of whether string theory is in fact a theory, and whether string theorists have made testable predictions or can, in principle, ever make such predictions, are great material that one could spend a few years reading full time. Start with the above mentioned authors and follow their references. Though my qualifications to comment are thin, it seems to me that string theory is at least in principle falsifiable – at least if you accept that repeated failure to detect supersymmetry (required for strings) at the LHC or future accelerators would count against it.

But for this post I’m more interested in a related topic that Woit often covers – not the content of string theory but its practice and its relationship to society.

Regardless of whether it is a proper theory, through successful evangelism by the likes of Greene, string theory has gotten a grossly disproportionate amount of research funding. Is it the spoiled, attention-grabbing child of physics research? A spoiled child for several decades, says Woit – one that deliberately narrowed the research agenda to exclude rivals. What possibly better theory has never seen the light of day because its creator can’t get a university research position? Does string theory coerce and persuade by irrational methods and sleight of hand, as Feyerabend argued was Galileo’s style? Galileo happened to be right of course – at least on some major points.

Since Galileo’s time, the practice of science and its relationship to government, industry, and academic institutions has changed greatly. Gentleman scientists like Priestley, Boyle, Dalton and Darwin are replaced by foundation-funded university research and narrowly focused corporate science. After Kuhn – or misusing Kuhn – sociologists of science in the 1980s and 90s tried to knock science from its privileged position on the grounds that all science is tainted with cultural values and prejudices. These attacks included claims of white male bias and echoes of Eisenhower’s warnings about the “military industrial complex.” String theory, since it holds no foreseeable military or industrial promise, would seem to have immunity from such charges of bias. I doubt Democrats like string more than Republicans.

Yet, as seen by Smolin and Woit, in string theory, Kuhn’s “relevant community” became the mob (see Lakatos on Kuhn/mob) – or perhaps a religion not separated from the state. Smolin and Woit point to several cult aspects of the string theory community. They find it to be cohesive, monolithic and high-walled – hard both to enter and to leave. It is hierarchical; a few leaders control the direction of the field while its initiates aim to protect the leaders from dissenting views. There is an uncommon uniformity of views on open questions; and evidence is interpreted optimistically. On this view, string theorists yield to Bacon’s idols of the tribe, the cave, and the marketplace. Smolin notes how rarely particle physicists outside of string theory are invited to its conferences.

In The Trouble with Physics, Smolin details a particular example of community cohesiveness unbecoming to science. Smolin says even he was, for much of two decades, sucked into the belief that string theory had been proved finite. Only when he sought citations for a historical comparison of approaches in particle physics that he was writing did he find that what he and everyone else assumed to have been proved long ago had no basis. He questioned peers, finding that they too had set aside vigorous skepticism and merely gone with the flow. As Smolin tells it, everyone “knew” that Stanley Mandelstam (UC Berkeley) had proved string theory finite in its early days. Yet Mandelstam himself says he did not. I’m aware that there are other takes on the issue of finitude that may soften Smolin’s blow; but, in my view, his point about the group’s cohesiveness and its indignation at being challenged still stands.

A telling example of the tendency for string theory to exclude rivals comes from a 2004 exchange on the sci.physics.strings Google group between Luboš Motl and Wolfgang Lerche of CERN, who does a lot of work on strings and branes. Motl pointed to Leonard Susskind’s then recent embrace of “landscapes,” a concept Susskind had dismissed before it became useful to string theory. To this Lerche replied:

“what I find irritating is that these ideas are out since the mid-80s… this work had been ignored (because it didn’t fit into the philosophy at the time) by the same people who now re-“invent” the landscape, appear in journals in this context and even seem to write books about it.  There had always been proponents of this idea, which is not new by any means.. . . the whole discussion could (and in fact should) have been taken place in 1986/87. The main thing what has changed since then is the mind of certain people, and what you now see is the Stanford propaganda machine working at its fullest.”

Can a science department in a respected institution like Stanford in fairness be called a propaganda machine? See my take on Mark Jacobson’s science for my vote. We now have evidence that science can withstand religion. The question for this century might be whether science, in the pure sense, can withstand science in the corporate, institutional, and academic sense.

______________________________

String theory cartoon courtesy of XKCD.

______________________________

I just discovered on Woit’s Not Even Wrong a mention of John Horgan’s coverage of Bayesian belief (previous post) applied to string theory. Horgan notes:

“In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that–unlike cancer—does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe.”
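Horgan’s point is easy to see with a toy calculation – my own illustration, with invented numbers. When the evidence is nearly as probable under the hypothesis as under its negation, the posterior simply parrots whatever prior you guessed:

```python
# Bayes' theorem with weak evidence: the guessed prior, not the data, does the work.
# Numbers are invented for illustration only.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) from Bayes' theorem."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# "Evidence" almost equally probable whether or not the hypothesis is true.
likelihood_h, likelihood_not_h = 0.60, 0.50

for prior in (0.01, 0.50, 0.99):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, likelihood_h, likelihood_not_h):.2f}")
# An enthusiast's prior of 0.99 stays near 0.99; a skeptic's 0.01 stays near 0.01.
```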

