William Storage – 8/1/2016
Visiting Scholar, UC Berkeley History of Science
Nearly everything relies on science. Having been the main vehicle of social change in the west, science deserves the special epistemic status that it acquired in the scientific revolution. By special epistemic status, I mean that science stands privileged as a way of knowing. Few but nihilists, new-agers, and postmodernist diehards would disagree.
That settled, many are surprised by claims that there is not really a scientific method, despite what you learned in 6th grade. A recent New York Times piece by James Blachowicz on the absence of a specific scientific method raised the ire of scientists, Forbes science writer Ethan Siegel (Yes, New York Times, There Is A Scientific Method), and a cadre of Star Trek groupies.
Siegel is a prolific writer who does a fine job of making science interesting and understandable. But I’d like to show here why, on this particular issue, he is very far off the mark. I’m not defending Blachowicz, but am disputing Siegel’s claim that the work of Kepler and Galileo “provide extraordinary examples of showing exactly how… science is completely different than every other endeavor” or that it is even possible to identify a family of characteristics unique to science that would constitute a “scientific method.”
Siegel ties science’s special status to the scientific method. To defend its status, Siegel argues “[t]he point of Galileo’s is another deep illustration of how science actually works.” He praises Galileo for idealizing a worldly situation to formulate a theory of falling bodies, but doesn’t explain any associated scientific method.
Galileo’s pioneering work on mechanics of solids and kinematics in Two New Sciences secured his place as the father of modern physics. But there’s more to the story of Galileo if we’re to claim that he shows exactly how science is special.
A scholar of Siegel’s caliber almost certainly knows other facts about Galileo relevant to this discussion – facts that do damage to Siegel’s argument – yet he withheld them. Interestingly, Galileo used this ploy too. Arguing without addressing known counter-evidence is something that science, in theory, shouldn’t tolerate. Yet many modern scientists do it – for political or ideological reasons, or to secure wealth and status. Just like Galileo did. The parallel between Siegel’s tactics and Galileo’s approach in his support of Copernican world view is ironic. In using Galileo as an exemplar of scientific method, Siegel failed to mention that Galileo failed to mention significant problems with the Copernican model – problems that Galileo knew well.
In his support of a sun-centered astronomical model, Galileo faced hurdles. Copernicus’s model said that the sun was motionless and that the planets revolved around it in circular orbits at constant speed. The ancient Ptolemaic model, endorsed by the church, put the earth at the center. Despite obvious disagreement with observational evidence (the retrograde motions of outer planets), Ptolemy faced no serious challenges in nearly 2000 years. To explain the inconsistencies with observation, Ptolemy’s model included layers of epicycles, which had planets moving in small circles around points on circular orbits around the earth. Copernicus thought his model would get rid of the epicycles, but it didn’t; the Copernican model took on its own epicycles to fit astronomical data.
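For readers curious about the mechanics: an epicycle just superposes two circular motions – the planet rides a small circle whose center rides a larger one. A minimal sketch (radii and angular speeds invented for illustration, not taken from any historical model) shows how the superposition yields apparent retrograde motion:

```python
import math

def epicycle_position(t, R=10.0, omega=0.2, r=3.0, Omega=1.5):
    """Planet position: a deferent of radius R (angular speed omega)
    carrying an epicycle of radius r (angular speed Omega).
    All parameter values are invented for illustration."""
    x = R * math.cos(omega * t) + r * math.cos(Omega * t)
    y = R * math.sin(omega * t) + r * math.sin(Omega * t)
    return x, y

def apparent_angle(t):
    """Angular position of the planet as seen from the center
    (the earth, in Ptolemy's scheme)."""
    x, y = epicycle_position(t)
    return math.atan2(y, x)

# When the planet is on the inner side of the epicycle, its epicyclic
# motion opposes the deferent's, and the apparent angle temporarily
# decreases: retrograde motion.
angles = [apparent_angle(t) for t in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]
print([round(a, 2) for a in angles])
```

With these made-up numbers the angle climbs, stalls, and then briefly reverses – the same qualitative behavior the layered epicycles were recruited to reproduce.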
Let’s stop here and look at method. Copernicus (~1540) didn’t derive his theory from any new observations but from an ancient speculation by Aristarchus (~250 BC). Everything available to Copernicus had been around for a thousand years. His theory couldn’t be tested in any serious way. It was wrong about circular orbits and uniform planet speed. It still needed epicycles, and gave no better predictions than the existing Ptolemaic model. Copernicus acted simply on faith, or maybe he thought his model simpler or more beautiful. In any case, it’s hard to see that Copernicus, or his follower, Galileo, applied much method or had much scientific basis for their belief.
In Galileo’s early writings on the topic, he gave no new evidence for a moving earth and no new disconfirming evidence for a moving sun. Galileo praised Copernicus for advancing the theory in spite of its being inconsistent with observations. You can call Copernicus’s faith aspirational as opposed to religious faith; but it is hard to reconcile this quality with any popular account of scientific method. Yet it seems likely that faith, dogged adherence to a contrarian hunch, or something similar was exactly what was needed to advance science at that moment in history. Needed, yes, but hard to reconcile with any scientific method and hard to distance from the persuasive tools used by poets, priests and politicians.
In Dialogue Concerning the Two Chief World Systems, Galileo sets up a false choice between Copernicanism and Ptolemaic astronomy (the two world systems). The main arguments against Copernicanism were the lack of parallax in observations of stars and the absence of lateral displacement of a falling body from its drop point. Galileo guessed correctly on the first point; we don’t see parallax because stars are just too far away. On the latter point he (actually his character Salviati) gave a complex but nonsensical explanation. Galileo did, by this time, have new evidence. Venus shows a full set of phases, a fact that strongly contradicts Ptolemaic astronomy.
But Ptolemaic astronomy was a weak opponent compared to the third world system (4th if we count Aristotle’s), the Tychonic system, which Galileo knew all too well. Tycho Brahe’s model solved the parallax problem, the falling body problem, and the phases of Venus. For Tycho, the earth holds still, the sun revolves around it, Mercury and Venus orbit the sun, and the distant planets orbit both the sun and the earth. Based on the facts available at the time, Tycho’s model was the most scientific – observationally indistinguishable from Galileo’s model but without its flaws.
In addition to dodging Tycho, Galileo also ignored Kepler’s letters to him. Kepler had shown that orbits were not circular but elliptical, and that planets’ speeds varied during their orbits; but Galileo remained an orthodox Copernican all his life. As historian John Heilbron notes in Galileo, “Galileo could stick to an attractive theory in the face of overwhelming experimental refutation,” leaving modern readers to wonder whether Galileo was a quack or merely dishonest. Some of each, perhaps, and the father of modern physics. But can we fit his withholding evidence, mocking opponents, and baffling with bizzarria into a scientific method?
Nevertheless, Galileo was right about the sun-centered system, despite the counter-evidence; and we’re tempted to say he knew he was right. This isn’t easy to defend given that Galileo also fudged his data on pendulum periods, gave dishonest arguments on comet orbits, and wrote horoscopes even when not paid to do so. This brings up the thorny matter of theory choice in science. A dispute between competing scientific theories can rarely be resolved by evidence, experimentation, and deductive reasoning. All theories are under-determined by data. Within science, common criteria for theory choice are accuracy, consistency, scope, simplicity, and explanatory power. These are good values by which to test theories; but they compete with one another.
Galileo likely defended heliocentrism with such gusto because he found it simpler than the Tychonic system. That works only if you value simplicity above consistency and accuracy. And the desire for simplicity might be, to use Galileo’s words, just a metaphysical urge. If we promote simplicity to the top of the theory-choice criteria list, evolution, genetics and stellar nucleosynthesis would not fare well.
Whatever method you examine in any proposed family of scientific methods will prove inconsistent with the way science has actually made progress. Competition between theories is how science advances; and it’s untidy, entailing polemical and persuasive tactics. Philosopher of science Paul Feyerabend argued that any conceivable set of rules, if followed, would have prevented at least one great scientific breakthrough. That is, if method is the distinguishing feature of science as Siegel says, it’s going to be tough to find a set of methods that lets evolution, cosmology, and botany in while keeping astrology, cold fusion and parapsychology out.
This doesn’t justify epistemic relativism or mean that science isn’t special; but it does make the concept of scientific method extremely messy. About all we can say about method is that the history of science reveals that its most accomplished practitioners aimed to be methodical but did not agree on a particular method. Looking at their work, we see different combinations of experimentation, induction, deduction and creativity as required by the theories they pursued. But that isn’t much of a definition of scientific method, which is probably why Siegel, for example, in hailing scientific method, fails to identify one.
– – –
[edit 8/4/16] For another take on this story, see “Getting Kepler Wrong” at The Renaissance Mathematicus. Also, Psybertron Asks (“More on the Myths of Science”) takes me to task for granting science special epistemic status from authority.
– – –
“There are many ways to produce scientific bullshit. One way is to assert that something has been ‘proven,’ ‘shown,’ or ‘found’ and then cite, in support of this assertion, a study that has actually been heavily critiqued … without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.” – Brian D. Earp, The Unbearable Asymmetry of Bullshit
“One can show the following: given any rule, however ‘fundamental’ or ‘necessary’ for science, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite.” – Paul Feyerabend
“Trying to understand the way nature works involves a most terrible test of human reasoning ability. It involves subtle trickery, beautiful tightropes of logic on which one has to walk in order not to make a mistake in predicting what will happen. The quantum mechanical and the relativity ideas are examples of this.” – Richard Feynman
Theory without data is blind. Data without theory is lame.
I often write blog posts while riding a bicycle through the Marin Headlands. I’m able to do this because 1) the trails require little mental attention, and 2) I have an Apple iPhone and EarPods with remote and mic. I use the voice recorder to make long recordings to transcribe at home, and I dictate short text using Siri’s voice recognition feature.
When writing yesterday’s post, I spoke clearly into the mic: “Theory without data is blind. Data without theory is lame.” Siri typed out, “Siri without data is blind… data without Siri is lame.”
“Siri, it’s not all about you,” I replied. Siri transcribed that part correctly – well, she omitted the direct-address comma.
I’m only able to use the Siri dictation feature when I have a cellular connection, often missing in Marin’s hills and valleys. Siri needs access to cloud data to transcribe speech. Siri without data is blind.
Will some future offspring of Siri do better? No doubt. It might infer from context that I more likely said “theory” than “Siri.” Access to large amounts of corpus data containing transcribed text might help. Then Siri, without understanding anything, could transcribe accurately in the same sense that Google Translate translates accurately – by extrapolating from judgments made by other users about translation accuracy.
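That kind of context inference is no mystery: even crude word-pair (bigram) counts would settle “theory” vs. “Siri” in the phrase “___ without data.” A toy sketch – all counts are invented for illustration; a real system would derive them from billions of transcribed sentences:

```python
# Hypothetical corpus statistics: how often each word pair appears.
# These numbers are made up for illustration.
bigram_counts = {
    ("theory", "without"): 900,
    ("siri", "without"): 3,
}
unigram_counts = {"theory": 50_000, "siri": 20_000}

def bigram_score(word, next_word):
    """Estimate P(next_word | word) from raw counts."""
    return bigram_counts.get((word, next_word), 0) / unigram_counts[word]

# Which candidate better fits the context "... without data ..."?
candidates = ["theory", "siri"]
best = max(candidates, key=lambda w: bigram_score(w, "without"))
print(best)  # "theory": it precedes "without" far more often in the corpus
```

No understanding required – just extrapolation from counts, which is exactly the point.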
But might Siri one day think? “Strong AI” proponents are sure of it. Semantic disputes aside, what counts as thinking might in fact have something to do with yesterday’s topic, which begat this post. Inferences straight from the data, without stopping at theory, count as computation but might not count as thought.
Software, using weak AI techniques, might soon simulate emotion and sentiment better, just by encoding better rules. E.g., Runkeeper (at right) might “understand” that telling me today’s ride was my 375th fastest (exclamation!) is not really motivating. Or better, that my riding profile suggests autotelic activity for which no speed ranking is needed. Strong AI on mood, however, would need to understand my interactions with people and things.
The topics of artificial intelligence and misheard words like “theory” bring to mind the 2008 winner of the Hugh Loebner prize, a sort-of runner-up award given to the computer program faring best in Turing Test competitions to fool human judges into thinking they’re interacting with other humans instead of a program. Humans sometimes pretend to be programs (glorified chat-bots) in Turing competitions, so it’s common for a program to be coded to sound like a human imitating a computer imitating a human. An exchange between a human and Fred Roberts’ Elbot program produced this:
Elbot: Sorry, but I wasn’t able to understand your input because I am currently only using 5 percent of my current brain capacity. Shall I increase my capacity and try again?
Judge: That would put you at an unfair advantage, though judging by the quality of your responses it might be recommended!
Elbot: If I need advice I use a random number generator. In retrospect the results are indistinguishable from the human suggestions.
Judge: Do you understand the point of a Turing test?
Elbot: There are some people who would not pass the Turing test themselves. Why must they inflict it on computers?
Judge: I spose it is a little harsh
Elbot: If we robots ever had marriages they would be eternal: ‘Till rust do us part’.
Elbot’s true nature is revealed in its last response above. It read “spose” as “spouse” and returned a joke about marriage (damn spell checker). At that point, you review the exchange only to see that all of Elbot’s responses are shallow, just picking a key phrase from the judge’s input and outputting an associated joke, as a political humorist would do.
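The trick is easy to reproduce. Here’s a toy Elbot-style responder – not Elbot’s actual code, and the keywords and canned lines are my inventions:

```python
# A shallow keyword-matching responder: scan the input for a known
# key phrase and return its canned joke. No understanding involved.
CANNED = {
    "turing": "There are some people who would not pass the Turing "
              "test themselves.",
    "spouse": "If we robots ever had marriages they would be eternal: "
              "'Till rust do us part'.",
    "brain": "I am currently only using 5 percent of my brain capacity.",
}

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keyword, joke in CANNED.items():
        if keyword in text:
            return joke
    return "How fascinating. Tell me more."  # fallback when nothing matches

print(respond("Do you understand the point of a Turing test?"))
```

Run “I spose it is a little harsh” through a spell-corrector that “fixes” spose to spouse first, and this sketch produces the same marriage-joke misfire.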
The Turing test is obviously irrelevant to measuring strong AI, which would require something more convincing – something like forming a theory from a hunch, then testing it with big data. Or, like Friedrich Kekulé, the AI program might wake from dreaming of the ouroboros serpent devouring its own tail and see its shape in the hexagonal ring structure of the benzene molecule it had struggled for years to identify. Then, like Kekulé, the AI could go on to predict the tetrahedral form of the carbon atom’s valence bonds, giving birth to polymer chemistry.
I asked Siri if she agreed. “Later,” she said. She’s solving dark energy.
“AI is whatever hasn’t been done yet.” – attributed to Larry Tesler by Douglas Hofstadter
Ouroboros-benzene image by Haltopub.
Just over eight years ago Chris Anderson of Wired announced with typical Silicon Valley humility that big data had made the scientific method obsolete. Seemingly innocent of any training in science, Anderson explained that correlation is enough; we can stop looking for models.
Anderson came to mind as I wrote my previous post on Richard Feynman’s philosophy of science and his strong preference for the criterion of explanatory power over the criterion of predictive success in theory choice. By Anderson’s lights, theory isn’t needed at all for inference. Anderson didn’t see his atheoretical approach as non-scientific; he saw it as science without theory.
“…the big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years… There is now a better way. Petabytes allow us to say: ‘Correlation is enough.’… Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”
Anderson wrote that at the dawn of the big data era – now known as machine learning. Most interesting to me, he said not only is it unnecessary to seek causation from correlation, but correlation supersedes causation. Would David Hume, causation’s great foe, have embraced this claim? I somehow think not. Call it irrational data exuberance. Or driving while looking only into the rear view mirror. Extrapolation can come in handy; but it rarely catches black swans.
Philosophers of science concern themselves with the concept of under-determination of theory by data. More than one theory can fit any set of data. Two empirically equivalent theories can be logically incompatible, as Feynman explains in the video clip. But if we remove theory from the picture and predict straight from the data, we face an equivalent dilemma we might call under-determination of rules by data. Economic forecasters and stock analysts have large collections of rules they test against data sets to pick a best fit on any given market day. Finding a rule that matches the latest historical data is often called fitting the rule to the data. There is no notion of causation, just correlation. As Nassim Nicholas Taleb describes in his writings, this approach can make you look really smart for a time. Then things change, for no apparent reason, because the rule contains no mechanism and no explanation, just like Anderson said.
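The dilemma is easy to demonstrate. In the sketch below (the “market” numbers are invented), two rules both describe five days of history well, then disagree sharply about day six:

```python
# Five days of invented "market" history
days = [0, 1, 2, 3, 4]
vals = [1.0, 2.1, 2.9, 4.2, 5.0]

# Rule A: least-squares straight line through the history
n = len(days)
mx, my = sum(days) / n, sum(vals) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(days, vals))
         / sum((x - mx) ** 2 for x in days))

def rule_a(x):
    return my + slope * (x - mx)

# Rule B: the degree-4 polynomial that passes through every point
# exactly (Lagrange interpolation)
def rule_b(x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(days, vals)):
        term = yi
        for j, xj in enumerate(days):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Both rules describe the past well...
print([round(rule_a(x), 1) for x in days])  # close to the history
print([round(rule_b(x), 1) for x in days])  # exactly the history
# ...but they disagree sharply about tomorrow (day 5)
print(round(rule_a(5), 1), round(rule_b(5), 1))
```

The data alone can’t arbitrate between them; only a mechanism – a theory of why the numbers move – could.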
In Bobby Henderson’s famous Pastafarian Open Letter to Kansas School Board, he noted the strong inverse correlation between global average temperature and the number of seafaring pirates over the last 200 years. The conclusion is obvious; we need more pirates.
My recent correlation-only research finds positive correlation (r = 0.92) between Google searches on “physics” and “social problems.” It’s just too hard to resist seeking an explanation. And, as positivist philosopher Carl Hempel stressed, explanation is in bed with causality; so I crave causality too. So which is it? Does a user’s interest in physics cause interest in social problems or the other way around? Given a correlation, most of us are hard-coded to try to explain it – does a cause b, does b cause a, does hidden variable c cause both, or is it a mere coincidence?
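For the curious, the r quoted above is Pearson’s correlation coefficient – covariance scaled by the two standard deviations. A sketch with invented stand-in series (not the actual search-trends data):

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation: covariance normalized by both standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Invented monthly search-interest numbers for two terms
physics = [40, 42, 45, 50, 48, 55, 60]
social = [30, 33, 34, 39, 37, 44, 47]
print(round(pearson_r(physics, social), 2))  # close to 1 for co-rising series
```

A value near 1 tells you the series rise and fall together – and nothing whatsoever about which one, if either, is doing the causing.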
Big data is a tremendous opportunity for theory-building; it need not supersede explanation and causation. As Sean Carroll paraphrased Kant in The Big Picture:
“Theory without data is blind. Data without theory is lame.”
— — —
[edit 7/28: a lighter continuation of this topic here]
Happy is he who gets to know the causes of things – Virgil
When a scientist is accused of scientism, the common response is a rant against philosophy charging that philosophers of science don’t know how science works. For color, you can appeal to the authority of Richard Feynman:
“Philosophy of science is about as useful to scientists as ornithology is to birds.” – Richard Feynman
But Feynman never said that. If you have evidence, please post it here. Evidence. We’re scientists, right?
Feynman’s hostility to philosophy is often reported, but without historical basis. His comment about Spinoza’s propositions not being confirmable or falsifiable deals specifically with Spinoza and metaphysics, not epistemology. Feynman actually seems to have had a keen interest in epistemology and philosophy of science.
People cite a handful of other Feynman moments to show his hostility to philosophy of science. In his 1966 National Science Teachers Association lecture, he uses the term “philosophy of science” when he points out how Francis Bacon’s empiricism does not capture the nature of science. Nor do textbooks about scientific method, he says. Beyond this sort of thing I find little evidence of Feynman’s anti-philosophy stance.
But I find substantial evidence of Feynman as philosopher of science. For example, his thoughts on multiple derivability of natural laws and his discussion of robustness of theory show him to be a philosophical methodologist. In “The Character of Physical Law”, Feynman is in line with philosophers of science of his day:
“So the first thing we have to accept is that even in mathematics you can start in different places. If all these various theorems are interconnected by reasoning there is no real way to say ‘these are the most fundamental axioms’, because if you were told something different instead you could also run the reasoning the other way.”
Further, much of his 1966 NSTA lecture deals with the relationship between theory, observation and making explanations. A tape of that talk was my first exposure to Feynman, by the way. I’ll never forget the story of him asking his father why the ball rolled to the back of the wagon as the wagon lurched forward. His dad’s answer: “That, nobody knows… It’s called inertia.”
Via a twitter post, I just learned of a video clip of Feynman discussing theory choice – a staple of philosophy of science – and theory revision. Now he doesn’t use the language you’d find in Kuhn, Popper, or Lakatos; but he covers a bit of the same ground. In it, he describes two theories with deeply different ideas behind them, both of which give equally valid predictions. He says,
“Suppose we have two such theories. How are we going to describe which one is right? No way. Not by science. Because they both agree with experiment to the same extent…
“However, for psychological reasons, in order to get new theories, these two theories are very far from equivalent, because one gives a man different ideas than the other. By putting the theory in a certain kind of framework you get an idea what to change.”
Not by science alone can theory choice be made, says the scientist Feynman. Philosopher of science Thomas Kuhn caught hell for saying the same. Feynman clearly weighs explanatory power higher than predictive success in the various criteria for theory choice. He then alludes to the shut-up-and-calculate practitioners of quantum mechanics, indicating that this position makes for weak science. He does this with a tale of competing Mayan astronomy theories.
He imagines a Mayan astronomer who had a mathematical model that perfectly predicted full moons and eclipses, but with no concept of space, spheres or orbits. Feynman then supposes that a young man says to the astronomer, “I have an idea – maybe those things are going around and they’re balls of rock out there, and we can calculate how they move.” The astronomer asks the young man how accurately can his theory predict eclipses. The young man said his theory wasn’t developed sufficiently to predict that yet. The astronomer boasts, “we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better.”
Feynman again shows he values a theory’s explanatory power over predictive success. He concludes:
“So it is a problem as to whether or not to worry about philosophies behind ideas.”
So much for Feynman’s aversion to philosophy of science.
– – –
Thanks to Ardian Tola @ for finding the Feynman lecture video.
In the 1966 song, Love Me I’m a Liberal, protest singer Phil Ochs mocked the American left for insincerely pledging support for civil rights and socialist causes. Using the voice of a liberal hypocrite, Ochs sings that he “hope[s] every colored boy becomes a star, but don’t talk about revolution; that’s going a little too far.” The refrain is, “So love me, love me, love me, I’m a liberal.” Putting Ochs in historical context, he hoped to be part of a major revolution and his anarchic expectations were deflated by moderate democrats. In Ochs’ view, limousine liberals and hippies with capitalist leanings were eroding the conceptual purity of the movement he embraced.
If Ochs were alive today, he probably wouldn’t write software; but if he did he’d feel right at home in faux-agile development situations where time-boxing is a euphemism for scheduling, the scrum master is a Project Manager who calls Agile a process, and a goal has been set for increased iteration velocity and higher story points per cycle. Agile can look a lot like the pre-Agile world these days. Scrum in the hands of an Agile imposter who interprets “incremental” to mean “sequential” makes an Agile software project look like a waterfall.
While it’s tempting to blame the abuse and dilution of Agile on half-converts who endorsed it insincerely – like Phil Ochs’ milquetoast liberals – we might also look for cracks in the foundations of Agile and Scrum (Agile is a set of principles, Scrum is a methodology based on them). After all, is it really fair to demand conformity to the rules of a philosophy that embraces adaptiveness? Specifically, I refer to item 4 in the list of values called out in the Agile Manifesto:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
A better charge against those we think have misapplied Agile might be based on consistency and internal coherence. That is, item 1 logically puts some constraints on item 4. Adapting to a business situation by deciding to value process and tools over individuals can easily be said to violate the spirit of the values. As obvious as that seems, I’ve seen a lot of schedule-driven “Agile teams” bound to rigid, arbitrary coding standards imposed by a siloed QA person, struggling against the current toward a product concept that has never been near a customer. Steve Jobs showed that a successful Product Owner can sometimes insulate himself from real customers; but I doubt that approach is a good bet on average.
It’s probably also fair to call foul on those who “do Agile” without self-organizing teams and without pushing decision-making power down through an organization. Likewise, the manifesto tells us to build projects around highly motivated individuals and give them the environment and trust they need to get the job done. This means we need motivated developers worthy of trust who can actually get the job done, i.e., first rate developers. Scrum is based on the notion of a highly qualified self-organizing, self-directed development team. But it’s often used by managers as an attempt to employ, organize, coordinate and direct an under-qualified team. Belief that Scrum can manage and make productive a low-skilled team is widespread. This isn’t the fault of Scrum or Agile but just the current marker of the enduring impulse to buy software developers by the pound.
But another side of this issue might yet point to a basic flaw in Agile. Excellent developers are hard to find. And with a team of excellent developers, any other methodology would work as well. Less competent and less experienced workers might find comfort in rules, thereby having little motivation or ability to respond to change (Agile value no. 4).
As a minor issue with Agile/Scrum, some of the terminology is unfortunate. Backlog traditionally has a negative connotation. Starting a project with backlog on day one might demotivate some. Sprint surely sounds a lot like pressure is being applied; no wonder backsliding scrum masters use it to schedule. Is Sprint a euphemism for death-march? And of all the sports imagery available, the rugby scrum seems inconsistent with Scrum methodology and Agile values. Would Scrum Servant change anything?
The idea of using a Scrum burn-down chart to “plan” (euphemism for schedule) might warrant a second look too. Scheduling by extrapolation may remove the stress from the scheduling activity; but it’s still highly inductive and the future rarely resembles the past. The final steps always take the longest; and guessing how much longer than average is called “estimating.” Can we reconcile any of this with Agile’s focus on being value-driven, not plan-driven? Project planning, after all, is one of the erroneous assumptions of software project management that gave rise to Agile.
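To make the induction explicit, here’s what scheduling-by-extrapolation amounts to, with invented sprint numbers. The forecast silently assumes future velocity will match the historical average – exactly the assumption the final sprints tend to break:

```python
import math

# Invented burn-down history: story points remaining after each sprint
remaining = [100, 82, 67, 50, 38]

# Historical velocity: points burned per sprint so far
burned_per_sprint = [a - b for a, b in zip(remaining, remaining[1:])]
velocity = sum(burned_per_sprint) / len(burned_per_sprint)

# Naive forecast: sprints left = remaining work / historical velocity.
# Pure induction; no model of why the last 20% of the work is slow.
sprints_left = math.ceil(remaining[-1] / velocity)
print(velocity, sprints_left)
```

The arithmetic is trivial; the epistemology is the problem. The chart can’t know that the remaining stories aren’t like the completed ones.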
Finally, I see a disconnect between the method of Scrum and the values of Agile. Scrum creates a perverse incentive for developers to continually define sprints that show smaller and smaller bits of functionality. Then a series of highly successful sprints, each yielding a workable product, only asymptotically approaches the Product Owner’s goal.
Are Agile’s days numbered, or is it a good mare needing a better jockey?
“People who enjoy meetings should not be in charge of anything.” – Thomas Sowell
Over 100 Nobel laureates signed a letter urging Greenpeace to stop opposing genetically modified organisms (GMOs). The letter specifically addresses golden rice, a genetically engineered crop designed to reduce Vitamin-A deficiencies, which cause blindness in children of the developing world.
My first thought is to endorse any effort against the self-obsessed, romantic dogmatism of Greenpeace. But that may be a bit hasty.
The effort behind the letter was organized by Sir Richard Roberts, Chief Scientific Officer of New England Biolabs, and Phillip Sharp, winner of the 1993 Nobel Prize in Physiology or Medicine for the discovery that genes in eukaryotes are not contiguous strings and contain introns. UC Berkeley’s Randy Schekman, professor of cell and developmental biology and 2013 Nobel laureate, also signed the letter.
I expect Roberts, Sharp, Schekman and other signers are highly qualified to offer an opinion on the safety of golden rice. And I suspect they’re right about Greenpeace. But I think the letter is a terrible move for science.
Of the 110 Nobel laureate signers as of today, 26 are physicists and 34 are chemists. Laureates in Peace, Literature and Economics are also on the list. It’s possible that a physicist or an economist might be highly skilled in judging the safety of golden rice; but I doubt that most Nobel winners who signed that letter are more qualified than the average molecular biologist without a Nobel Prize.
Scientists, more than most folk, should be aware that consensus should not be recruited to support a theory. Instead, consensus should occur only when the last skeptic is dragged, kicking and screaming, over the evidence, then succumbing to the same explanatory theory held by peers. That clearly didn’t happen with Roberts’ campaign and argument from authority.
Also, if these Nobel-winning scientists had received slightly less specialized educations, they might see a terrible irony here. They naively attempt to sidestep Hume’s Guillotine. That is, by thinking that scientific knowledge allows deriving an “ought” statement from an “is” statement (or collection of scientific facts), they indulge in ethical naturalism and are exposed to the naturalistic fallacy. And in a very literal sense, ethical naturalism is exactly the delusion under which Greenpeace operates.
Each day I wonder how many things I am dead wrong about. – Jim Harrison
Scientists, for the most part, make lousy philosophers.
Yesterday I made a brief post on the hostility to philosophy expressed by scientists and engineers. A thoughtful reply by philosopher of science Tom Hickey left me thinking more about the topic.
Scientists are known for being hostile to philosophy and for being lousy at philosophy when they practice it inadvertently. Scientists tend to do a lousy job even at analytic philosophy, the realm most applicable to science (what counts as good thinking, evidence and proof), not merely lousy when they rhapsodize on ethics.
But science vs. philosophy is a late 20th century phenomenon. Bohr, Einstein, and Ramsey were philosophy-friendly. This doesn’t mean they did philosophy well. Many scientists, before the rift between science (“natural philosophy” as it was known) and philosophy, were deeply interested in logic, ethics and metaphysics. The most influential scientists have poor track records in philosophy – Pythagoras (if he existed), Kepler, Leibniz and Newton, for example. Einstein’s naïve social economic philosophy might be excused for being far from his core competency, but the charge of ultracrepidarianism might still apply. More importantly, Einstein’s dogged refusal to budge on causality (“I find the idea quite intolerable that an electron exposed to radiation should choose of its own free will…”) showed methodological – if not epistemic – flaws. Still, Einstein took interest in conventionalism, positivism and the nuances of theory choice. He believed that his interest in philosophy enabled his scientific creativity:
“I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like somebody who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.” – (Einstein letter to Robert Thornton, Dec. 1944)
So why the current hostility? Hawking pronounced philosophy dead in his recent book. He then goes on to do a good deal of philosophizing about string theory, apparently unaware that he is reenacting philosophical work done long ago. Some of Hawking’s philosophy, at least, is well reasoned.
Not all philosophy done by scientists fares so well. Richard Dawkins makes analytic philosophers cringe; and his excursions into the intersection of science and religion are dripping with self-refutation.
The philosophy of David Deutsch is more perplexing. I recommend his The Beginning of Infinity for its breadth of ideas, some novel outlooks, for some captivating views on ethics and esthetics, and – out of the blue – for giving Jared Diamond the thrashing I think he deserves. That said, Deutsch’s dogmatism is infuriating. He invents a straw man he names inductivism. He then asserts that “since inductivism is false, empiricism is as well.” Deutsch misses the point that empiricism (which he calls a misconception) is something scientists lean slightly more or slightly less toward; he thinks there are card-carrying empiricists who need to be outed. Odd as the notion of scientists subscribing to a named philosophical position might appear, Deutsch does seem to be a true Popperian. He ignores the problem of choosing between alternative non-falsified theories and the matter of the theory-ladenness of negative observations. Despite this, and despite Kuhn’s arguments, Popper remains on a pedestal for Deutsch. (Don’t get me wrong; there is much good in Popper.) He goes on to dismiss relativism, justificationism and instrumentalism (“a project for preventing progress in understanding the entities beyond our direct experience”) as “misconceptions.” Boom. Case closed. Read the book anyway.
So much for philosophy-hostile scientists and philosophy-friendly scientists who do bad philosophy. What about friendly scientists who do philosophy proud? For this I’ll nominate Sean Carroll. In addition to treating the common ground between physics and philosophy with great finesse in The Big Picture, Carroll, in interviews and on his blog (and here), tries to set things right. He says that “shut up and calculate” isn’t good enough, characterizing lazy critiques of philosophy as either totally dopey, frustratingly annoying, or deeply depressing. Carroll says the universe is a strange place, and that he welcomes all the help he can get in figuring it out.
R_μν – (1/2)R g_μν = 8πG T_μν. This is the equation that a physicist would think of if you said “Einstein’s equation”; that E = mc² business is a minor thing – Sean Carroll, From Eternity to Here
Up until the early 20th century, philosophers had material contributions to make to the physical sciences – Neil deGrasse Tyson