Grains of Truth: Science and Dietary Salt

Science doesn’t proceed in straight lines. It meanders, collides, and battles over its big ideas. Thomas Kuhn’s view of science as cycles of settled consensus punctuated by disruptive challenges is a great way to understand this messiness, though later approaches, like Imre Lakatos’s structured research programs, Paul Feyerabend’s radical skepticism, and Bruno Latour’s focus on science’s social networks, have added their own worthwhile spins. This piece takes a light look, using Kuhn’s ideas with nudges from Feyerabend, Lakatos, and Latour, at the ongoing debate over dietary salt, a controversy that’s nuanced and long-lived. I’m not looking for “the truth” about salt, just watching science in real time.

Dietary Salt as a Kuhnian Case Study

The debate over salt’s role in blood pressure shows how science progresses, especially when viewed through the lens of Kuhn’s philosophy. It highlights the dynamics of shifting paradigms, consensus overreach, contrarian challenges, and the nonlinear, iterative path toward knowledge. This case reveals much about how science grapples with uncertainty, methodological complexity, and the interplay between evidence, belief, and rhetoric, even when relatively free from concerns about political and institutional influence.

In The Structure of Scientific Revolutions, Kuhn proposed that science advances not steadily but through cycles of “normal science,” where a dominant paradigm shapes inquiry, and periods of crisis that can result in paradigm shifts. The salt–blood pressure debate, though not as dramatic in consequence as Einstein displacing Newton or as ideologically loaded as climate science, exemplifies these principles.

Normal Science and Consensus

Since the 1970s, medical authorities like the World Health Organization and the American Heart Association have endorsed the view that high sodium intake contributes to hypertension and thus increases cardiovascular disease (CVD) risk. This consensus stems from clinical trials such as the 2001 DASH-Sodium study, which demonstrated that reducing salt intake significantly (from 8 grams per day to 4) lowered blood pressure, especially among hypertensive individuals. In Kuhnian terms, this is the dominant paradigm.

This framework – “less salt means better health” – has guided public health policies, including government dietary guidelines and initiatives like the UK’s salt reduction campaign. In Kuhnian terms, this is “normal science” at work. Researchers operate within an accepted model, refining it with meta-analyses and randomized controlled trials, seeking data to reinforce it, and treating contradictory findings as anomalies or errors. Public health campaigns, like the AHA’s recommendation of less than 2.3 g/day of sodium, reflect this consensus. Governments’ involvement embodies institutional support.

Anomalies and Contrarian Challenges

However, anomalies have emerged. For instance, a 2016 study by Mente et al. in The Lancet reported a U-shaped curve: both very low (less than 3 g/day) and very high (more than 5 g/day) sodium intakes appeared to be associated with increased CVD risk. This challenged the linear logic (“less salt, better health”) of the prevailing model. Although the differences in intake were not vast, the implications questioned whether current sodium guidelines were overly restrictive for people with normal blood pressure.

The video Salt & Blood Pressure: How Shady Science Sold America a Lie mirrors Galileo’s rhetorical flair, using provocative language such as “shady science” to challenge the establishment. Like Galileo’s defense of heliocentrism, contrarians in the salt debate (researchers like Mente) amplify anomalies to question dogma, sometimes exaggerating flaws in early studies (e.g., Lewis Dahl’s rat experiments) or alleging conspiracies (e.g., pharmaceutical influence). More in Feyerabend’s view than in Kuhn’s, this exaggeration and rhetoric might be desirable. It’s useful. It provides the challenges that the paradigm should be able to overcome to remain dominant.

These challenges haven’t led to a paradigm shift yet, as the consensus remains robust, supported by RCTs and global health data. But they highlight the Kuhnian tension between entrenched views and emerging evidence, pushing science to refine its understanding.

Framing the issue as a contrarian challenge might go something like this:

Evidence-based medicine sets treatment guidelines, but evidence-based medicine has not translated into evidence-based policy. Governments advise lowering salt intake, but that advice is supported by little robust evidence for the general population. Randomized controlled trials have not strongly supported the benefit of salt reduction for average people. Indeed, we see evidence that low salt might pose as great a risk.

Sodium Intake vs. Cardiovascular Disease Risk. Based on Mente (2016) and O’Donnell (2014).
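The shape of the curve, a nadir at moderate intake with risk rising on both sides, can be sketched in a few lines of code. This is a purely illustrative model: the quadratic form, the assumed 4 g/day nadir, and the 0.08 coefficient are my inventions to mimic the reported shape, not parameters fitted to the Mente or O’Donnell data.

```python
def relative_cvd_risk(sodium_g_per_day: float) -> float:
    """Illustrative U-shaped risk curve (NOT fitted to study data).

    Risk is lowest near a moderate intake and rises on both sides,
    mimicking the shape reported by Mente (2016) / O'Donnell (2014).
    """
    optimum = 4.0  # assumed nadir in g/day, chosen for illustration only
    return 1.0 + 0.08 * (sodium_g_per_day - optimum) ** 2

for grams in (2, 3, 4, 5, 7):
    print(f"{grams} g/day -> relative risk {relative_cvd_risk(grams):.2f}")
```

The point of the toy model is only that risk is not monotonic in intake, which is exactly what the linear “less salt, better health” logic cannot accommodate.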

Methodological Challenges

The question “Is salt bad for you?” is ill-posed. It oversimplifies a complex issue: sodium’s effects vary by individual (e.g., salt sensitivity, genetics), diet (e.g., processed vs. whole foods), and context (e.g., baseline blood pressure, activity level). Science doesn’t deliver binary truths. Modern science gives probabilistic models, refined through iterative testing.

While randomized controlled trials (RCTs) have shown that reducing sodium intake can lower blood pressure, especially in sensitive groups, observational studies show that extremely low sodium is associated with poor health. This association may signal reverse causality, an error in reasoning. The data may simply reveal that sicker people eat less, not that they are harmed by low salt. This complexity reflects the limitations of study design and the challenges of isolating causal relationships in real-world populations. The above graph is a fairly typical dose-response curve for any nutrient.
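Reverse causality is easy to demonstrate with a toy simulation. In the sketch below, which is entirely invented (the illness rate, intake distributions, and mortality rates are arbitrary assumptions, not study data), sodium has no causal effect on death at all. Illness both raises mortality and suppresses food intake, and that alone produces the low-sodium/high-mortality association an observational study would see:

```python
import random

random.seed(42)

def simulate_person():
    # Illness is the confounder: it raises mortality AND lowers intake.
    ill = random.random() < 0.2
    # Sodium intake (g/day): sick people eat less, so they ingest less sodium.
    sodium = random.gauss(2.5 if ill else 4.0, 0.8)
    # Mortality depends ONLY on illness, never on sodium.
    died = random.random() < (0.30 if ill else 0.05)
    return sodium, died

people = [simulate_person() for _ in range(100_000)]
low = [died for sodium, died in people if sodium < 3.0]
high = [died for sodium, died in people if sodium >= 3.0]

print(f"mortality, sodium < 3 g/day:  {sum(low) / len(low):.3f}")
print(f"mortality, sodium >= 3 g/day: {sum(high) / len(high):.3f}")
```

The low-sodium group shows markedly higher mortality even though, by construction, sodium is causally inert. This is the inferential trap that makes observational findings at the low end of the intake range so contested.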

The salt debate also underscores the inherent difficulty of studying diet and health. Total caloric intake, physical activity, genetic variation, and compliance all confound the relationship between sodium and health outcomes. Few studies look at salt intake as a fraction of body weight. If sodium recommendations were expressed as sodium density (mg/kcal), it might help accommodate individual energy needs and eating patterns more effectively.
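To make the sodium-density idea concrete, here is a minimal sketch. The function names are mine, and the 1.15 mg/kcal conversion factor is simply the AHA’s 2,300 mg ceiling spread over a standard 2,000 kcal reference diet, an illustrative rescaling, not any official guideline formula:

```python
def sodium_density(sodium_mg: float, kcal: float) -> float:
    """Sodium density of a diet, in mg of sodium per kcal."""
    return sodium_mg / kcal

def density_based_limit(kcal: float, density_mg_per_kcal: float = 1.15) -> float:
    """Scale a sodium ceiling to an individual's energy intake.

    1.15 mg/kcal corresponds to 2,300 mg sodium on a 2,000 kcal
    reference diet (an illustrative, not official, conversion).
    """
    return kcal * density_mg_per_kcal

# A fixed 2,300 mg cap treats these two eaters identically;
# a density-based limit scales with how much they actually eat.
for kcal in (1600, 2800):
    print(f"{kcal} kcal/day -> {density_based_limit(kcal):.0f} mg sodium")
```

Under this rescaling, a small eater at 1,600 kcal would get a lower ceiling than a large, active eater at 2,800 kcal, which is the accommodation of individual energy needs the paragraph above suggests.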

Science as an Iterative Process

Despite flaws in early studies and the polemics of dissenters, the scientific community continues to refine its understanding. For example, Japan’s national sodium reduction efforts since the 1970s have coincided with significant declines in stroke mortality, suggesting real-world benefits to moderation, even if the exact causal mechanisms remain complex.

Through a Kuhnian lens, we see a dominant paradigm shaped by institutional consensus and refined by accumulating evidence. But we also see the system’s limits: anomalies, confounding variables, and methodological disputes that resist easy resolution.

Contrarians, though sometimes rhetorically provocative or methodologically uneven, play a crucial role. Like the “puzzle-solvers” and “revolutionaries” in Kuhn’s model, they pressure the scientific establishment to reexamine assumptions and tighten methods. This isn’t a flaw in science; it’s the process at work.

Salt isn’t simply “good” or “bad.” The better scientific question is more conditional: How does salt affect different individuals, in which contexts, and through what mechanisms? Answering this requires humility, robust methodology, and the acceptance that progress usually comes in increments. Science moves forward not despite uncertainty, disputation and contradiction but because of them.


Anarchy and Its Discontents: Paul Feyerabend’s Critics

(For and against Against Method)

Paul Feyerabend’s 1975 Against Method and his related works made bold claims about the history of science, particularly the Galileo affair. He argued that science progressed not because of adherence to any specific method, but through what he called epistemological anarchism. He said that Galileo’s success was due in part to rhetoric, metaphor, and politics, not just evidence.

Some critics, especially physicists and historically rigorous philosophers of science, have pointed out technical and historical inaccuracies in Feyerabend’s treatment of physics. Here are some examples of the alleged errors and distortions:

Misunderstanding Inertial Frames in Galileo’s Defense of Copernicanism

Feyerabend argued that Galileo’s arguments for heliocentrism were not based on superior empirical evidence, and that Galileo used rhetorical tricks to win support. He claimed that Galileo simply lacked any means of distinguishing heliocentric from geocentric models empirically, so his arguments were no more rational than those of Tycho Brahe and other opponents.

His critics responded by saying that Galileo’s arguments based on the phases of Venus and Jupiter’s moons were empirically decisive against the Ptolemaic model. This is unarguable, though whether Galileo had empirical evidence to overthrow Tycho Brahe’s hybrid model is a much more nuanced matter.

Critics like Ronald Giere, John Worrall, and Alan Chalmers (What Is This Thing Called Science?) argued that Feyerabend underplayed how strong Galileo’s observational case actually was. They say Feyerabend confused the issue of whether Galileo had a conclusive argument with whether he had a better argument.

This warrants some unpacking. Specifically, what makes an argument – a model, a theory – better? Criteria might include:

  • Empirical adequacy – Does the theory fit the data? (Bas van Fraassen)
  • Simplicity – Does the theory avoid unnecessary complexity? (Carl Hempel)
  • Coherence – Is it internally consistent? (Paul Thagard)
  • Explanatory power – Does it explain more than rival theories? (Wesley Salmon)
  • Predictive power – Does it generate testable predictions?  (Karl Popper, Hempel)
  • Fertility – Does it open new lines of research? (Lakatos)

 Some argue that Galileo’s model (Copernicanism, heliocentrism) was obviously simpler than Brahe’s. But simplicity opens another can of philosophical worms. What counts as simple? Fewer entities? Fewer laws? More symmetry? Copernicus had simpler planetary order but required a moving Earth. And Copernicus still relied on epicycles, so heliocentrism wasn’t empirically simpler at first. Given the evidence of the time, a static Earth can be seen as simpler; you don’t need to explain the lack of wind and the “straight” path of falling bodies. Ultimately, this point boils down to aesthetics, not math or science. Galileo and later Newtonians valued mathematical elegance and unification. Aristotelians, the church, and Tychonians valued intuitive compatibility with observed motion.

Feyerabend also downplayed Galileo’s use of the principle of inertia, which was a major theoretical advance and central to explaining why we don’t feel the Earth’s motion.

Misuse of Optical Theory in the Case of Galileo’s Telescope

Feyerabend argued that Galileo’s use of the telescope was suspect because Galileo had no good optical theory and thus no firm epistemic ground for trusting what he saw.

His critics say that while Galileo didn’t have a fully developed geometrical optics theory (e.g., no wave theory of light), his empirical testing and calibration of the telescope were rigorous by the standards of the time.

Feyerabend is accused of anachronism – judging Galileo’s knowledge of optics by modern standards and therefore misrepresenting the robustness of his observational claims. Historians like Mario Biagioli and Stillman Drake point out that Galileo cross-verified telescope observations with the naked eye and used repetition, triangulation, and replication by others to build credibility.

Equating All Theories as Rhetorical Equals

Feyerabend in some parts of Against Method claimed that rival theories in the history of science were only judged superior in retrospect, and that even “inferior” theories like astrology or Aristotelian cosmology had equal rational footing at the time.

Historians like Steven Shapin (How to be Antiscientific) and David Wootton (The Invention of Science) say that this relativism erases real differences in how theories were judged even in Galileo’s time. While not elaborated in today’s language, Galileo and his rivals clearly saw predictive power, coherence, and observational support as fundamental criteria for choosing between theories.

Feyerabend’s polemical, theatrical tone often flattened the epistemic distinctions that working scientists and philosophers actually used, especially during the Scientific Revolution. His analysis of “anything goes” often ignored the actual disciplinary practices of science, especially in physics.

Failure to Grasp the Mathematical Structure of Physics

Scientists – those broad enough to know who Feyerabend was – often claim that he misunderstood or ignored the role of mathematics in theory-building, especially in Newtonian mechanics and post-Galilean developments. In Against Method, Feyerabend emphasizes metaphor and persuasion over mathematics. While this critique is valuable when aimed at the rhetorical and political sides of science, it underrates the internal mathematical constraints that shape physical theories, even for Galileo.

Imre Lakatos, his friend and critic, called Feyerabend’s work a form of “intellectual sabotage”, arguing that he distorted both the history and logic of physics.

Misrepresenting Quantum Mechanics

Feyerabend wrote about Bohr and Heisenberg in Philosophical Papers and later essays. Critics like Abner Shimony and Mario Bunge charge that Feyerabend misrepresented or misunderstood Bohr’s complementarity as relativistic, when Bohr’s position was more subtle and aimed at objective constraints on language and measurement.

Feyerabend certainly fails to understand the mathematical formalism underpinning Quantum Mechanics. This weakens his broader claims about theory incommensurability.

Feyerabend’s erroneous critique of Niels Bohr is seen in his 1958 paper “Complementarity”:

“Bohr’s point of view may be introduced by saying that it is the exact opposite of [realism]. For Bohr the dual aspect of light and matter is not the deplorable consequence of the absence of a satisfactory theory, but a fundamental feature of the microscopic level. For him the existence of this feature indicates that we have to revise … the [realist] ideal of explanation.” (more on this in an upcoming post)

Epistemic Complaints

Beyond criticisms that he failed to grasp the relevant math and science, Feyerabend is accused of selectively reading or distorting historical episodes to fit the broader rhetorical point that science advances by breaking rules, and that no consistent method governs progress. Feyerabend’s claim that in science “anything goes” can be seen as epistemic relativism, leaving no rational basis to prefer one theory over another or to prefer science over astrology, myth, or pseudoscience.

Critics say Feyerabend blurred the distinction between how theories are argued (rhetoric) and how they are justified (epistemology). He is accused of conflating persuasive strategy with epistemic strength, thereby undermining the very principle of rational theory choice.

Some take this criticism to imply that methodological norms are the sole basis for theory choice. Feyerabend’s “anarchism” may demolish authority, but is anything left in its place except a vague appeal to democratic or cultural pluralism? Norman Levitt and Paul Gross, especially in Higher Superstition: The Academic Left and Its Quarrels with Science (1994), argue this point, along with saying Feyerabend attacked a caricature of science.

Personal note/commentary: In my view, Levitt and Gross did some great work, but Higher Superstition isn’t it. I bought the book shortly after its release because I was disgusted with weaponized academic anti-rationalism, postmodernism, relativism, and anti-science tendencies in the humanities, especially those that claimed to be scientific. I was sympathetic to Higher Superstition’s mission but, on reading it, was put off by its oversimplifications and lack of philosophical depth. Their arguments weren’t much better than those of the postmodernists. Critics of science in the humanities overreached and argued poorly, but they were responding to legitimate concerns in the philosophy of science. Specifically:

  • Underdetermination – Two incompatible theories often fit the same data. Why do scientists prefer one over another? As Kuhn argued, social dynamics play a role.
  • Theory-laden Observations – Observations are shaped by prior theory and assumptions, so science is not just “reading the book of nature.”
  • Value-laden Theories – Public health metrics like life expectancy and morbidity (as opposed to autonomy or quality of life) trickle into epidemiology.
  • Historical Variability of Consensus – What’s considered rational or obvious changes over time (phlogiston, luminiferous ether, miasma theory).
  • Institutional Interest and Incentives – String theory’s share of limited research funding, climate science in service of energy policy and social agenda.
  • The Problem of Reification – IQ as a measure of intelligence has been reified in policy and education, despite deep theoretical and methodological debates about what it measures.
  • Political or Ideological Capture – Marxist-Leninist science and eugenics were cases where ideology shaped what counted as science.

Higher Superstition and my unexpected negative reaction to it are what brought me to the discipline of History and Philosophy of Science.

Conclusion

Feyerabend exaggerated the uncertainty of early modern science, downplayed the empirical gains Galileo and others made, and misrepresented or misunderstood some of the technical content of physics. His mischievous rhetorical style made it hard to tell where serious argument ended and performance began. Rather than offering a coherent alternative methodology, Feyerabend’s value lay in exposing the fragility and contingency of scientific norms. He made it harder to treat methodological rules as timeless or universal by showing how easily they fracture under the pressure of real historical cases.

In a following post, I’ll review the last piece John Heilbron wrote before he died, Feyerabend, Bohr and Quantum Physics, which appeared in Stefano Gattei’s Feyerabend in Dialogue, a set of essays marking the 100th anniversary of Feyerabend’s birth.

Paul Feyerabend. Photo courtesy of Grazia Borrini-Feyerabend.


John Heilbron Interview – June 2012

In 2012, I spoke with John Heilbron, historian of science and Professor Emeritus at UC Berkeley, about his career, his work with Thomas Kuhn, and the legacy of The Structure of Scientific Revolutions on its 50th anniversary. We talked late into the night. The conversation covered his shift from physics to history, his encounters with Kuhn and Paul Feyerabend, and his critical take on the direction of Science and Technology Studies (STS).

The interview marked a key moment. Kuhn and Feyerabend’s legacies were under fresh scrutiny, and STS was in the midst of redefining itself, often leaning toward sociological frameworks at the expense of other approaches.

Thirteen years later, in 2025, this commentary revisits that interview to illuminate its historical context, situate Heilbron’s critiques, and explore their relevance to contemporary STS and broader academic debates.

Over more than a decade, I had ongoing conversations with Heilbron about the evolution of the history of science – history of the history of science – and the complex relationship between History of Science and Science, Technology, and Society (STS) programs. At UC Berkeley, unlike at Harvard or Stanford, STS has long remained a “Designated Emphasis” rather than a department or standalone degree. Academic conservatism in departmental structuring, concerns about reputational risk, and questions about the epistemic rigor of STS may all have contributed to this decision. Moreover, Berkeley already boasted world-class departments in both History and Sociology.

That 2012 interview, the only one we recorded, brought together themes we’d explored over many years. Since then, STS has moved closer to engaging with scientific content itself. But it still draws criticism, both from scientists and from public misunderstanding. In 2012, the field was still heavily influenced by sociological models, particularly the Strong Programme and social constructivism, which stressed how scientific knowledge is shaped by social context. One of the key texts in this tradition, Shapin and Schaffer’s Leviathan and the Air-Pump (1985), argued that even Boyle’s experiments weren’t simply about discovery but about constructing scientific consensus.

Heilbron pushed back against this framing. He believed it sidelined the technical and epistemic depth of science, reducing STS to a sociological critique. He was especially wary of the dense, abstract language common in constructivist work. In his view, it often served as cover for thin arguments, especially from younger scholars who copied the style but not the substance. He saw it as a tactic: establish control of the conversation by embedding a set of terms, then build influence from there.

The influence of Shapin and Schaffer, Heilbron argued, created the impression that STS was dominated by a single paradigm, ironically echoing the very Kuhnian framework they analyzed. His frustration with a then-recent Isis review reflected his concern that constructivism had become doctrinaire, pressuring scholars to conform to its methods even when irrelevant to their work. His reference to “political astuteness” pointed to the way in which key figures in the field successfully advanced their terminology and frameworks, gaining disproportionate influence. While this gave them intellectual clout, Heilbron saw it as a double-edged sword: it strengthened their position while encouraging dogmatism among followers who prioritized jargon over genuine analysis.


Bill Storage: How did you get started in this curious interdisciplinary academic realm?

John Heilbron: Well, it’s not really very interesting, but I was a graduate student in physics but my real interest was history. So at some point I went down to the History department and found the medievalist, because I wanted to do medieval history. I spoke with the medievalist and he said, “well, that’s very charming but you know the country needs physicists and it doesn’t need medievalists, so why don’t you go back to physics.” Which I duly did. But he didn’t bother to point out that there was this guy Kuhn in the History department who had an entirely different take on the subject than he did. So finally I learned about Kuhn and went to see him. Since Kuhn had very few students, I looked good; and gradually I worked my way free from the Physics department and went into history. My PhD is in History; and I took a lot of history courses and, as I said, history really is my interest. I’m interested in science too of course but I feel that my major concerns are historical and the writing of history is to me much more interesting and pleasant than calculations.

You entered that world at a fascinating time, when history of science – I’m sure to the surprise of most of its scholars – exploded onto the popular scene. Kuhn, Popper, Feyerabend and Lakatos suddenly appeared in The New Yorker, Life Magazine, and The Christian Century. I find that these guys are still being read, misread and misunderstood by many audiences. And that seems to be true even for their intended audiences – sometimes by philosophers and historians of science – certainly by scientists. I see multiple conflicting readings that would seem to show that at least some of them are wrong.

Well if you have two or more different readings then I guess that’s a safe conclusion. (Laughs.)

You have a problem with multiple conflicting truths…? Anyway – misreading Kuhn…

I’m more familiar with the misreading of Kuhn than of the others. I’m familiar with that because he was himself very distressed by many of the uses made of his work – particularly the notion that science is no different from art or has no stronger basis than opinion. And that bothered him a lot.

I don’t know your involvement in his work around that time. Can you tell me how you relate to what he was doing in that era?

I got my PhD under him. In fact my first work with him was hunting up footnotes for Structure. So I knew the text of the final draft well – and I knew him quite well during the initial reception of it. And then we all went off together to Copenhagen for a physics project and we were all thrown together a lot. So that was my personal connection and then of course I’ve been interested subsequently in Structure, as everybody is bound to be in my line of work. So there’s no doubt, as he says so in several places, that he was distressed by the uses made of it. And that includes uses made in the history of science particularly by the social constructionists, who try to do without science altogether or rather just to make it epiphenomenal on political or social forces.

I’ve read opinions by others who were connected with Kuhn saying there was a degree of back-pedaling going on by Kuhn in the 1970s. The implication there is that he really did intend more sociological commentary than he later claimed. Now I don’t see evidence of that in the text of Structure, and incidents like his telling Freeman Dyson that he (Kuhn) was not a Kuhnian would suggest otherwise. Do you have any thoughts on that?

I think that one should keep in mind the purpose of Structure, or rather the context in which it was produced. It was supposed to have been an article in this encyclopedia of unified science and Kuhn’s main interest was in correcting philosophers. He was not aiming for historians even. His message was that the philosophy practiced by a lot of positivists and their description of science was ridiculous because it didn’t pay any attention to the way science was actually done. So Kuhn was going to tell them how science was done, in order to correct philosophy. But then much to his surprise he got picked up by people for whom it was not written, who derived from it the social constructionist lesson that we’re all familiar with. And that’s why he was an unexpected rebel. But he did expect to be rebellious; that was the whole point. It’s just that the object of his rebellion was not history or science but philosophy.

So in that sense it would seem that Feyerabend’s question on whether Kuhn intended to be prescriptive versus descriptive is answered. It was not prescriptive.

Right – not prescriptive to scientists. But it was meant to be prescriptive to the philosophers – or at least normalizing – so that they would stop being silly and would base their conception of scientific progress on the way in which scientists actually went about their business. But then the whole thing got too big for him and he got into things that, in my opinion, really don’t have anything to do with his main argument. For example, the notion of incommensurability, which was not, it seems to me, in the original program. And it’s a logical construct that I don’t think is really very helpful, and he got quite hung up on that and seemed to regard that as the most important philosophical message from Structure.

I wasn’t aware that he saw it that way. I’m aware that quite a few others viewed it like that. Paul Feyerabend, in one of his last books, said that he and Kuhn kicked around this idea of commensurability in 1960 and had slightly different ideas about where to go with it. Feyerabend said Kuhn wanted to use it historically whereas his usage was much more abstract. I was surprised at the level of collaboration indicated by Feyerabend.

Well they talked a lot. They were colleagues. I remember parties at Kuhn’s house where Feyerabend would show up with his old white T shirt and several women – but that’s perhaps irrelevant to the main discussion. They were good friends. I got along quite well with Feyerabend too. We had discussions about the history of quantum physics and so on. The published correspondence between Feyerabend and Lakatos is relevant here. It’s rather interesting in that the person we’ve left out of the discussion so far, Karl Popper, was really the lighthouse for Feyerabend and Lakatos, but not for Kuhn. And I think that anybody who wants to get to the bottom of the relationship between Kuhn and Feyerabend needs to consider the guy out of the frame, who is Popper.

It appears Feyerabend was very critical of Kuhn and Structure at the time it was published. I think at that point Feyerabend was still essentially a Popperian. It seems Feyerabend reversed position on that over the next decade or so.

Yes, at the time in question, around 1960, when they had these discussions, I think Feyerabend was still very much in Popper’s camp. Of course like any bright student, he disagreed with his professor about things.

How about you, as a bright student in 1960 – what did you disagree with your professor, Kuhn, about?

Well I believe in the proposition that philosophers and historians have different metabolisms. And I’m metabolically a historian and Kuhn was metabolically a philosopher – even though he did write history. But his most sustained piece of history of science was his book on black body theory; and that’s very narrowly intellectualist in approach. It’s got nothing to do with the themes of The Structure of Scientific Revolutions – which does have something to say for the historian – but he was not by practice a historian. He didn’t like a whole lot of contingent facts. He didn’t like archival and library work. His notion of fun was to take a few texts and just analyze and reanalyze them until he felt he had worked his way into the mind of their author. I take that to be a necromantic feat that’s not really possible.

I found that he was a very clever guy and he was excellent as a professor because he was very interested in what you were doing as soon as it was something he thought he could make some use of. And that gave you the idea that you were engaged in something important, so I must give him that. On the other hand he just didn’t have the instincts or the knowledge to be a historian and so I found myself not taking much from his own examples. Once I had an argument with him about some way of treating a historical subject and I didn’t feel that I got anything out of him. Quite the contrary; I thought that he just ducked all the interesting issues. But that was because they didn’t concern him.

James Conant, president of Harvard who banned communists, chair of the National Science Foundation, etc.: how about Conant’s influence on Structure?

It’s not just Conant. It was the whole Harvard circle, of which Kuhn was part. There was this guy, Leonard Nash; there was Gerald Holton. And these guys would get together and talk about various things having to do with the relationship between science and the public sphere. It was a time when Conant was fighting for the National Science Foundation and I think that this notion of “normal science” in which the scientists themselves must be left fully in charge of what they’re doing in order to maximize the progress within the paradigm to bring the profession swiftly to the next revolution – that this is essentially the Conant doctrine with respect to the ground rules of the National Science Foundation, which is “let the scientists run it.” So all those things were discussed. And you can find many bits of Kuhn’s Structure in that discussion. For example, the orthodoxy of normal science in, say, Bernard Cohen, who didn’t make anything of it of course. So there’s a lot of this Harvard group in Structure, as well as certain lessons that Kuhn took from his book on the Copernican Revolution, which was the textbook for the course he gave under Conant. So yes, I think Conant’s influence is very strong there.

So Kuhn was ultimately a philosopher where you are a historian. I think I once heard you say that reading historical documents does not give you history.

Well I agree with that, but I don’t remember that I was clever enough to say it.

Assuming you said it or believe it, then what does give you history?

Well, reading them is essential, but the part contributed by the historian is to make some sense of all the waste paper he’s been reading. This is essentially a construction. And that’s where the art, the science, the technique of the historian comes into play, to try to make a plausible narrative that has to satisfy certain rules. It can’t go against the known facts and it can’t ignore the new facts that have come to light through the study of this waste paper, and it can’t violate rules of verisimilitude, human action and whatnot. But otherwise it’s a construction and you’re free to manipulate your characters, and that’s what I like about it.

So I take it that’s where the historian’s metabolism comes into play – avoiding the leap to conclusions ahead of the facts.

True, but at some point you’ve got to make up a story about those facts.

Ok, I’ve got a couple questions on the present state of affairs – and this is still related to the aftermath of Kuhn. From attending colloquia, I sense that STS is nearly a euphemism for sociology of science. That bothers me a bit, possibly because I’m interested in the intersection of science, technology and society. Looking at the core STS requirements on Stanford’s website, I see few courses listed that would give a student any hint of what science looks like from the inside.

I’m afraid you’re only too right. I’ve got nothing against sociology of science, the study of scientific institutions, etc. They’re all very good. But they’re tending to leave the science out, and in my opinion, the further they get from science, the worse their arguments become. That’s what bothers me perhaps most of all – the weakness of the evidentiary base of many of the arguments and conclusions that are put forward.

I thought we all learned a bit from the Science Wars – thought that sort of indeterminacy of meaning and obfuscatory language was behind us. Either it’s back, or it never went away.

Yeah, the language part is an important aspect of it, and even when the language is relatively comprehensible, as I think it is in, say, constructivist history of science – by which I mean the school of Schaffer and Shapin – the insistence on a peculiar argot becomes a substitute for thought. You see it quite frequently in people less able than those two guys are, who try to follow in their footsteps. You get words strung together supposedly constituting an argument but which in fact don’t. I find that quite an interesting aspect of the business, and very astute politically on the part of those guys, because if you can get your words into the discourse, why, you can still hope to have influence. There’s a doctrinaire aspect to it. I was just reading a favorable book review in the current Isis by one of the fellow travelers of this group. The book was not written by one of them. The review was rather complimentary but then at the end says it is a shame that this author did not discuss her views in relation to Schaffer and Shapin. Well, why the devil should she? So, yes, there are issues of language, authority, and poor argumentation. STS is afflicted by this, no doubt.


John Heilbron and I at The Huntington in 2014


Extraordinary Popular Miscarriages of Science, Part 6 – String Theory

Introduction: A Historical Lens on String Theory

In 2006, I met John Heilbron, widely credited with turning the history of science from an emerging idea into a professional academic discipline. While James Conant and Thomas Kuhn laid the intellectual groundwork, it was Heilbron who helped build the institutions and frameworks that gave the field its shape. Through John I came to see that the history of science is not about names and dates – it’s about how scientific ideas develop, and why. It explores how science is both shaped by and shapes its cultural, social, and philosophical contexts. Science progresses not in isolation but as part of a larger human story.

The “discovery” of oxygen illustrates this beautifully. In the 18th century, Joseph Priestley, working within the phlogiston theory, isolated a gas he called “dephlogisticated air.” Antoine Lavoisier, using a different conceptual lens, reinterpreted it as a new element – oxygen – ushering in modern chemistry. This was not just a change in data, but in worldview.

When I met John, Lee Smolin’s The Trouble with Physics had just been published. Smolin, a physicist, critiques string theory not from outside science but from within its theoretical tensions. Smolin’s concerns echoed what I was learning from the history of science: that scientific revolutions often involve institutional inertia, conceptual blind spots, and sociopolitical entanglements.

My interest in string theory wasn’t about the physics. It became a test case for studying how scientific authority is built, challenged, and sustained. What follows is a distillation of 18 years of notes – string theory seen not from the lab bench, but from a historian’s desk.

A Brief History of String Theory

Despite its name, string theory is more accurately described as a theoretical framework – a collection of ideas that might one day lead to testable scientific theories. This alone is not a mark against it; many scientific developments begin as frameworks. Whether we call it a theory or a framework, it remains subject to a crucial question: does it offer useful models or testable predictions – or is it likely to in the foreseeable future?

String theory originated as an attempt to understand the strong nuclear force. In 1968, Gabriele Veneziano introduced a mathematical formula – the Veneziano amplitude – to describe the scattering of strongly interacting particles such as protons and neutrons. By 1970, Pierre Ramond incorporated supersymmetry into this approach, giving rise to superstrings that could account for both fermions and bosons. In 1974, Joël Scherk and John Schwarz discovered that the theory predicted a massless spin-2 particle with the properties of the hypothetical graviton. This led them to propose string theory not as a theory of the strong force, but as a potential theory of quantum gravity – a candidate “theory of everything.”
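For reference, the formula Veneziano wrote down can be stated compactly. In the standard notation (Γ is the Euler gamma function, s and t are the Mandelstam variables, and α(x) = α(0) + α′x is a linear Regge trajectory):

```latex
A(s,t) \;=\; \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))},
\qquad \alpha(x) = \alpha(0) + \alpha' x
```

Its striking property – symmetry under exchange of s and t (“duality”) – is what the later string picture explained naturally: scattering of vibrating strings yields exactly this kind of amplitude.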

Around the same time, however, quantum chromodynamics (QCD) successfully explained the strong force via quarks and gluons, rendering the original goal of string theory obsolete. Interest in string theory waned, especially given its dependence on unobservable extra dimensions and lack of empirical confirmation.

That changed in 1984 when Michael Green and John Schwarz demonstrated that superstring theory could be anomaly-free in ten dimensions, reviving interest in its potential to unify all fundamental forces and particles. Researchers soon identified five mathematically consistent versions of superstring theory.

To reconcile ten-dimensional theory with the four-dimensional spacetime we observe, physicists proposed that the extra six dimensions are “compactified” into extremely small, curled-up spaces – typically represented as Calabi-Yau manifolds. This compactification allegedly explains why we don’t observe the extra dimensions.

In 1995, Edward Witten introduced M-theory, showing that the five superstring theories were different limits of a single 11-dimensional theory. By the early 2000s, researchers like Leonard Susskind and Shamit Kachru began exploring the so-called “string landscape” – a space of perhaps 10^500 (1 followed by 500 zeros) possible vacuum states, each corresponding to a different compactification scheme. This introduced serious concerns about underdetermination – the idea that available empirical evidence cannot determine which among many competing theories is correct.

Compactification introduces its own set of philosophical problems. Critics such as Lee Smolin and Peter Woit argue that compactification is not a prediction but a speculative rationalization: a move designed to save a theory rather than derive consequences from it. The enormous number of possible compactifications (each yielding different physics) makes string theory’s predictive power virtually nonexistent. The related challenge of moduli stabilization – specifying the size and shape of the compact dimensions – remains unresolved.

Despite these issues, string theory has influenced fields beyond high-energy physics. It has informed work in cosmology (e.g., inflation and the cosmic microwave background), condensed matter physics, and mathematics (notably algebraic geometry and topology). How deep and productive these connections run is difficult to assess without domain-specific expertise that I don’t have. String theory has, in any case, produced impressive mathematics. But mathematical fertility is not the same as scientific validity.

The Landscape Problem

Perhaps the most formidable challenge string theory faces is the landscape problem: the theory allows for an enormous number of solutions – on the order of 10^500. Each solution represents a possible universe, or “vacuum,” with its own physical constants and laws.

Why so many possibilities? The extra six dimensions required by string theory can be compactified in myriad ways. Each compactification, combined with possible energy configurations (called fluxes), gives rise to a distinct vacuum. This extreme flexibility means string theory can, in principle, accommodate nearly any observation. But this comes at the cost of predictive power.
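The combinatorics behind the oft-quoted 10^500 figure is simple exponential growth. A toy sketch follows – the function name is mine, and the inputs (roughly 10 allowed values for each of roughly 500 flux quanta) are only the orders of magnitude commonly cited, not outputs of string theory itself:

```python
def vacua_estimate(values_per_flux: int, num_fluxes: int) -> int:
    """Toy count of distinct vacua: independent discrete choices multiply."""
    return values_per_flux ** num_fluxes

# ~10 values for each of ~500 flux quanta gives 10^500 possible vacua,
# a number with 501 digits.
n = vacua_estimate(10, 500)
print(len(str(n)))  # → 501
```

The point of the arithmetic is only that a modest number of independent discrete choices compounds into a space of possibilities far too large to search or constrain empirically.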

Critics argue that if theorists can forever adjust the theory to match observations by choosing the right vacuum, the theory becomes unfalsifiable. On this view, string theory looks more like metaphysics than physics.

Some theorists respond by embracing the multiverse interpretation: all these vacua are real, and our universe is just one among many. The specific conditions we observe are then attributed to anthropic selection – we could only observe a universe that permits life like us. This view aligns with certain cosmological theories, such as eternal inflation, in which different regions of space settle into different vacua. But eternal inflation can exist independent of string theory, and none of this has been experimentally confirmed.

The Problem of Dominance

Since the 1980s, string theory has become a dominant force in theoretical physics. Major research groups at Harvard, Princeton, and Stanford focus heavily on it. Funding and institutional prestige have followed. Prominent figures like Brian Greene have elevated its public profile, helping transform it into both a scientific and cultural phenomenon.

This dominance raises concerns. Critics such as Smolin and Woit argue that string theory has crowded out alternative approaches like loop quantum gravity or causal dynamical triangulations. These alternatives receive less funding and institutional support, despite offering potentially fruitful lines of inquiry.

In The Trouble with Physics, Smolin describes a research culture in which dissent is subtly discouraged and young physicists feel pressure to align with the mainstream. He worries that this suppresses creativity and slows progress.

Estimates suggest that between 1,000 and 5,000 researchers work on string theory globally – a significant share of theoretical physics resources. Reliable numbers are hard to pin down.

Defenders of string theory argue that it has earned its prominence. They note that theoretical work is relatively inexpensive compared to experimental research, and that string theory remains the most developed candidate for unification. Still, the issue of how science sets its priorities – how it chooses what to fund, pursue, and elevate – remains contentious.

Wolfgang Lerche of CERN once called string theory “the Stanford propaganda machine working at its fullest.” As with climate science, 97% of string theorists agree that they don’t want to be defunded.

Thomas Kuhn’s Perspective

The logical positivists and Karl Popper would almost certainly dismiss string theory as unscientific due to its lack of empirical testability and falsifiability – core criteria in their respective philosophies of science. Thomas Kuhn would offer a more nuanced interpretation. He wouldn’t label string theory unscientific outright, but would express concern over its dominance and the marginalization of alternative approaches. In Kuhn’s framework, such conditions resemble the entrenchment of a paradigm during periods of normal science, potentially at the expense of innovation.

Some argue that string theory fits Kuhn’s model of a new paradigm, one that seeks to unify quantum mechanics and general relativity – two pillars of modern physics that remain fundamentally incompatible at high energies. Yet string theory has not brought about a Kuhnian revolution. It has not displaced existing paradigms, and its mathematical formalism is often incommensurable with traditional particle physics. From a Kuhnian perspective, the landscape problem may be seen as a growing accumulation of anomalies. But a paradigm shift requires a viable alternative – and none has yet emerged.

Lakatos and the Degenerating Research Program

Imre Lakatos offered a different lens, seeing science as a series of research programs characterized by a “hard core” of central assumptions and a “protective belt” of auxiliary hypotheses. A program is progressive if it predicts novel facts; it is degenerating if it resorts to ad hoc modifications to preserve the core.

For Lakatos, string theory’s hard core would be the idea that all particles are vibrating strings and that the theory unifies all fundamental forces. The protective belt would include compactification schemes, flux choices, and moduli stabilization – all adjusted to fit observations.

Critics like Sabine Hossenfelder argue that string theory is a degenerating research program: it absorbs anomalies without generating new, testable predictions. Others note that it is progressive in the Lakatosian sense because it has led to advances in mathematics and provided insights into quantum gravity. Historians of science are divided. Johansson and Matsubara (2011) argue that Lakatos would likely judge it degenerating; Cristin Chall (2019) offers a compelling counterpoint.

Perhaps string theory is progressive in mathematics but degenerating in physics.

The Feyerabend Bomb

Paul Feyerabend, whom Lee Smolin knew from his time at Harvard, was the iconoclast of 20th-century philosophy of science. Feyerabend would likely have dismissed string theory as a dogmatic, aesthetic fantasy. He might write something like:

“String theory dazzles with equations and lulls physics into a trance. It’s a mathematical cathedral built in the sky, a triumph of elegance over experience. Science flourishes in rebellion. Fund the heretics.”

Even if this caricature overshoots, Feyerabend’s tools offer a powerful critique:

  1. Untestability: String theory’s predictions remain out of reach. Its core claims – extra dimensions, compactification, vibrational modes – cannot be tested with current or even foreseeable technology. Feyerabend challenged the privileging of untested theories (e.g., Copernicanism in its early days) over empirically grounded alternatives.

  2. Monopoly and suppression: String theory dominates intellectual and institutional space, crowding out alternatives. Eric Weinstein recently said, in Feyerabendian tones, “its dominance is unjustified and has resulted in a culture that has stifled critique, alternative views, and ultimately has damaged theoretical physics at a catastrophic level.”

  3. Methodological rigidity: Progress in string theory is often judged by mathematical consistency rather than by empirical verification – an approach reminiscent of scholasticism. Feyerabend would point to Johannes Kepler’s early attempt to explain planetary orbits using a purely geometric model based on the five Platonic solids. Kepler devoted 17 years to this elegant framework before abandoning it when observational data proved it wrong.

  4. Sociocultural dynamics: The dominance of string theory stems less from empirical success than from the influence and charisma of prominent advocates. Figures like Brian Greene, with their public appeal and institutional clout, help secure funding and shape the narrative – effectively sustaining the theory’s privileged position within the field.

  5. Epistemological overreach: The quest for a “theory of everything” may be misguided. Feyerabend would favor many smaller, diverse theories over a single grand narrative.

Historical Comparisons

Proponents note that other landmark theories likewise emerged from mathematics before their experimental confirmation, and they compare string theory to historical cases. Examples include:

  1. Planet Neptune: Predicted by Urbain Le Verrier based on irregularities in Uranus’s orbit, observed in 1846.
  2. General Relativity: Einstein predicted the bending of light by gravity in 1915, confirmed by Arthur Eddington’s 1919 solar eclipse measurements.
  3. Higgs Boson: Predicted by the Standard Model in the 1960s, observed at the Large Hadron Collider in 2012.
  4. Black Holes: Predicted by general relativity, first direct evidence from gravitational waves observed in 2015.
  5. Cosmic Microwave Background: Predicted by Ralph Alpher and Robert Herman in 1948 as a consequence of Big Bang cosmology, discovered in 1965.
  6. Gravitational Waves: Predicted by general relativity, detected in 2015 by the Laser Interferometer Gravitational-Wave Observatory (LIGO).

But these examples differ in kind. Their predictions were always testable in principle and ultimately tested. String theory, in contrast, operates at the Planck scale (~10^19 GeV), far beyond what current or foreseeable experiments can reach.

Special Concern Over Compactification

A concern I have not seen discussed elsewhere – even among critics like Smolin or Woit – is the epistemological status of compactification itself. Would the idea ever have arisen apart from the need to reconcile string theory’s ten dimensions with the four-dimensional spacetime we experience?

Compactification appears ad hoc, lacking grounding in physical intuition. It asserts that dimensions themselves can be small and curled – yet concepts like “small” and “curled” are defined within dimensions, not of them. Saying a dimension is small is like saying that time – not a moment in time, but time itself – can be “soon” or short in duration. It misapplies the very conceptual framework through which such properties are understood. At best, it’s a strained metaphor; at worst, it’s a category mistake and conceptual error.

This conceptual inversion reflects a logical gulf that proponents overlook or ignore. They say compactification is a mathematical consequence of the theory, not a contrivance. But without grounding in physical intuition – a deeper concern than empirical support – compactification remains a fix, not a forecast.

Conclusion

String theory may well contain a correct theory of fundamental physics. But without any plausible route to identifying it, string theory as practiced is bad science. It absorbs talent and resources, marginalizes dissent, and stifles alternative research programs. It is extraordinarily popular – and a miscarriage of science.


Extraordinary Popular Miscarriages of Science, Part 5 – Climate Science

NASA reports that ninety-seven percent of climate scientists agree that human-caused climate change is happening.

As with earlier posts on popular miscarriages of science, I look at climate science through the lens of the 20th century historians of science and philosophers of science and conclude that climate science is epistemically thin.

To elaborate a bit, most sensible folk accept that climate science addresses a potentially critical concern and that it has many earnest and talented practitioners. Even so, it can be critiqued as bad science. We can do that without delving into the claims, disputations, and counterarguments about the relationships between ice cores, CO₂ concentrations, and temperature. We can instead use the perspectives of prominent historians and philosophers of science of the 20th century, including the Logical Positivists in general, positivist Carl Hempel in particular, Karl Popper, Thomas Kuhn, Imre Lakatos, and Paul Feyerabend. Each perspective offers a distinct philosophical lens that highlights shortcomings in climate science’s methodologies and practices. I’ll explain each of those perspectives, why I think they’re important, and the critiques they would likely advance. These critiques don’t invalidate climate science as a field of inquiry, but they highlight serious logical and philosophical concerns about its methodologies, practices, and epistemic foundations.

The historians and philosophers invoked here were fundamentally concerned with the demarcation problem: how to differentiate good science, bad science, and pseudoscience using a methodological perspective. They didn’t necessarily agree with each other. In some cases, like Kuhn versus Popper, they outright despised each other. All were flawed, but they were giants who shone brightly and presented systematic visions of how science works and what good science is.

Carnap, Ayer and the Positivists: Verification

The early Logical Positivists, particularly Rudolf Carnap and A.J. Ayer, saw empirical verification as the cornerstone of scientific claims. To be meaningful, a claim must be testable through observation or experiment. Climate science, while rooted in empirical data, struggles with verifiability because of its focus on long-term, global phenomena. Predictions about future consequences like sea level change, crop yield, hurricane frequency, and average temperature are not easily verifiable within a human lifespan or with current empirical methods. That might merely suggest that climate science is hard, not that it is bad. But decades of past predictions and retrodictions have been notoriously poor. Consequently, theories have been continuously revised in light of failed predictions. The reliance on indirect evidence – proxy data and computer simulations – rather than controlled experiments (which would be impossible or unethical) would not satisfy the positivists’ demand for direct, observable confirmation. Climatologist Michael Mann (originator of the “hockey stick” graph) often refers to climate simulation results as data. Simulation output is not data – not in any sense in which a positivist would use the term. Positivists would see these difficulties and predictive failures as falling short of their strict criteria for scientific legitimacy.

Carl Hempel: Absence of Appeal to Universal Laws

The philosophy of Carl Hempel centered on the deductive-nomological model (aka covering-law model), which holds that scientific explanations should be derived from universal, timeless laws of nature combined with deductive logic about specific sense observations (empirical data). For Hempel, explanation and prediction were two sides of the same coin. If you can’t predict, then you cannot explain. For Hempel to judge a scientific explanation valid, deductive logic applied to laws of nature must confer nomic expectability upon the phenomenon being explained.
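Hempel’s schema is often written as a deductive argument: from universal laws and particular antecedent conditions, the explanandum must follow. In the usual notation:

```latex
\frac{L_1, \ldots, L_n \qquad C_1, \ldots, C_k}{\therefore\; E}
\quad
\begin{array}{l}
L_i: \text{laws of nature} \\
C_j: \text{antecedent conditions} \\
E: \text{phenomenon to be explained}
\end{array}
```

On this scheme, the worry raised here about climate modeling is that the slots for the L_i are filled by statistical regularities and tuned model parameters rather than universal laws.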

Climate science rarely operates with the kinds of laws of nature Hempel considered suitably general, simple, and verifiable. Instead, it relies on statistical correlations and computer models such as linking CO₂ concentrations to temperature increases through statistical trends, rather than strict, law-like statements. These approaches contrast with Hempel’s ideal of deductive certifiability. Scientific explanations should, by Hempel’s lights, be structured as deductive arguments, where the truth of the premises (law of nature plus initial conditions plus empirical data) entails the truth of the phenomenon to be explained. Without universal laws to anchor its explanations, climate science would appear to Hempel to lack the logical rigor of good science. On Hempel’s view, climate science’s dependence on complex models having parameters that are constantly re-tuned further weakens its explanatory power.

Hempel’s deductive-nomological model was a solid effort at removing causality from scientific explanations, something the positivists, following David Hume, thought too metaphysical. The model ultimately proved unable to bear the load Hempel wanted it to carry: in some cases, scientific explanation simply doesn’t work without appeal to the notion of causality. That failure doesn’t weaken the model’s criticism of climate science, or of any other theory, however. It merely limits the model’s ability to defend a theory by validating its explanations.

Karl Popper: Falsifiability

Karl Popper’s central criterion for demarcating good science from bad science and pseudoscience is falsifiability. A scientific theory, in his view, must make risky predictions that can be tested and potentially proven false. If a theory could not in principle be falsified, it does not belong to the realm of science.

The predictive models of climate science face severe challenges under this criterion. Climate models typically project long-term trends – global temperature increases over decades or centuries – which are probabilistic and difficult to test. Over the shorter term, climate science has made abundant falsifiable predictions that were in fact falsified. Popper would initially see this as a mark of bad science, rather than pseudoscience.

But climate scientists have frequently adjusted their models or invoked external factors like previously unknown aerosol concentrations or volcanic eruptions to explain discrepancies. This would make climate science look, to Popper, too much like scientific Marxism and psychoanalysis, both of which he condemned for accommodating all possible outcomes to a prediction. When global temperatures temporarily stabilize or decrease, climate scientists often argue that natural variability is masking a long-term trend, rather than conceding a flaw in the theory. On this point, Popper would see climate science more akin to pseudoscience, since it lacks clear, testable predictions that could definitively refute its core claims.

For Popper, climate science must vigorously court skepticism and invite attempts at disputation and refutation, especially from dissenting insiders like Tol, Curry, and Michaels (more below). Instead, climate science brands them as traitors.

Thomas Kuhn: Paradigm Rigidity

Thomas Kuhn agreed that Popper’s notion of falsifiability describes how scientists think they behave – eager to subject their theories to disconfirmation. But scientific institutions don’t behave like that. Kuhn described science as progressing through paradigms – frameworks, shared within a scientific community, that define normal scientific practice – periodically interrupted by revolutionary shifts in which a new theory displaces an older one.

A popular criticism of climate science is that science is not based on consensus. Kuhn would disagree, arguing that all scientific paradigms are fundamentally consensus-based.

“Normal science” for Kuhn was the state of things in a paradigm where most activity is aimed at defending the paradigm, thereby rationalizing the rejection of any evidence that disconfirms its theories. In this sense, everyday lab-coat scientists are some of the least scientific of professionals.

“Even in physics,” wrote Kuhn, “there is no standard higher than the assent of the relevant community.” So for Kuhn, evidence does not completely speak for itself, since assent about what evidence exists (Is that blip on the chart a Higgs boson or isn’t it?) must exist within the community for a theory to show consistency with observation. Climate science, more than any current paradigm except possibly string theory, has built high walls around its dominant theory.

That theory is the judgement, conclusion, or belief that human activity, particularly CO₂ emissions, has driven climate change for 150 years and will do so at an accelerated pace in the future. The paradigm virtually ensures that the vast majority of climate scientists agree with the theory because the theory is the heart of the paradigm, as Kuhn would see it. Within a paradigm, Kuhn accepts the role of consensus, but he wants outsiders to be able to overthrow the paradigm.

Given the relevant community’s insularity, Kuhn would see climate scientists’ claim that the anthropogenic warming theory is consistent with all their data as a case of anomalies being rationalized to preserve the paradigm. He would point to Michael Mann’s resistance to disclose his hockey stick data and simulation code as brutal shielding of the paradigm, regardless of Mann’s being found innocent of ethics violations.

Climate science’s tendency to dismiss solar influence and alternative hypotheses would likely be interpreted by Kuhn as the marginalization of dissent and paradigm rigidity. Kuhn might not see this rigidity as a sign of dishonesty or interest – as Paul Feyerabend (below) would – but would see the prevailing framework as stifling the revolutionary thinking he believed necessary for scientific advancement. From Kuhn’s perspective, climate science’s entrenched consensus could make it deeply flawed by prioritizing conformity too heavily over innovation.

Imre Lakatos: Climate as “Research Programme”

Lakatos developed his concept of “research programmes” to evaluate scientific progress, blending ideas from Popper’s falsificationism and Kuhn’s paradigm shifts. Lakatos distinguished between progressive and degenerating research programs based on their ability to predict new facts and handle challenges effectively.

Lakatos viewed scientific progress as developing within research programs having two main components. The hard core, for Lakatos, was the set of central assumptions that define the program, which are not easily abandoned. The protective belt is a flexible layer of auxiliary hypotheses, methods, and data interpretations that can be adjusted to defend the hard core from anomalies. A research program is progressive if it predicts novel phenomena and those predictions are confirmed empirically. It is degenerating if its predictions fail and it relies on ad hoc modifications to explain away anomalies.

In climate science, the hard core would be that global climate is changing, that greenhouse gas emissions drive this change, and that climate models can reliably predict future trends. Its protective belt would be the evolving methods of collecting, revising, and interpreting weather data, along with adjustments made in light of new evidence such as volcanic activity.

Lakatos would be more lenient than Popper about continual theory revision and model-tweaking, on the grounds that a progressive research program’s revision of its protective belt is justified by the complexity of the topic. Signs of potential degeneration in the program include the “pause” in warming from 1998–2012, explained ad hoc as natural variability – an explanation invoked too early to know whether the pause would continue. That is, it was labeled a pause before anyone could know whether it would end.

I suspect Lakatos would be on the fence about climate science, seeing it as more progressive (in his terms, not political ones) than rival programs, but would be concerned about its level of dogmatism.

Paul Feyerabend: Tyranny of Methodological Monism

Kuhn, Lakatos, and Paul Feyerabend were close friends who, while drawing on each other’s work, differed greatly in viewpoint. Feyerabend advocated epistemological anarchism, defending his claim that no scientific advancement ever proceeds purely within what is taught as “the scientific method.” He argued that science should be open to diverse approaches and that imposing methodological rules suppresses necessary creativity and innovation. Feyerabend often cited Galileo’s methodology, which bears little resemblance to what is called the scientific method. He famously claimed that anything goes in science, emphasizing the importance of methodological pluralism.

From Feyerabend’s perspective, climate science relies excessively on a narrow set of methodologies, particularly computer modeling and statistical analysis. The field’s heavy dependence on these tools, and its discounting of historical climatology, is a form of methodological monism. Its emphasis on consensus, rigid practices, and public hostility to dissent (more on this below) would be viewed as stifling the kind of creative, unorthodox thinking that Feyerabend believed essential for scientific breakthroughs. The pressure to conform, coupled with the politicization of climate science, has led to a homogenized field that lacks cognitive diversity.

Feyerabend distrusted the orthodoxy of the social practices in what Kuhn termed “normal science” – what scientific institutions do in their laboratories. Against Lakatos, Feyerabend distrusted any rule-based scientific method at all. Science in the mid-1900s had, he wrote, fallen prey to the “tyranny of tightly knit, highly corroborated, and gracelessly presented theoretical systems.”

Viewing science as an institution, he said that science was a threat to democracy and that there must be “a separation of state and science just as there is a separation between state and religious institutions.” He called 20th-century science “the most aggressive, and most dogmatic religious institution.” He wrote that institutional science resembled the church of Galileo’s day more than it resembled Galileo himself. I think he would say the same of climate science.

Feyerabend complained that university research requires “a willingness to subordinate one’s ideas to those of a team leader.” In the case of global warming, government and government-funded scientists are deciding not only what is important as a scientific program but what is important as energy policy and social agenda. Feyerabend would be utterly horrified.

Feyerabend’s biggest concern, I suspect, would be the frequent alignment of climate scientists with alternative energy initiatives. Climate scientists who advocate for solar, wind, and hydrogen step beyond their expertise in diagnosing climate change into prescribing solutions, a policy domain involving engineering and economics. Michael Mann still prioritizes “100% renewable energy,” despite all evidence of its engineering and economic infeasibility.

Further, advocacy for a specific solution over others (nuclear power is often still shunned) suggests a theoretical precommitment likely to introduce observational bias. Climate research grants from renewable energy advocates, including NGOs and the Department of Energy’s ARPA-E program, create incentives for scientists to emphasize climate problems that those technologies could cure. Climate science has been a gravy train for bogus green tech, such as Solyndra and Abound Solar.

Why Not Naomi Oreskes?

All my science history gods are dead white men. Why not include a prominent living historian? Naomi Oreskes at Harvard is the obvious choice. We need not speculate about how she would view climate science. She has been happy to tell us. Her activism and writings suggest she functions more as an advocate for the climate political cause than as a historian of science. Her role extends beyond documenting the past to shaping contemporary debate.

Oreskes testified before U.S. congressional committees (House Select Committee on the Climate Crisis, 2019, and the Senate Budget Committee, 2023), as a Democratic-invited witness. There she accused political figures of harassing scientists and pushed for action against fossil fuel companies. She aligns with progressive anti-nuclear leanings. An objective historian would limit herself to historical facts and the resulting predictions and explanations rather than advocating specific legislative actions. She embraces the term “climate activist,” arguing that citizen engagement is essential for democracy.

Oreskes’s scholarship, notably her 2004 essay “The Scientific Consensus on Climate Change” and her book Merchants of Doubt, employs the narrative of universal scientific agreement on anthropogenic climate change while portraying dissent solely as industry-driven disinformation. She wrote that 100% of 928 peer-reviewed papers supported the IPCC’s position on climate change. Conflicting peer-reviewed papers show Oreskes to have, at best, cherry-picked data to bolster a political point. Pursuing legal attacks on fossil fuel companies is activism, not analysis.

Acts of the “Relevant Community”

Countless scientists themselves engage in climate advocacy, and even in analyzing the effectiveness of that advocacy: advocacy backed by science, and science applied to advocacy. A paradigmatic example – using Kuhn’s term literally – is Dr. James Lawrence Powell’s 2017 “The Consensus on Anthropogenic Global Warming Matters.” In it, Powell addresses a critic’s response to Powell’s earlier report on the degree of scientific consensus. Powell argues that 99.99% of scientists accept anthropogenic warming, rather than 97% as his critic claims. But the thrust of Powell’s paper is that the degree of consensus matters greatly, “because scholars have shown that the stronger the public believe the consensus to be, the more they support the action on global warming that human society so desperately needs.” Powell goes on for seven fine-print pages, citing Oreskes’s work, with charts and appendices on the degree of scientific consensus. He not only focuses on consensus, he seeks consensus about consensus.

Of particular interest to anyone with Kuhn’s perspective – let alone Feyerabend’s – is the way climate science treats its backsliders. Dissenters are damned from the start, but those who have left the institution (literally, in the case of The Intergovernmental Panel on Climate Change) are further vilified.

Dr. Richard Tol, lead author for the Fifth IPCC Assessment Report, later identified methodological flaws in IPCC work. Dr. Judith Curry, lead author for the Third Assessment Report, later became a prominent critic of the IPCC’s consensus-driven process. She criticized climate models and the IPCC’s dismissal of natural climate variability. She believes (in Kuhnian terms) that the IPCC’s theories are value-laden and that their observations are theory-laden, the theory being human causation. Scientific American, a once agenda-less publication, called Curry a “climate heretic.” Dr. Patrick Michaels, contributor to the Second Assessment Report, later emerged as a vocal climate change skeptic, arguing that the IPCC ignores natural climate variability and uses a poor representation of climate dynamics.

These scientists represent a small minority of the relevant community. But that community has challenged the motives and credentials of Tol, Curry, and Michaels more than their science. Michael Mann accused Curry of undermining science with “confusionism and denialism” in a 2017 congressional testimony. Mann said that any past legitimate work by Curry was invalidated by her “boilerplate denial drivel.” Mann said her exit strengthened the field by removing a disruptive voice. Indeed.

Tampering with Evidence

Everything above deals with methodological and social issues in climate science. Kuhn, Feyerabend, and even the Strong Program sociologists of science assumed that scientists were above fudging the data. Tony Heller has, for over a decade, assembled screenshots of NASA and NOAA temperature records showing continual revision of historic data, making the past look colder and the present look hotter. Heller’s opponents relentlessly engage in ad hominem attacks and character-based dismissals rather than focusing on the substance of his arguments. If I can pick substance from his opponents’ positions, it would be that Heller cherry-picks U.S.-only examples and dismisses global evidence and the corroboration of climate theory by evidence beyond temperature data. Heller may be guilty of cherry-picking. I haven’t followed the debate closely for many years.

But in 2013, I wrote to Judith Curry on the topic, assuming she was close to the issue. I asked her what fraction of NASA’s adjustments was consistent with strengthening the argument for 20th-century global warming, i.e., what fraction was consistent with Heller’s argument. She said the vast majority of the adjustments were.

Curry acknowledged that adjustments like those for urban heat-island effects and differences in observation times are justified in principle, but she challenged their implementation. In a 2016 interview with The Spectator, she said, “The temperature record has been adjusted in ways that make the past look cooler and the present warmer – it’s not a conspiracy, but it’s not neutral either.” She ties the bias to institutional pressures like funding and peer expectations. Feyerabend would smirk and remark that a conspiracy is not needed when the paradigm is ideologically aligned from the start.

In a 2017 testimony before the U.S. House Committee on Science, Space, and Technology, Curry said, “Adjustments to historical temperature data have been substantial, and in many cases, these adjustments enhance the warming trend.” She cited this as evidence of bias, implying the process lacks transparency and independent validation.

Conclusion

From the historical and philosophical perspectives discussed above, climate science can be critiqued as bad science. For the Logical Positivists, its global, far-future claims are hard to verify directly, challenging their empirical basis. For Hempel, its reliance on models and statistical trends rather than universal laws undermines its deductive explanatory power. For Popper, its long-term predictions resist falsification, blurring the line between science and non-science. For Kuhn, its dominant paradigm suppresses alternative viewpoints, hindering progress. Lakatos would likely endorse its progressive program, but would challenge its dogmatism. Feyerabend would be disgusted by its narrow methodology and its institutional rigidness. He would call it a religion – a bad one. He would quip that 97% of climate scientists agree that they do not want to be defunded. Naomi Oreskes thinks climate science is vital. I think it’s crap.



Fuck Trump: The Road to Retarded Representation

-Bill Storage, Apr 2, 2025

On February 11, 2025, the American Federation of Government Employees (AFGE) staged a “Rally to Save the Civil Service” at the U.S. Capitol. The event aimed to protest proposed budget cuts and personnel changes affecting federal agencies under the Trump administration. Notable attendees included Senators Brian Schatz (D-HI) and Chris Van Hollen (D-MD), and Representatives Donald Norcross (D-NJ) and Maxine Dexter (D-OR).

Dexter took the mic and said that “we have to fuck Trump.” Later Norcross led a “Fuck Trump” chant. The senators and representatives then joined a song with the refrain, “We want Trump in jail.” “Fuck Donald Trump and Elon Musk,” added Rep. Mark Pocan (D-WI).

This sort of locution might be seen as a paradigmatic example of free speech and authenticity in a moment of candid frustration, devised to align the representatives with a community that is highly critical of Trump. On this view, “Fuck Trump” should be understood within the context of political discourse and rhetorical appeal to a specific audience’s emotions and cultural values.

It might also be seen as a sad reflection of how low the Democratic Party has sunk, and how far the intellectual bar for becoming a representative in the US Congress has dropped.

I mostly write here about the history of science, more precisely, about History of Science, the academic field focused on the development of scientific knowledge and the ways that scientific ideas, theories, and discoveries have evolved over time. And how they shape and are shaped by cultural, social, political, and philosophical contexts. I held a Visiting Scholar appointment in the field at UC Berkeley for a few years.

The Department of the History of Science at UC Berkeley was created in 1960. There in 1961, Thomas Kuhn (1922 – 1996) completed the draft of The Structure of Scientific Revolutions, which very unexpectedly became the most cited academic book of the 20th century. I was fortunate to have second-hand access to Kuhn through an 18-year association with John Heilbron (1934 – 2023), who, outside of family, was by far the greatest influence on what I spend my time thinking about. John, Vice-Chancellor Emeritus of the UC System and senior research fellow at Oxford, was Kuhn’s grad student and researcher while Kuhn was writing Structure.

Thomas Kuhn

I want to discuss here the uncannily direct ties between Thomas Kuhn’s analysis of scientific revolutions and Rep. Norcross’s chanting “Fuck Trump,” along with two related aspects of the Kuhnian aftermath. The second aspect is the academic precedents that might be seen as justifying Norcross’s pronouncements. The third is the decline in academic standards since Kuhn was first taken as a validation of cultural relativism. To make this case, I need to explain why Thomas Kuhn became such a big deal, what relativism means in this context, and what Kuhn had to do with relativism.

To do that I need to use the term epistemology. I can’t do without it. Epistemology deals with questions that were more at home with the ancient Greeks than with modern folk. What counts as knowledge? How do we come to know things? What can be known for certain? What counts as evidence? What do we mean by probable? Where does knowledge come from, and what justifies it?

These questions are key to History of Science because science claims to have special epistemic status. Scientists and most historians of science, including Thomas Kuhn, believe that most science deserves that status.

Kernels of scientific thinking can be found in the ancient Greeks and Romans and sporadically through the Middle Ages. Examples include Adelard of Bath, Roger Bacon, John of Salisbury, and Averroes (Ibn Rushd). But prior to the Copernican Revolution (starting around 1550 and exploding under Galileo, Kepler, and Newton) most people were happy with the idea that knowledge was “received,” either through the ancients or from God and religious leaders, or from authority figures of high social status. A statement or belief was considered “probable”, not if it predicted a likely future outcome but if it could be supported by an authority figure or was justified by received knowledge.

Scientific thinking, roughly after Copernicus, introduced the radical notion that the universe could testify on its own behalf. That is, physical evidence and observations (empiricism) could justify a belief against all prior conflicting beliefs, regardless of what authority held them.

Science, unlike the words of God, theologians, and kings, does not deal in certainty, despite the number of times you have heard the phrase “scientifically proven fact.” There is no such thing. Proof is in the realm of math, not science. Laws of nature are generalizations about nature that we have good reason to act as if we know to be universally and timelessly true. But they are always contingent. 2 + 2 is always 4, in the abstract mathematical sense. Two atoms plus two atoms sometimes makes three atoms. It’s called fission or transmutation. No observation can ever show 2 + 2 = 4 to be false. In contrast, an observation may someday show E = mc² to be false.

Science was contagious. Empiricism laid the foundation of the Enlightenment by transforming the way people viewed the natural world. John Locke’s empirical philosophy greatly influenced the foundation of the United States. Empiricism contrasts with rationalism, the idea that knowledge can be gained by sheer reasoning and through innate ideas. Plato was a rationalist. Aristotle thought Plato’s rationalism was nonsense. His writings show he valued empiricism, though he was not a particularly good empiricist (“a dreadfully bad physical scientist,” wrote Kuhn). 2400 years ago, there was tension between rationalism and empiricism.

The ancients held related concerns about the contrast between absolutism and relativism. Absolutism posits that certain truths, moral principles, and standards are universally and timelessly valid, regardless of perspectives, cultures, or circumstances. Relativism, in contrast, holds that truth, morality, and knowledge are context-sensitive and are not universal or timeless.

In his dialogue Theaetetus, Plato examines epistemological relativism by challenging his adversary Protagoras, who asserts that truth and knowledge are not absolute. In Theaetetus, Socrates, Plato’s mouthpiece, asks, “If someone says, ‘This is true for me, but that is true for you,’ then does it follow that truth is relative to the individual?”


Epistemological relativism holds that truth is relative to a community. It is closely tied to the anti-enlightenment romanticism that developed in the late 1700s. The romantics thought science was spoiling the mystery of nature. “Our meddling intellect mis-shapes the beauteous forms of things: We murder to dissect,” wrote Wordsworth.

Relativism of various sorts – epistemological, moral, even ontological (what kinds of things exist) – resurged in the mid-1900s in poststructuralism and postmodernism. I’ll return to postmodernism later.

The contingent nature of scientific beliefs (as opposed to the certitude of math), right from the start in the Copernican era, was not seen by scientists or philosophers as support for epistemological relativism. Scientists – good ones, anyway – hold it only probable, not certain, that all copper is conductive. This contingent state of scientific knowledge does not, however, mean that copper can be conductive for me but not for you. Whatever evidence might exist for the conductivity of copper, scientists believe, can speak for itself. If we disagreed about conductivity, we could pull out an Ohmmeter and that would settle the matter, according to scientists.

Science has always had its enemies, at times including clerics, romantics, Luddites, and environmentalists. Science, viewed as an institution, could be seen as the monster that spawned atomic weapons, environmental ruin, stem cell hubris, and inequality. But those are consequences of science, external to its fundamental method. They don’t challenge science’s special epistemic status, but epistemic relativists do.

Relativism about knowledge – epistemological relativism – gained steam in the 1800s. Martin Heidegger, Karl Marx (though not intentionally), and Sigmund Freud, among others, brought the idea into academic spheres. While moral relativism and ethical pluralism (likely influenced by Friedrich Nietzsche) had long been in popular culture, epistemological relativism remained confined to Humanities departments, apparently because the objectivity of science was unassailable.

Enter Thomas Kuhn, Physics PhD turned historian for philosophical reasons. His Structure was originally published as a humble monograph in International Encyclopedia of Unified Science, then as a book in 1962. One of Kuhn’s central positions was that evidence cannot really settle non-trivial scientific debates because all evidence relies on interpretation. One person may “see” oxygen in the jar while another “sees” de-phlogisticated air. (Phlogiston was part of a theory of combustion that was widely believed before Antoine Lavoisier “disproved” it along with “discovering” oxygen.) Therefore, there is always a social component to scientific knowledge.

Kuhn’s point, seemingly obvious and innocuous in retrospect, was really nothing new. Others, like Michael Polanyi, had published similar thoughts earlier. But for reasons we can only guess about in retrospect, Kuhn’s contention that scientific paradigms are influenced by social, historical, and subjective factors was just the ammo that epistemological relativism needed to escape the confines of Humanities departments. Kuhn’s impact probably stemmed from the political climate of the 1960s and the detailed way he illustrated examples of theory-laden observations in science. His claim that, “even in physics, there is no standard higher than the assent of the relevant community” was devoured by socialists and relativists alike – two classes with much overlap in academia at that time. That makes Kuhn a relativist of sorts, but he still thought science to be the best method of investigating the natural world.

Kuhn argued that scientific revolutions and paradigm shifts (a term coined by Kuhn) are fundamentally irrational. That is, during scientific revolutions, scientific communities depart from empirical reasoning. Adherents often defend their theories illogically, discounting disconfirming evidence without grounds. History supports Kuhn on this for some cases, like Copernicus vs. Ptolemy, Einstein vs. Newton, quantum mechanics vs. Einstein’s deterministic view of the subatomic, but not for others like plate tectonics and Watson and Crick’s discovery of the double-helix structure of DNA, where old paradigms were replaced by new ones with no revolution.

The Strong Programme, introduced by David Bloor, Barry Barnes, John Henry and the Edinburgh School as Sociology of Scientific Knowledge (SSK), drew heavily on Kuhn. It claimed to understand science only as a social process. Unlike Kuhn, it held that all knowledge, not just science, should be studied in terms of social factors without privileging science as a special or uniquely rational form of knowledge. That is, it denied that science had a special epistemic status and outright rejected the idea that science is inherently objective or rational. For the Strong Programme, science was “socially constructed.” The beliefs and practices of scientific communities are shaped solely by social forces and historical contexts. Bloor and crew developed their “symmetry principle,” which states that the same kinds of causes must be used to explain both true and false scientific beliefs.

The Strong Programme folk called themselves Kuhnians. What they got from Kuhn was that science should come down from its pedestal, since all knowledge, including science, is relative to a community. And each community can have its own truth. That is, the Strong Programmers were pure epistemological relativists.  Kuhn repudiated epistemological relativism (“I am not a Kuhnian!”), and to his chagrin, was still lionized by the strong programmers. “What passes for scientific knowledge becomes, then, simply the belief of the winners. I am among those who have found the claims of the strong program absurd: an example of deconstruction gone mad.” (Deconstruction is an essential concept in postmodernism.)

“Truth, at least in the form of a law of noncontradiction, is absolutely essential,” said Kuhn in a 1990 interview. “You can’t have reasonable negotiation or discourse about what to say about a particular knowledge claim if you believe that it could be both true and false.”

No matter. The Strong Programme and other Kuhnians appropriated Kuhn and took it to the bank. And the university, especially the social sciences. Relativism had lurked in academia since the 1800s, but Kuhn’s scientific justification that science isn’t justified (in the eyes of the Kuhnians) brought it to the surface.


Herbert Marcuse, “Father of the New Left,” also at Berkeley in the 1960s, does not appear to have had contact with Kuhn. But Marcuse, like the Strong Programme, argued that knowledge was socially constructed, a position that Kuhnians attributed to Kuhn. Marcuse was critical of the way that Enlightenment values and scientific rationality were used to legitimize oppressive structures of power in capitalist societies. He argued that science, in its role as part of the technological apparatus, served the interests of oppressors. Marcuse saw science as an instrument of domination rather than emancipation. The term “critical theory” originated in the Frankfurt School in the early 20th century, but Marcuse, once a main figure in Frankfurt’s Institute for Social Research, put Critical Theory on the map in America. Higher academia began its march against traditional knowledge, waving the banners of Marcusian cynicism and Kuhnian relativism.

Postmodernism means many things in different contexts. In 1960s academia, it referred to a reaction against modernism and Enlightenment thinking, particularly thought rooted in reason, progress, and universal truth. Many of the postmodernists saw in Kuhn a justification for certain forms of both epistemic and moral relativism. Prominent postmodernists included Jean-François Lyotard, Michel Foucault, Jean Baudrillard, Richard Rorty, and Jacques Derrida. None of them, to my knowledge, ever made a case for unqualified epistemological relativism. Their academic intellectual descendants often do.

20th century postmodernism had significant intellectual output, a point lost on critics like Gross and Levitt (Higher Superstition, 1994) and Dinesh D’Souza. Derrida’s deconstruction of written texts took hermeneutics to a new level and has proved immensely valuable in the analysis of ancient texts, as has the reader-response criticism approach put forth by Louise Rosenblatt (who was not aligned with the radical skepticism typical of postmodernism) and Jacques Derrida, and embraced by Stanley Fish (more on whom below). All practicing scientists would benefit from Richard Rorty’s elaborations on the contingency of scientific knowledge, which are consistent with those held by Descartes, Locke, and Kuhn.

Michel Foucault attacked science directly, particularly psychology and, oddly, from where we stand today, sociology. He thought those sciences constructed a specific normative picture of what it means to be human, and that the farther a person was from the idealized clean-cut straight white western European male, the more aberrant those sciences judged the person to be. Males, on Foucault’s view, had repressed women for millennia to construct an ideal of masculinity that serves as the repository of political power. He was brutally anti-Enlightenment and was disgusted that “our discourse has privileged reason, science, and technology.” Modernity must be condemned constantly and ruthlessly. Foucault was gay, and for a time, he wanted sex to be the center of everything.

Foucault was once a communist. His influence on identity politics and woke ideology is obvious, but Foucault ultimately condemned communism and concluded that sexual identity was an absurd basis on which to form one’s personal identity.

Rosenblatt, Rorty, Derrida, and even at times Foucault, despite their radical positions, displayed significant intellectual rigor. This seems far less true of their intellectual offspring. Consider Sandra Harding, author of “The Gender Dimension of Science and Technology” and consultant to the U.N. Commission on Science and Technology for Development. Harding argues that the Enlightenment resulted in a gendered (male) conception of knowledge. She wrote in The Science Question in Feminism that it would be “illuminating and honest” to call Newton’s laws of motion “Newton’s rape manual.”

Cornel West, who has held fellowships at Harvard, Yale, Princeton, and Dartmouth, teaches that the Enlightenment concepts of reason and individual rights were projected by the ruling classes of the West to guarantee their own liberty while repressing racial minorities. Critical Race Theory, the offspring of Marcuse’s Critical Theory, questions, as stated by Richard Delgado in Critical Race Theory, “the very foundations of the liberal order, including equality theory, legal reasoning, Enlightenment rationalism, and neutral principles of constitutional law.”

Allan Bloom, a career professor of Classics who translated Plato’s Republic in 1968, wrote in his 1987 The Closing of the American Mind on the decline of intellectual rigor in American universities. Bloom wrote that in the 1960s, “the culture leeches, professional and amateur, began their great spiritual bleeding” of academics and democratic life. Bloom thought that the pursuit of diversity and universities’ desire to increase the number of college graduates at any cost undermined the outcomes of education. He saw, in the 1960s, social and political goals taking priority over the intellectual and academic purposes of education, with the bulk of unfit students receiving degrees of dubious value in the Humanities, his own area of study.

At American universities, Marx, Marcuse, and Kuhn were invoked in the Humanities to paint the West, and especially the US, as cultures of greed and exploitation. Academia believed that Enlightenment epistemology and Enlightenment values had been stripped of their grandeur by sound scientific and philosophical reasoning (i.e. Kuhn). Bloom wrote that universities were offering students every concession other than education. “Openness used to be the virtue that permitted us to seek the good by using reason. It now means accepting everything and denying reason’s power,” wrote Bloom, adding that by 1980 the belief that truth is relative was essential to university life.

Anti-foundationalist Stanley Fish, Visiting Professor of Law at Yeshiva University, invoked Critical Theory in 1985 to argue that American judges should think of themselves as “supplementers” rather than “textualists.” As such, they “will thereby be marginally more free than they otherwise would be to infuse into constitutional law their current interpretations of our society’s values.” Fish openly rejects the idea of judicial neutrality because interpretation, whether in law or literature, is always contingent and socially constructed.


If Bloom’s argument is even partly valid, we now live in a second or third generation of the academic consequences of the combined decline of academic standards and the incorporation of moral, cultural, and epistemological relativism into college education. We have graduated PhDs in the Humanities, educated by the likes of Sandra Harding and Cornel West, who never should have been in college, and who learned nothing of substance there beyond relativism and a cynical disgust for reason. And those PhDs are now educators who have graduated more PhDs.

Peer-reviewed journals are now reviewed by peers who, by the standards of three generations earlier, might not be qualified to grade spelling tests. The academic products of this educational system are hired to staff government agencies and HR departments, and to teach school children Critical Race Theory, Queer Theory, and Intersectionality – which are given the epistemic eminence of General Relativity – and the turpitude of national pride and patriotism.

An example, with no offense intended to those who call themselves queer, would be to challenge the epistemic status of Queer Theory. Is it parsimonious? What is its research agenda? Does it withstand empirical scrutiny and generate consistent results? Do its theorists adequately account for disconfirming evidence? What bold hypothesis in Queer Theory makes a falsifiable prediction?

Herbert Marcuse’s intellectual descendants, educated under the standards detailed by Bloom, now comprise progressive factions within the Democratic Party, particularly those advocating socialism and Marxist-inspired policies. The rise of figures like Bernie Sanders, Alexandria Ocasio-Cortez, and others associated with the “Democratic Socialists of America” reflects a broader trend in American politics toward embracing a combination of Marcuse’s critique of capitalism, epistemic and moral relativism, and a hefty decline in academic standards.

One direct example is the notion that certain forms of speech including reactionary rhetoric should not be tolerated if they undermine social progress and equity. Allan Bloom again comes to mind: “The most successful tyranny is not the one that uses force to assure uniformity but the one that removes the awareness of other possibilities.”

Echoes of Marcuse and of others of the 1960s who endorsed rage and violence in anti-colonial struggles (Frantz Fanon, Stokely Carmichael, the Weather Underground) are heard in modern academic outrage that its adherents see as a necessary reaction against oppression. Judith Butler of UC Berkeley, who called the October 2023 Hamas attacks an “act of armed resistance,” once wrote that “understanding Hamas, Hezbollah as social movements that are progressive, that are on the left, that are part of a global left, is extremely important.” College students now learn that rage is an appropriate and legitimate response to systemic injustice, patriarchy, and oppression. Seeing the US as a repressive society that fosters complacency toward the marginalization of under-represented groups while striving to impose heteronormativity and hegemonic power is, to academics like Butler, grounds for rage, if not for violent response.

Through their college educations and through ideas and rhetoric supported by “intellectual” movements bred in American universities, politicians, particularly those more aligned with relativism and Marcuse-styled cynicism, feel justified in using rhetorical tools born of relaxed academic standards and tangential admissions criteria.

In the relevant community, “Fuck Trump” is not an aberrant tantrum in an echo chamber but a justified expression of solidarity-building and speaking truth to power. But I would argue, following Bloom, that it reveals political retardation born in shallow academic domains after the deterioration of civic educational priorities.


Examples of such academic domains serving as obvious predecessors to present causes at the center of left politics include:

  • 1965: Herbert Marcuse (UC Berkeley) in Repressive Tolerance argues for intolerance toward prevailing policies, stating that a “liberating tolerance” would consist of intolerance to right-wing movements and toleration of left-wing movements. Marcuse advanced Critical Theory and a form of Marxism in which genders and races replace laborers as the victims of capitalist oppression.

  • 1971: Murray Bookchin’s (Alternative University, New York) Post-Scarcity Anarchism followed by The Ecology of Freedom (1982) introduce the eco-socialism that gives rise to the Green New Deal.

  • 1980: Derrick Bell (New York University School of Law), in “Brown v. Board of Education and the Interest-Convergence Dilemma,” argues that civil rights advance only when they align with the interests of white elites. Later, Bell, Kimberlé Crenshaw, and Richard Delgado (Seattle University) develop Critical Race Theory, claiming that “colorblindness” is a form of oppression.

  • 1984: Michel Foucault’s (Collège de France) The Courage of Truth addresses how individuals and groups form identities in relation to truth and power. His work greatly informs Queer Theory, post-colonial ideology, and the concept of toxic masculinity.

  • 1985: Stanley Fish (Yeshiva University) and Thomas Grey (Stanford Law School) reject judicial neutrality and call for American judges to infuse into constitutional law their current interpretations of our society’s values.

  • 1989: Kimberlé Crenshaw of Columbia Law School introduced the concept of Intersectionality, claiming that traditional frameworks for understanding discrimination were inadequate because they overlooked the ways that multiple forms of oppression (e.g., race, gender, class) interacted.

  • 1990: Judith Butler’s (UC Berkeley) Gender Trouble introduces the concept of gender performativity, arguing that gender is socially constructed through repeated actions and expressions. Butler argues that the emotional well-being of vulnerable individuals supersedes the right to free speech.

  • 1991: Teresa de Lauretis (UC Santa Cruz) introduces the term “Queer Theory” to challenge traditional understandings of gender and sexuality, particularly in relation to identity, norms, and power structures.

Marcusian cynicism might have simply died an academic fantasy, as it seemed destined to do through the early 1980s, if not for its synergy with the cultural relativism that was bolstered by the universal and relentless misreading and appropriation of Thomas Kuhn that permeated academic thought in the 1960s through 1990s. “Fuck Trump” may have happened without Thomas Kuhn through a different thread of history, but the path outlined here is direct and well-travelled. I wonder what Kuhn would think.


Extraordinary Popular Miscarriages of Science (part 1)

By Bill Storage, Jan. 18, 2024

I’ve been collecting examples of bad science, many of them from friends and scientists I’ve talked to. What counts as bad science depends on what one means by science. At least three very different things are called science now:

  1. An approach or set of rules and methods used to understand and predict nature
  2. A body of knowledge about nature and natural processes
  3. An institution, culture or community of people, including academic, government and corporate professionals, who are involved, or are said to be involved, in 1. or 2. above

Many of my examples of bad science fall under the third category and involve, or are dominated by, academicians, government offices, and corporations. Below are a few of my favorites from the past century or so. Many people tend to think that bad science happened in medieval times and that the modern western world is immune to that sort of thing. On the contrary, bad science may be on the rise. For the record, I don’t judge a theory bad merely because it was shown to be wrong, even spectacularly wrong. Geocentricity was a good theory. Phlogiston (a 17th-century theoretical substance believed to escape from matter during combustion), caloric theory (an 18th-century weightless fluid thought to flow from hot matter to cold), and the luminiferous ether (a 17th–19th-century postulated medium for the propagation of light waves) were all good theories, though we now have robust evidence against them. All had substantial predictive power. All posited unobservable entities to explain phenomena. But predictive success alone cannot justify belief in unobservable entities. Creation science and astrology were always bad science.

To clarify the distinction between bad science and wrong theories, consider Trofim Lysenko. He was nominally a scientist. Some of his theories appear to be right. He wore the uniform, held the office, and published sciencey papers. But he did not behave scientifically (consistent with definition 1 above) when he ignored the boundless evidence and prior art about heredity. Wikipedia dubs him a pseudoscientist, despite his having some successful theories and making testable hypotheses. Pseudoscience, says Wikipedia, makes unfalsifiable claims. Lysenko’s bold claims were falsifiable, and they were falsified. Wikipedia talks as if the demarcation problem – telling science from pseudoscience – is a closed case. Nah. Rather than tackle that matter of metaphysics and philosophy, I’ll offer that Lysenkoism, creation science, and astrology are all sciences, but bad science. While they all make some testable predictions, they also make a lot of vague ones, their interest in causation is puny, and their research agendas are scant.

Good science entails testable, falsifiable theories and bold predictions. Most philosophers of science – notably excluding Karl Popper, who thought that only withstanding falsification mattered – have held that making succinct, correct predictions makes a theory good, and that successful theories make for good science. Larry Laudan gave, in my view, a fine definition of a successful theory in his 1984 Philosophy of Science: a theory is successful provided it makes substantially correct predictions, leads to efficacious interventions in the natural order, or passes a suitable battery of tests.

Concerns over positing unobservables open a debate on just how observable electrons, quarks, and the Higgs field are. Not here though. I am more interested in bad science (in the larger senses of science) than in wrong theories. Badness often stems not from seeking to explain and predict nature and failing through an unfair reading of the evidence, but from cloaking a non-scientific agenda in the trappings of science. I’m interested in what Kuhn, Feyerabend, and Lakatos dealt with – the non-scientific interests of academicians, government offices, and corporations and their impact on what gets studied and how, how confirming evidence is sought and processed, how disconfirming evidence is processed, avoided, or dismissed, and whether Popperian falsifiability was ever on the table.

Recap of Kuhn, Feyerabend, and Lakatos

Thomas Kuhn claimed that normal (day-to-day lab-coat) science consisted of showing how nature can be fit into the existing theory. That is, normal science is decidedly unscientific. It is bad science, aimed at protecting the reigning paradigm from disconfirming evidence. On Kuhn’s view, your scientific education teaches you how to see things as your field requires them to be seen. He noted that medieval and renaissance astronomers never saw the supernovae that were seen in China. Europeans “knew” that the heavens were unchanging. Kuhn used the terms dogma and indoctrination to piss off scientists of his day. He thought that during scientific crises (Newton vs. Einstein being the exemplar) scientists clutched at new theories, often irrationally, and that the vicious competition ended when scientific methods determined the winner of a new paradigm. Kuhn was, unknown to most of his social-science groupies, a firm believer that the scientific enterprise ultimately worked. He thought normal science’s badness was okay because crisis science reverted to good science: in crisis, the paradigm was overthrown once the scientists got interested in philosophy of science.

When Kuhn was all the rage in the early 1960s, radical sociologists of science, all very left-leaning at the time, doubted that science could stay good under the influence of government and business. Recall worries about the military-industrial complex. They thought that interest, whether economic or political, could keep science bad. I think history has sided with those sociologists, though today’s professional sociologists, now overwhelmingly employed by the US and state governments, are astonishingly silent on the matter. Granting, for the sake of argument, that social science is science, its practitioners seem to be living proof that interest can dominate not only research agendas but what counts as evidence, along with the handling of evidence toward what becomes dogma in the paradigm.

Paul Feyerabend, though also no enemy of science, thought Kuhn stopped short of exposing the biggest problems with science. Feyerabend called science, referring to science as an institution, a threat to democracy. He called for “a separation of state and science just as there is a separation between state and religious institutions.” He thought that 1960s institutional science resembled more the church of Galileo’s day than it resembled Galileo. Feyerabend thought theories should be tested against each other, not merely against the world. He called institutional science a threat because it increasingly held complete control over what is deemed scientifically important for society. Historically, he observed, individuals, by voting with their attention and their dollars, have chosen what counts as being socially valuable. Feyerabend leaned rather far left. In my History of Science appointment at UC Berkeley I was often challenged for invoking him against bad-science environmentalism because Feyerabend wouldn’t have supported a right-winger. Such is the state of H of S at Berkeley, now subsumed by Science and Technology Studies, i.e., same social studies bullshit (it all ends in “Studies”), different pile. John Heilbron, rest in peace.

Imre Lakatos had been imprisoned by Hungary’s Stalinist regime for revisionism. Through that experience he came to see Kuhn’s assent of the relevant community as a criterion for establishing a new post-crisis paradigm as not much of a virtue; it risked becoming “mob psychology.” If the relevant community has excessive organizational or political power, it can put overpowering demands on individual scientists and force them to subordinate their ideas to the community (see String Theory’s attack on Lee Smolin below). Lakatos saw the quality of a science’s research agenda as a strong indicator of its merit. Thin research agendas, like those of astrology and creation science, reveal bad science.

Selected Bad Science

Race Science and Eugenics
Eugenics is an all-time favorite, not just of mine. It is a poster child for evil-agenda science driven by a fascist. That seems enough knowledge of the matter for the average student of political science. But eugenics did not emerge from fascism, and support for it was overwhelming in progressive circles, particularly in American universities and the liberal elite. Alfred Binet of IQ-test fame, H. G. Wells, Margaret Sanger, John Harvey Kellogg, George Bernard Shaw, Theodore Roosevelt, and apparently Oliver Wendell Holmes, based on his decision that compulsory sterilization was within a state’s rights, found eugenics attractive. Financial support for the eugenics movement came from the Carnegie Foundation, the Rockefeller Institute, and the State Department. Harvard endorsed it, as did Stanford’s first president, David Starr Jordan. Yale’s famed economist and social reformer Irving Fisher was a supporter. Most aspects of eugenics in the United States ended abruptly when we discovered that Hitler had embraced it and was using it to defend the extermination of Jews. Germany’s 1933 Law for the Prevention of Hereditarily Defective Offspring borrowed from the model sterilization law drawn up by the American eugenicist Harry Laughlin. Eugenics was a classic case of advocates and activists, clueless of any sense of science, broadcasting that the science had been settled (the term “race science” exploded onto the scene as if it had always been a thing). In an era where many Americans enjoy blaming the living – and some of the living enjoy accepting that blame – for the sins of our fathers, one wonders why these noble academic institutions have not come forth to offer recompense for their eugenics transgressions.

The War on Fat

In 1977 a Senate committee led by George McGovern published “Dietary Goals for the United States,” imploring us to eat less red meat, eggs, and dairy products. The U.S. Department of Agriculture (USDA) then issued its first dietary guidelines, which focused on cutting cholesterol and not only meat fat but fat from any source. The National Institutes of Health recommended that all Americans, including young kids, cut fat consumption. In 1980 the US government broadcast that eating less fat and cholesterol would reduce your risk of heart attack. Evidence then and ever since has not supported this edict. A low-fat diet was alleged to mitigate many metabolic risk factors and to be essential for achieving a healthy body weight. However, over the past 45 years, obesity in the US climbed dramatically while dietary fat levels fell. Europeans, with largely the same genetic makeup but higher-fat diets, are far thinner. The science of low-fat diets and the tenets of related institutions like insurance, healthcare, and school lunches have seemed utterly immune to evidence. Word is finally trickling out. The NIH has not begged pardon.

The DDT Ban

Rachel Carson appeared before the Department of Commerce in 1963, asking for a “Pesticide Commission” to regulate DDT. Ten years later, Carson’s “Pesticide Commission” had become the Environmental Protection Agency, which banned DDT in the US. The rest of the world followed, including Africa, which was bullied into doing so by European environmentalists and aid agencies.

By 1960, DDT use had eliminated malaria from eleven countries. Crop production, land values, and personal wealth rose. In eight years of DDT use, Nepal’s malaria caseload dropped from over two million cases to 2,500, and life expectancy rose from 28 to 42 years.

Malaria reemerged when DDT was banned. Since the ban, tens of millions of people have died from malaria. Following Rachel Carson’s Silent Spring narrative, environmentalists claimed that, with DDT, cancer deaths would have negated the malaria survival rates. No evidence supported this. It was fiction invented by Carson. The only type of cancer that increased during DDT use in the US was lung cancer, which correlated with cigarette use. But Carson instructed citizens and governments that DDT caused leukemia, liver disease, birth defects, premature births, and other chronic illnesses. If you “know” that DDT altered the structure of eggs, causing bird populations to dwindle, it is Carson’s doing.

Banning DDT didn’t save the birds, because DDT wasn’t the cause of US bird death as Carson reported. While bird populations had plunged prior to DDT’s first use, the bird death at the center of her impassioned plea never happened. We know this from bird count data and many subsequent studies. Carson, in her work at Fish and Wildlife Service and through her participation in Audubon bird counts, certainly knew that during US DDT use, the eagle population doubled, and robin, dove, and catbird counts increased by 500%. Carson lied like hell and we showered her with praise and money. Africans paid with their lives.

In 1969 the Environmental Defense Fund demanded a hearing on DDT. The 8-month investigation concluded DDT was not mutagenic or teratogenic. No cancer, no birth defects. It found no “deleterious effect on freshwater fish, estuarine organisms, wild birds or other wildlife.” Yet William Ruckelshaus, first director of the EPA, who never read the transcript, chose to ban DDT anyway. Joni Mitchell was thrilled. DDT was replaced by more harmful pesticides. NPR, the NY Times, and the Puget Sound Institute still report a “preponderance of evidence” of DDT’s dangers.

When challenged with the claim that DDT never killed kids, the Rachel Carson Landmark Alliance responded in 2017 that indeed it had: a two-year-old drank an ounce of 5% DDT in a solution of kerosene and died. Now there’s scientific integrity.

Vilification of Cannabis

I got this one from my dentist; I had never considered it before. White-collar, or rather, work-from-home, California potheads think this problem has been overcome. Far from it. Cannabis use violates federal law. Republicans are too stupid to repeal it, and Democrats are too afraid of looking like hippies. According to Quest Diagnostics, in 2020, 4.4% of workers failed their employers’ drug tests. Blue-collar Americans, particularly those who might be a sub-sub-subcontractor on a government project, are subject to drug tests. Testing positive for weed can cost you your job. So instead of partying on pot, the shop floor consumes immense amounts of alcohol, increasing its risk of accidents at work and in the car, harming its health, and raising its risk of hard drug use. To the admittedly small extent that the concept of a gateway drug is valid, marijuana is probably not one and alcohol almost certainly is. Racism, big pharma lobbyists, and social control are typically blamed for keeping cannabis illegal. Governments may also have concluded that tolerating weed at the state level while maintaining federal prohibition is an optimal tax revenue strategy. Cannabis tolerance at the state level appears to have reduced opioid use and opioid-related ER admissions.

Stoners who scoff at anti-cannabis propaganda like Reefer Madness might be unaware that a strong correlation between psychosis and cannabis use has been known for decades. But inferring causation from that correlation was always either highly insincere (huh huh) or very bad science. Recent analysis of study participants’ genomes showed that those with the strongest genetic profile for schizophrenia were also more likely to use cannabis in large amounts. So unless you follow Lysenko, who thought acquired traits were passed to offspring, pot is unlikely to cause psychosis. When A and B correlate, either A causes B, B causes A, or C causes both, as appears to be the case with schizophrenic potheads.
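The confounder logic in that last sentence is easy to demonstrate numerically. Below is a minimal sketch in plain Python, using made-up numbers rather than real epidemiology: a latent factor C (standing in for genetic liability) drives both A (heavy cannabis use) and B (psychosis), and A and B come out strongly correlated even though neither influences the other.

```python
import random

random.seed(0)

# Hypothetical model: C causes A and C causes B; A and B never touch each other.
n = 10_000
c = [random.gauss(0, 1) for _ in range(n)]     # confounder, e.g. genetic liability
a = [ci + random.gauss(0, 1) for ci in c]      # "cannabis use": depends only on C
b = [ci + random.gauss(0, 1) for ci in c]      # "psychosis": depends only on C

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

r = pearson(a, b)
print(f"correlation(A, B) = {r:.2f}")  # close to 0.5, with no causal arrow between A and B
```

With these (arbitrary) effect sizes the theoretical correlation is 0.5, and the simulated value lands near it. The point is not the number but the shape of the causal graph: a study that only measures A and B can never distinguish this world from one where A causes B.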

To be continued.


Paul Feyerabend, The Worst Enemy of Science

“How easy it is to lead people by the nose in a rational way.”

A similarly named post I wrote on Paul Feyerabend seven years ago turned out to be my most popular post by far. Seeing it referenced in a few places has made me cringe, and made me face the fact that I failed to make my point. I’ll try to correct that here. I don’t remotely agree with the paper in Nature that called Feyerabend the worst enemy of science, nor do I side with the postmodernists who idolize him. I do find him to be one of the most provocative thinkers of the 20th century: brash, brilliant, and sometimes full of crap.

Feyerabend opened his profound Against Method by telling us always to remember that what he writes in the book does not reflect any deep convictions of his, but that he intends merely to “show how easy it is to lead people by the nose in a rational way.” I.e., he was telling us what he thought we needed to hear more than what he necessarily believed. In his autobiography he wrote that for Against Method he had used older material but had “replaced moderate passages with more outrageous ones.” Those using and abusing Feyerabend today have certainly forgotten what this provocateur, who called himself an entertainer, told us always to remember about his writings.

Any who think Feyerabend frivolous should examine the scientific rigor in his analysis of Galileo’s work. Any who find him to be an enemy of science should actually read Against Method instead of reading about him, as quotes pulled from it can be highly misleading as to his intent. My communications with some of his friends after he died in 1994 suggest that while he initially enjoyed ruffling so many feathers with Against Method, he became angered and ultimately depressed over both critical reactions against it and some of the audiences that made weapons of it. In 1991 he wrote, “I often wished I had never written that fucking book.”

I encountered Against Method searching through a library’s card catalog seeking an authority on the scientific method. I learned from Feyerabend that no set of methodological rules fits the great advances and discoveries in science. It’s obvious once you think about it. Pick a specific scientific method – say the hypothetico-deductive model – or any set of rules, and Feyerabend will name a scientific discovery that would not have occurred had the scientist, from Galileo to Feynman, followed that method, or any other.

Part of Feyerabend’s program was to challenge the positivist notion that in real science, empiricism trumps theory. Galileo’s genius, for Feyerabend, was allowing theory to dominate observation. In Dialogue Galileo wrote:

Nor can I ever sufficiently admire the outstanding acumen of those who have taken hold of this opinion and accepted it as true: they have, through sheer force of intellect, done such violence to their own senses as to prefer what reason told them over that which sensible experience plainly showed them to be the contrary.

For Feyerabend, against Popper and the logical positivists of the mid-1900s, Galileo’s case exemplified a need to grant theory priority over evidence. This didn’t sit well with the empiricist leanings of the post-war western world. It didn’t sit well with most scientists or philosophers. Sociologists and literature departments loved it. It became fuel for the fire of relativism sweeping America in the 70s and 80s, and for the 1990s social constructivists eager to demote science to just another literary genre.

But in context, and in the spheres for which Against Method was written, many people – including Feyerabend’s peers from 1970 Berkeley, with whom I’ve had many conversations on the topic – conclude that the book’s goading style was a typical Feyerabendian corrective provocation to that era’s positivistic dogma.

Feyerabend distrusted the orthodox social practices of what Thomas Kuhn termed “normal science” – what scientific institutions do in their laboratories. Unlike their friend Imre Lakatos, Feyerabend distrusted any rule-based scientific method at all. Instead, he praised scientific innovation and individual creativity. For Feyerabend, science in the mid-1900s had fallen prey to the “tyranny of tightly-knit, highly corroborated, and gracelessly presented theoretical systems.” What would he say if alive today?

As with everything in the philosophy of science in the late 20th century, some of the disagreement between Feyerabend, Kuhn, Popper and Lakatos revolved around miscommunication and sloppy use of language. The best known case of this was Kuhn’s inconsistent use of the term paradigm. But they all (perhaps least so Lakatos) talked past each other by failing to differentiate different meanings of the word science, including:

  1. An approach or set of rules and methods for inquiry about nature
  2. A body of knowledge about nature
  3. An institution, culture or community of scientists, including academic, government and corporate

Kuhn and Feyerabend in particular vacillated between meaning science as a set of methods and science as an institution. Feyerabend certainly was referring to an institution when he said that science was a threat to democracy and that there must be “a separation of state and science just as there is a separation between state and religious institutions.” Along these lines Feyerabend thought that modern institutional science resembles more the church of Galileo’s day than it resembles Galileo.

On the matter of state control of science, Feyerabend went further than Eisenhower did in his “military industrial complex” speech, even with the understanding that what Eisenhower was describing was a military-academic-industrial complex. Eisenhower worried that a government contract with a university “becomes virtually a substitute for intellectual curiosity.” Feyerabend took this worry further, writing that university research requires conforming to orthodoxy and “a willingness to subordinate one’s ideas to those of a team leader.” Feyerabend disparaged Kuhn’s normal science as dogmatic drudgery that stifles scientific creativity.

A second area of apparent miscommunication about the history/philosophy of science in the mid 1900’s was the descriptive/normative distinction. John Heilbron, who was Kuhn’s grad student when Kuhn wrote Structure of Scientific Revolutions, told me that Kuhn absolutely despised Popper, not merely as a professional rival. Kuhn wanted to destroy Popper’s notion that scientists discard theories on finding disconfirming evidence. But Popper was describing ideally performed science; his intent was clearly normative. Kuhn’s work, said Heilbron (who doesn’t share my admiration for Feyerabend), was intended as normative only for historians of science, not for scientists. True, Kuhn felt that it was pointless to try to distinguish the “is” from the “ought” in science, but this does not mean he thought they were the same thing.

As with Kuhn’s use of paradigm, Feyerabend’s use of the term science risks equivocation. He drifts between methodology and institution to suit the needs of his argument. At times he seems to build a straw man of science in which science insists it creates facts as opposed to building models. Then again, on this matter (fact/truth vs. models as the claims of science) he seems to be more right about the science of 2019 than he was about the science of 1975.

While heavily indebted to Popper, Feyerabend, like Kuhn, grew hostile to Popper’s ideas of demarcation and falsification: “let us look at the standards of the Popperian school, which are still being taken seriously in the more backward regions of knowledge.” He eventually expanded his criticism of Popper’s idea of theory falsification to a categorical rejection of Popper’s demarcation theories and of Popper’s critical rationalism in general. Now from the perspective of half a century later, a good bit of the tension between Popper and both Feyerabend and Kuhn and between Kuhn and Feyerabend seems to have been largely semantic.

For me, Feyerabend seems most relevant today through his examination of science as a threat to democracy. He now seems right in ways that even he didn’t anticipate. He thought it a threat mostly in that science (as an institution) held complete control over what is deemed scientifically important for society. In contrast, people, as individuals or small competing groups, have historically chosen what counts as being socially valuable. In this sense science bullied the citizen, thought Feyerabend. Today I think we see a more extreme example of bullying, in the case of global warming for example, in which government and institutionalized scientists are deciding not only what is important as a scientific agenda but what is important as energy policy and social agenda. Likewise the role that neuroscience plays in primary education tends to get too much of the spotlight in the complex social issues of how education should be conducted. One recalls Lakatos’s concern about Kuhn’s confidence in the authority of “communities.” Lakatos had been imprisoned by Hungary’s Stalinist regime for revisionism. Through that experience he saw Kuhn’s “assent of the relevant community” as not much of a virtue if that community has excessive political power and demands that individual scientists subordinate their ideas to it.

As a tiny tribute to Feyerabend, about whom I’ve noted caution is due in removal of his quotes from their context, I’ll honor his provocative spirit by listing some of my favorite quotes, removed from context, to invite misinterpretation and misappropriation.

“The similarities between science and myth are indeed astonishing.”

“The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo’s doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism.”

“All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’.”

“Revolutions have transformed not only the practices their initiators wanted to change but the very principles by means of which… they carried out the change.”

“Kuhn’s masterpiece played a decisive role. It led to new ideas. Unfortunately it also led to lots of trash.”

“First-world science is one science among many.”

“Progress has always been achieved by probing well-entrenched and well-founded forms of life with unpopular and unfounded values. This is how man gradually freed himself from fear and from the tyranny of unexamined systems.”

“Research in large institutes is not guided by Truth and Reason but by the most rewarding fashion, and the great minds of today increasingly turn to where the money is — which means military matters.”

“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.”

“Without a constant misuse of language, there cannot be any discovery, any progress.”

__________________

Photos of Paul Feyerabend courtesy of Grazia Borrini-Feyerabend


Can Science Survive?

In my last post I ended with the question of whether science in the pure sense can withstand science in the corporate, institutional, and academic senses. Here’s a bit more on the matter.

Ronald Reagan, pandering to a church group in Dallas, famously said about evolution, “Well, it is a theory. It is a scientific theory only.” (George Bush, often “quoted” as saying this, did not.) Reagan was likely ignorant of the distinction between two uses of the word “theory.” On the street, a theory is an unsettled conjecture. In science a theory – gravitation for example – is a body of ideas that explains observations and makes predictions. Reagan’s statement fueled years of appeals to teach creationism in public schools, under titles like creation science and intelligent design. While the push for creation science is usually pinned on southern evangelicals, it was UC Berkeley law professor Phillip E. Johnson who brought us intelligent design.

Arkansas was a forerunner in mandating equal time for creation science. But its Act 590 of 1981 (Balanced Treatment for Creation-Science and Evolution-Science Act) was shut down a year later by McLean v. Arkansas Board of Education. Judge William Overton made philosophy of science proud with his set of demarcation criteria. Science, said Overton:

  • is guided by natural law
  • is explanatory by reference to natural law
  • is testable against the empirical world
  • holds tentative conclusions
  • is falsifiable

For earlier thoughts on each of Overton’s five points, see, respectively, Isaac Newton, Adelard of Bath, Francis Bacon, Thomas Huxley, and Karl Popper.

In the late 20th century, religious fundamentalists were just one facet of hostility toward science. Science was also under attack on the political and social fronts, as well as on an intellectual, or epistemic, front.

President Eisenhower, on leaving office in 1961, gave his famous “military industrial complex” speech warning of the “danger that public policy could itself become the captive of a scientific technological elite.” At about the same time the growing anti-establishment movements – perhaps centered around Vietnam war protests – vilified science for selling out to corrupt politicians, military leaders and corporations. The ethics of science and scientists were under attack.

Also at the same time, independently, an intellectual critique of science emerged claiming that scientific knowledge necessarily contained hidden values and judgments not based in either objective observation (see Francis Bacon) or logical deduction (see René Descartes). French philosophers and literary critics Michel Foucault and Jacques Derrida argued – nontrivially in my view – that objectivity and value-neutrality simply cannot exist; all knowledge has embedded ideology and cultural bias. Sociologists of science (the “strong program”) were quick to agree.

This intellectual opposition to the methodological validity of science, spurred by the political hostility to the content of science, ultimately erupted as the science wars of the 1990s. To many observers, two battles yielded a decisive victory for science against its critics. The first was the publication of Higher Superstition by Gross and Levitt in 1994. The second was a hoax in which Alan Sokal submitted a paper claiming that quantum gravity was a social construct, along with other postmodern nonsense, to a journal of cultural studies. After it was accepted and published, Sokal revealed the hoax and wrote a book denouncing sociology of science and postmodernism.

Sadly, Sokal’s book, while full of entertaining examples of the worst of postmodern critique of science, really defeats only the most feeble of science’s enemies, revealing a poor grasp of some of the subtler and more valid criticism of science. For example, the postmodernists’ point that experimentation is not exactly the same thing as observation has real consequences, something that many earlier scientists themselves – like Robert Boyle and John Herschel – had wrestled with. Likewise, Higher Superstition, in my view, falls far below what we expect from Gross and Levitt. They deal Bruno Latour a well-deserved thrashing for claiming that science is a completely irrational process, and for the metaphysical conceit of holding that his own ideas on scientific behavior are fact while scientists’ claims about nature are not. But beyond that, Gross and Levitt reveal surprisingly poor knowledge of history and philosophy of science. They think Feyerabend is anti-science, they grossly misread Rorty, and waste time on a lot of strawmen.

Following closely on the postmodern critique of science were the sociologists pursuing the social science of science. Their findings: it is not objectivity or method that delivers the outcome of science. In fact it is the interests of all scientists except social scientists that govern the output of scientific inquiry. This branch of Science and Technology Studies (STS), led by David Bloor at Edinburgh in the late 70s, overplayed both the underdetermination of theory by evidence and the concept of value-laden theories. These scientists also failed to see the irony of claiming a privileged position on the untenability of privileged positions in science. I.e., it is an absolute truth that there are no absolute truths.

While postmodern critique of science and facile politics in STS seem to be having a minor revival, the threats to real science from sociology, literary criticism and anthropology (I don’t mean that all sociology and anthropology are non-scientific) are small. But more subtle and possibly more ruinous threats to science may exist; and they come partly from within.

Modern threats to science seem more related to Eisenhower’s concerns than to the postmodernists. While Ike worried about the influence the US military had over corporations and universities (see the highly nuanced history of James Conant, Harvard president and chair of the National Defense Research Committee), his concern dealt not with the validity of scientific knowledge but with the influence of values and biases on both the subjects of research and on the conclusions reached. Science, when biased enough, becomes bad science, even when scientists don’t fudge the data.

Pharmaceutical research is the present poster child of biased science. Accusations take the form of claims that the makers of the blockbuster drugs Zantac and Tagamet knew that Helicobacter pylori – not stress and spicy food – caused ulcers, but concealed that knowledge to preserve sales. Analysis of those claims over the past twenty years shows them to be largely unsupported. But it seems naïve to deny that years of pharmaceutical companies’ mailings may have contributed to the premature dismissal by MDs and researchers of the possibility that bacteria could in fact thrive in the stomach’s acid environment. But while Big Pharma may have some tidying up to do, its opponents need to learn what a virus is and how vaccines work.

Pharmaceutical firms generally admit that bias – unconscious, and of the selection and confirmation sort (motivated reasoning) – is a problem. Amgen scientists recently tried to reproduce results considered landmarks in basic cancer research, in order to study why clinical trials in oncology have such a high failure rate. They reported in Nature that they were able to reproduce the original results in only six of 53 studies. A similar effort at Bayer reported that only about 25% of published preclinical studies could be reproduced. That the big players publish analyses of bias in their own field suggests that the concept of self-correction in science is at least somewhat valid, even in cut-throat corporate science.

Some see another source of bad pharmaceutical science in the almost religious adherence to the 5% (±1.96 sigma) definition of statistical significance, probably traceable to R.A. Fisher’s 1926 The Arrangement of Field Experiments. The 5% false-positive probability criterion is arbitrary, but it is institutionalized. It can be seen as a classic case of subjectivity being perceived as objectivity because of arbitrary precision. Repeat an experiment often enough and, sooner or later, one run will cross the significance threshold by chance alone. Pharma firms now aim to prevent such bias by participating in a registration process that requires researchers to publish findings, good, bad or inconclusive.
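The arbitrariness of the 5% threshold is easy to demonstrate. Here is a minimal simulation (the numbers are invented for illustration, not any firm’s actual protocol): run the same null experiment many times – two groups drawn from identical distributions, so any “effect” is pure noise – and test each run at the 5% level. Roughly one run in twenty comes up “significant” by chance.

```python
import random
import statistics

def experiment(n=50, seed=None):
    """One null experiment: two samples from the SAME distribution,
    compared with a large-sample z test on the difference of means."""
    rng = random.Random(seed)
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # "significant" at the 5% level, by chance alone

# Repeat the identical null experiment 2000 times and count false positives.
trials = 2000
hits = sum(experiment(seed=i) for i in range(trials))
print(f"false-positive rate: {hits / trials:.3f}")  # hovers near 0.05
```

Run any single experiment often enough and publish only the “significant” run, and the 5% criterion guarantees exactly the bias the registration process is meant to prevent.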

Academic research should take note. As is often reported, the dependence of tenure and academic prestige on publishing has taken a toll (“publish or perish”). Publishers like dramatic and conclusive findings, so there’s a strong incentive to publish impressive results – too strong. Competitive pressure on second-tier publishers leads to their publishing poor or even fraudulent study results. Those publishers select lax reviewers, incapable of or unwilling to dispute authors. Karl Popper’s falsification model of scientific behavior, in this scenario, is a poor match for actual behavior in science. The situation has led to hoaxes like Sokal’s, but within – rather than across – disciplines. Publication of the nonsensical “Fuzzy”, Homogeneous Configurations by Marge Simpson and Edna Krabappel (cartoon character names) in the Journal of Computational Intelligence and Electronic Systems in 2014 is a popular example. Following Alan Sokal’s line of argument, should we declare the discipline of computational intelligence to be pseudoscience on this evidence?

Note that here we’re really using Bruno Latour’s definition of science – what scientists and related parties do with a body of knowledge in a network, rather than simply the body of knowledge itself. Should scientists be held responsible for what corporations and politicians do with their knowledge? It’s complicated. When does flawed science become bad science? It’s hard to draw the line; but does that mean no line needs to be drawn?

Environmental science, I would argue, is some of the worst science passing for genuine these days. Most of it exists to fill political and ideological roles. The Bush administration pressured scientists to suppress communications on climate change and to remove the terms “global warming” and “climate change” from publications. In 2005 Rick Piltz resigned from the U.S. Climate Change Science Program claiming that Bush appointee Philip Cooney had personally altered US climate change documents to lessen the strength of their conclusions. In a later congressional hearing, Cooney confirmed having done this. Was this bad science, or just bad politics? Was it bad science for those whose conclusions had been altered not to blow the whistle?

The science of climate advocacy looks equally bad. Lack of scientific rigor in the IPCC is appalling – for reasons far deeper than the hockey stick debate. Given that the IPCC started with the assertion that climate change is anthropogenic and then sought confirming evidence, it is not surprising that the evidence it has accumulated supports the assertion. Compelling climate models, like that of Richard Muller at UC Berkeley, have since given strong support for anthropogenic warming. That supports the anthropogenic warming hypothesis, but it lends no support to the IPCC’s scientific practices. Unjustified belief, true or false, is not science.

Climate change advocates, many of whom are credentialed scientists, are particularly prone to mixing bad science with bad philosophy, as when evidence for anthropogenic warming is presented as confirming the hypothesis that wind and solar power will reverse global warming. Stanford’s Mark Jacobson, a pernicious proponent of such activism, does immeasurable damage to his own stated cause with his descent into the renewables fantasy.

Finally, both major climate factions stoop to tying their entire positions to the proposition that climate change has been measured (or not). That is, both sides are in implicit agreement that if no climate change has occurred, then the whole matter of anthropogenic climate-change risk can be put to bed. As a risk man observing the risk vector’s probability/severity axes – and as someone who buys fire insurance though he has a brick house – I think our science dollars might be better spent on mitigation efforts that stand a chance of being effective rather than on 1) winning a debate about temperature change in recent years, or 2) appeasing romantic ideologues with “alternative” energy schemes.
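The fire-insurance logic can be put in expected-value terms. A toy calculation – all numbers invented for illustration, none of them actual climate estimates – shows why a low-probability, high-severity risk can justify paying for mitigation regardless of who wins the temperature debate:

```python
# Hypothetical illustration of the insurance argument: compare expected
# losses with and without mitigation. Expected loss = what you pay up
# front + probability-weighted residual damage.

def expected_loss(p_event, severity, mitigation_cost=0.0, effectiveness=0.0):
    """Expected loss given an event probability, its severity, and an
    optional mitigation that reduces severity by `effectiveness`."""
    return mitigation_cost + p_event * severity * (1 - effectiveness)

# A 25% chance of a catastrophic loss of 400 units...
do_nothing = expected_loss(p_event=0.25, severity=400.0)
# ...versus paying 20 units for mitigation that halves the damage.
mitigate = expected_loss(p_event=0.25, severity=400.0,
                         mitigation_cost=20.0, effectiveness=0.5)

print(do_nothing)  # 100.0
print(mitigate)    # 70.0
```

The point of the sketch is that the comparison never requires settling whether the event has already begun – only its probability and severity, which is exactly how one prices fire insurance on a brick house.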

Science survived Abe Lincoln (rain follows the plow), Ronald Reagan (evolution just a theory) and George Bush (coercion of scientists). It will survive Barack Obama (persecution of deniers) and Jerry Brown and Al Gore (science vs. pronouncements). It will survive big pharma, cold fusion, superluminal neutrinos, Mark Jacobson, Brian Greene, and the Stanford propaganda machine. Science will survive bad science because bad science is part of science, and always has been. As Paul Feyerabend noted, Galileo routinely used propaganda, unfair rhetoric, and arguments he knew were invalid to advance his worldview.

Theory on which no evidence can bear is religion. Theory that is indifferent to evidence is often politics. Granting Bloor, for sake of argument, that all theory is value-laden, and granting Kuhn, for sake of argument, that all observation is theory-laden, science still seems to have an uncanny knack for getting the world right. Planes fly, quantum tunneling makes DVD players work, and vaccines prevent polio. The self-corrective nature of science appears to withstand cranks, frauds, presidents, CEOs, generals and professors. As Carl Sagan often said, science should withstand vigorous skepticism. Further, science requires skepticism and should welcome it, both from within and from irksome sociologists.


XKCD cartoon courtesy of xkcd.com

 


Great Philosophers Damned to Hell

April 1, 2015

My neighbor asked me if I thought anything new ever happened in philosophy, or whether, 2500 years after Socrates, all that could be worked out in philosophy had been wrapped up and shipped. Alfred North Whitehead came to mind; he wrote in Process and Reality that the entire European philosophical tradition is merely a series of footnotes to Plato. I don’t know quite what Whitehead meant by this, or for that matter, by the majority of his metaphysical ramblings. I’m no expert, but for my money most of what’s great in philosophy has happened in the last few centuries – including some real gems in the last few decades.

For me, ancient, eastern, and medieval philosophy is merely a preface to Hume. OK, a few of his predecessors deserve a nod – Peter Abelard, Adelard of Bath, and Francis Bacon. But really, David Hume was the first human honest enough to admit that we can’t really know much about anything worth knowing, and that our actions are born of custom, not reason. Hume threw a wrench into the works of causation and induction and stopped them cold. Hume could write clearly and concisely. Try his Treatise sometime.

Immanuel Kant, in an attempt to reconcile empiricism with rationalism, fought to rescue us from Hume’s skepticism and failed miserably. Kant, often a tad difficult to grasp (“transcendental idealism” actually can make sense once you get his vocabulary), succeeded in opposing every one of his own positions while paving the way for the great steaming heap of German philosophy that reeks to this day.

The core of that heap is, of course, the domain of GWF Hegel, which the more economical Schopenhauer called “pseudo-philosophy paralyzing all mental powers, stifling all real thinking.”

Don’t take my word (or Schopenhauer‘s) for it. Read Karl Marx’s Critique of Hegel’s Philosophy. On second thought, don’t. Just read Imre Lakatos’s critique of Marx’s critique of Hegel. Better yet, read Paul Feyerabend’s critique of Lakatos’s critique of Marx’s critique. Of Hegel. Now you’re getting the spirit of philosophy. For every philosopher there is an equal and opposite philosopher. For Kant, they were the same person. For Hegel, the opposite and its referent are both all substance and non-being. Or something like that.

Hegel set out to “make philosophy speak German” and succeeded in making German speak gibberish. Through great effort and remapping your vocabulary you can eventually understand Hegel, at which point you realize what an existential waste that effort has been. But not all of what Hegel wrote was gibberish; some of it was facile politics.

Hegel writes – in the most charitable of translations – that reason “is Substance, as well as Infinite Power; its own Infinite Material underlying all the natural and spiritual life which it originates, as also the Infinite Form, – that which sets this Material in motion.”

I side with the logical positivists, who, despite ultimately crashing into Karl Popper’s brick wall, had the noble cause of making philosophy work like science. The positivists, as seen in writings by A.J. Ayer and Hans Reichenbach, thought the words of Hegel simply did no intellectual work. Rudolf Carnap relentlessly mocked Heidegger’s “the nothing itself nothings.” It sounds better in the Nazi philosopher’s own German – “Das Nichts nichtet” – and Carnap could have been more sympathetic in his translation by using nihilates instead of nothings. The removal of a sentence from its context was unfair, as you can plainly see when it is returned to its native habitat:

In anxiety occurs a shrinking back before … which is surely not any sort of flight but rather a kind of bewildered calm. This “back before” takes its departure from the nothing. The nothing itself does not attract; it is essentially repelling. But this repulsion is itself as such a parting gesture toward beings that are submerging as a whole. This wholly repelling gesture toward beings that are in retreat as a whole, which is the action of the nothing that oppresses Dasein in anxiety, is the essence of the nothing: nihilation. It is neither an annihilation of beings nor does it spring from a negation. Nihilation will not submit to calculation in terms of annihilation and negation. The nothing itself nihilates.

Heidegger goes on like that for 150 pages.

The positivists found fault with philosophers who argued from their armchairs that Einstein could not have been right. Yes, they really did this; and not all of them opposed Einstein’s science just because it was Jewish. The philosophy of the positivists had some real intellectual heft, despite being wrong, more or less. They were consumed not only by causality and determinism, but by the quest for demarcation – the fine line between science and nonsense. They failed. Popper burst their bubble by pointing out that scientific theory selection relied more on the absence of disconfirming evidence than on the presence of confirming evidence. Positivism fell victim mainly to its own honest efforts. The insider Willard Van Orman Quine (like Popper) put a nail in positivism’s coffin by showing the distinction between analytic and synthetic statements to be false. Hilary Putnam, killing the now-dead horse, then showed the distinction between “observational” and “theoretical” to be meaningless. Finally, in 1960, Thomas Kuhn showed up in Berkeley with the bomb that the truth conditions for science do not stand independent of their paradigms. I think often and write occasionally on the highly misappropriated Kuhn. He was wrong in all his details and overall one of the rightest men who ever lived.

Before leaving logical positivism, I must mention another hero from its ranks, Carl Hempel. Hempel is best known, at least in scientific circles, for his wonderful illustration of Hume’s problem of induction known as the Raven Paradox: since “all ravens are black” is logically equivalent to “all non-black things are non-ravens,” observing a green apple would seem to confirm that all ravens are black.

But I digress. I mainly intended to say that philosophy for me really starts with Hume and some of his contemporaries, like Adam Smith, William Blackstone, Voltaire, Diderot, Moses Mendelssohn, d’Alembert, and Montesquieu.

And to say that 20th century philosophers have still been busy, and have broken new ground. As favorites I’ll cite Quine, Kuhn and Hempel, mentioned above, along with Ludwig Wittgenstein, Richard Rorty (late works in particular), Hannah Arendt, John Rawls (read about, don’t read – great thinker, tedious writer), Michel Foucault (despite his Hegelian tendencies), Charles Peirce, William James (writes better than his brother), Paul Feyerabend, 7th Circuit Judge Richard Posner, and the distinguished Simon Blackburn, with whom I’ll finish.

One of Thomas Kuhn’s more controversial concepts is that of incommensurability. He maintained that cross-paradigm argument is futile because members of opposing paradigms do not share a sufficiently common language in which to argue. At best, they lob their words across each other’s bows. This brings to mind a story told by Simon Blackburn at a talk I attended a few years back. It recalls Theodorus and Protagoras against Socrates on truth being absolute vs. relative – if you’re into that sort of thing. If not, it’s still good.

Blackburn said that Lord Jeremy Waldron was attending a think tank session on ethics at Princeton, out of obligation, not fondness for such sessions. As Blackburn recounted Waldron’s experience, Waldron sat on a forum in which representatives of the great religions gave presentations.

First the Buddhist talked of the corruption of life by desire, the eight-fold way, and the path of enlightenment, to which all the panelists said “Wow, terrific. If that works for you that’s great” and things of the like.

Then the Hindu holy man talked of the cycles of suffering and birth and rebirth, the teachings of Krishna and the way to release. And the panelists praised his conviction, applauded and cried “Wow, terrific – if it works for you that’s fabulous” and so on.

A Catholic priest then came to the podium, detailing the message of Christ, the promise of salvation, and the path to eternal life. The panel cheered at his great passion, applauded and cried, “Wow, terrific, if that works for you, great.”

And the priest pounded his fist on the podium and shouted, “No! Not a question of whether it works for me! This is the true word of the living God; and if you don’t believe it you’re all damned to Hell!”

The panel cheered and gave a standing ovation, saying: “Wow! Terrific! If that works for you that’s great!”
