Posts Tagged science

Bad Science, Broken Trust: Commentary on Pandemic Failure

In my three previous posts (1, 2, 3) on the Covid-19 response and statistical reasoning, I deliberately sidestepped a deeper, more uncomfortable truth that emerges from such analysis: that ideologically driven academic and institutional experts – credentialed, celebrated, and deeply embedded in systems of authority – played a central role in promoting flawed statistical narratives that served political agendas and personal advancement. Having defended my claims in two previous posts – from the perspective of a historian of science – I now feel justified in letting it rip. Bad science, bad statistics, and institutional arrogance directly shaped a public health disaster.

What we witnessed was not just error, but hubris weaponized by institutions. Self-serving ideologues – cloaked in the language of science – shaped policies that led, in no small part, to hundreds of thousands of preventable deaths. This was not a failure of data, but of science and integrity, and it demands a historical reckoning.

The Covid-19 pandemic exacted a devastating toll: a 13% global GDP collapse in Q2 2020, and a 12–15% spike in adolescent suicidal ideation, as reported by Nature Human Behaviour (2020) and JAMA Pediatrics (2021). These catastrophic outcomes – economic freefall and a mental health crisis – can’t be blamed on the pathogen. Its lethality was magnified by avoidable policy blunders rooted in statistical incompetence and institutional cowardice. Five years on, the silence from public health authorities is deafening. The opportunity to learn from these failures – and to prevent their repetition – is being squandered before our eyes.

One of the most glaring missteps was the uncritical use of raw case counts to steer public policy – a volatile metric, heavily distorted by shifting testing rates, as The Lancet (2021, cited earlier) highlighted. More robust measures like deaths per capita or infection fatality rates, advocated by Ioannidis (2020), were sidelined, seemingly for facile politics. The result: fear-driven lockdowns based on ephemeral, tangential data. The infamous “6-foot rule,” based on outdated droplet models, continued to dominate public messaging through 2020 and beyond – even though evidence (e.g., BMJ, 2021) solidly pointed to airborne transmission. This refusal to pivot toward reality delayed life-saving ventilation reforms and needlessly prolonged school closures, economic shutdowns, and the cascading psychological harm they inflicted.

At the risk of veering into anecdote, this example should not be lost to history: In 2020, a surfer was arrested off Malibu Beach and charged with violating the state’s stay-at-home order. As if he might catch or transmit Covid – alone, in the open air, on the windswept Pacific. No individual could possibly believe that posed a threat. It takes a society – its institutions, its culture, its politics – to manufacture collective stupidity on that scale.

The consequences of these reasoning failures were grave. And yet, astonishingly, there has been no comprehensive, transparent institutional reckoning. No systematic audits. No revised models. No meaningful reforms from the CDC, WHO, or major national agencies. Instead, we see a retrenchment: the same narratives, the same faces, and the same smug complacency. The refusal to account for aerosol dynamics, mental health trade-offs, or real-time data continues to compromise our preparedness for future crises. This is not just negligence. It is a betrayal of public trust.

If the past is not confronted, it will be repeated. We can’t afford another round of data-blind panic, policy overreach, and avoidable harm. What’s needed now is not just reflection but action: independent audits of pandemic responses, recalibrated risk models that incorporate full-spectrum health and social impacts, and a ruthless commitment to sound use of data over doctrine.

The suffering of 2020–2022 must mean something. If we want resilience next time, we must demand accountability this time. The era of unexamined expert authority must end – not to reject expertise – but to restore it to a foundation of integrity, humility, and empirical rigor.

It’s time to stop forgetting – and start building a public health framework worthy of the public it is supposed to serve.

___ ___ ___



Covid Response – Case Counts and Failures of Statistical Reasoning

In my previous post I defended three claims made in an earlier post about relative successes in statistics and statistical reasoning in the American Covid-19 response. This post gives support for three claims regarding misuse of statistics and poor statistical reasoning during the pandemic.

Misinterpretation of Test Results (4)
Early in the COVID-19 pandemic, many clinicians and media figures misunderstood diagnostic test accuracy, misreading PCR and antigen test results by overlooking pre-test probability. This caused false reassurance or unwarranted alarm, though some experts mitigated errors with Bayesian reasoning. This was precisely the type of mistake highlighted in the Harvard study decades earlier.

Polymerase chain reaction (PCR) tests, while considered the gold standard for detecting SARS-CoV-2, were known to have variable sensitivity (70–90%) depending on factors like sample quality, timing of testing relative to infection, and viral load. False negatives were a significant concern, particularly when clinicians or media interpreted a negative result as definitively ruling out infection without considering pre-test probability (the likelihood of disease based on symptoms, exposure, or prevalence). Similarly, antigen tests, which are less sensitive than PCR, were prone to false negatives, especially in low-prevalence settings or early/late stages of infection.

A 2020 article in Journal of General Internal Medicine noted that physicians often placed undue confidence in test results, minimizing clinical reasoning (e.g., pre-test probability) and deferring to imperfect tests. This was particularly problematic for PCR false negatives, which could lead to a false sense of security about infectivity.

A 2020 Nature Reviews Microbiology article reported that during the early pandemic, the rapid development of diagnostic tests led to implementation challenges, including misinterpretation of results due to insufficient consideration of pre-test probability. This was compounded by the lack of clinical validation for many tests at the time.

Media reports often oversimplified test results, presenting PCR or antigen tests as definitive without discussing limitations like sensitivity, specificity, or the role of pre-test probability. Even medical professionals struggled with Bayesian reasoning, leading to public confusion about test reliability.

Antigen tests, such as lateral flow tests, were less sensitive than PCR (pooled sensitivity of 64.2% in pediatric populations) but highly specific (99.1%). Their performance varied significantly with pre-test probability, yet early in the pandemic, they were sometimes used inappropriately in low-prevalence settings, leading to misinterpretations. In low-prevalence settings (e.g., 1% disease prevalence), a large share of the positives produced by a test with 99% specificity and 64% sensitivity will be false positives (i.e., the positive predictive value is low), yet media and some clinicians often reported positives as conclusive without contextualizing prevalence. Conversely, negative antigen tests were sometimes taken as proof of non-infectivity, despite high false-negative rates in early infection.
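To make the arithmetic concrete, here is a minimal Python sketch of how prevalence drives the positive predictive value of an antigen test. The sensitivity and specificity are the illustrative figures quoted above, not the properties of any particular commercial test.

    # Sketch: positive predictive value (PPV) of an antigen test.
    # Sensitivity/specificity are the illustrative values quoted in the text.
    def ppv(prevalence, sensitivity, specificity):
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    low = ppv(prevalence=0.01, sensitivity=0.64, specificity=0.99)   # ~39%
    high = ppv(prevalence=0.20, sensitivity=0.64, specificity=0.99)  # ~94%
    print(f"Share of positives that are FALSE at 1% prevalence: {1 - low:.0%}")
    print(f"Share of positives that are FALSE at 20% prevalence: {1 - high:.0%}")

The same test that is quite trustworthy in a symptomatic clinic population becomes mostly noise in mass screening of a low-prevalence population.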

False negatives in PCR tests were a significant issue, particularly when testing was done too early or late in the infection cycle. A 2020 study in Annals of Internal Medicine found that the false-negative rate of PCR tests varied by time since exposure, ranging from roughly 20% to 67% depending on the day of testing. Clinicians who relied solely on a negative PCR result without considering symptoms or exposure history often reassured patients they were not infected, potentially allowing transmission.

In low-prevalence settings, even highly specific tests like PCR (specificity ~99%) could produce false positives, especially with high cycle threshold (Ct) values indicating low viral loads. A 2020 study in Clinical Infectious Diseases found that only 15.6% of positive PCR results in low pre-test probability groups (e.g., asymptomatic screening) were confirmed by an alternate assay, suggesting a high false-positive rate. Media amplification of positive cases without context fueled public alarm, particularly during mass testing campaigns.

Antigen tests, while rapid, had lower sensitivity and were prone to false positives in low-prevalence settings. An oddly credible 2021 Guardian article noted that at a prevalence of 0.3% (1 in 340), a lateral flow test with 99.9% specificity could still yield a 5% false-positive rate among positives, causing unnecessary isolation or panic. In early 2020, widespread testing of asymptomatic individuals in low-prevalence areas led to false positives being reported as “new cases,” inflating perceived risk.

Many Covid professionals mitigated errors with Bayesian reasoning, using pre-test probability, test sensitivity, and specificity to calculate the post-test probability of disease. Experts who applied this approach were better equipped to interpret COVID-19 test results accurately, avoiding over-reliance on binary positive/negative outcomes.

Robert Wachter, MD, in a 2020 Medium article, explained Bayesian reasoning for COVID-19 testing, stressing that test results must be interpreted with pre-test probability. For example, a negative PCR in a patient with a 30% pre-test probability (based on symptoms and prevalence) still carried a significant risk of infection, guiding better clinical decisions. In Germany, mathematical models incorporating pre-test probability optimized PCR allocation, ensuring testing was targeted to high-risk groups.
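A minimal sketch of the update Wachter describes. The 30% pre-test probability is from his example; the 70% sensitivity and 99% specificity are my own assumptions, chosen from within the ranges cited earlier.

    # Sketch: post-test probability of infection after a NEGATIVE result.
    # Pre-test probability from Wachter's example; sensitivity and specificity
    # are assumed for illustration (70% / 99%).
    def p_infected_given_negative(pretest, sensitivity, specificity):
        missed = pretest * (1 - sensitivity)        # infected, test negative
        true_neg = (1 - pretest) * specificity      # uninfected, test negative
        return missed / (missed + true_neg)

    p = p_infected_given_negative(pretest=0.30, sensitivity=0.70, specificity=0.99)
    print(f"P(infected | negative PCR) ≈ {p:.0%}")  # roughly 11%, far from zero

A one-in-nine residual risk is not "ruled out," which is the whole point of interpreting the test against pre-test probability rather than as a binary verdict.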

Cases vs. Deaths (5)
One of the most persistent statistical missteps during the pandemic was the policy focus on case counts, devoid of context. Case numbers ballooned or dipped not only due to viral spread but due to shifts in testing volume, availability, and policies. Covid deaths per capita rather than case count would have served as a more stable measure of public health impact. Infection fatality rates would have been better still.

There was a persistent policy emphasis on cases alone. Throughout the COVID-19 pandemic, public health policies, such as lockdowns, mask mandates, and school closures, were often justified by rising case counts reported by agencies like the CDC, WHO, and national health departments. For example, in March 2020, the WHO’s situation reports emphasized confirmed cases as a primary metric, influencing global policy responses. In the U.S., states like California and New York tied reopening plans to case thresholds (e.g., California’s Blueprint for a Safer Economy, August 2020), prioritizing case numbers over other metrics. Over-reliance on case-based metrics was documented by Trisha Greenhalgh in Lancet (Ten scientific reasons in support of airborne transmission…).

Case counts, without context, were frequently reported without contextualizing factors like testing rates or demographics, leading to misinterpretations. A 2021 BMJ article criticized the overreliance on case counts, noting they were used to “justify public health measures” despite their variability, supporting the claim of a statistical misstep. Media headlines, such as “U.S. Surpasses 100,000 Daily Cases” (CNN, November 4, 2020), amplified case counts, often without clarifying testing changes, fostering fear-driven policy decisions.

Case counts were directly tied to testing volume, which varied widely. In the U.S., testing increased from ~100,000 daily tests in April 2020 to over 2 million by November 2020 (CDC data). Surges in cases often coincided with testing ramps, e.g., the U.S. case peak in July 2020 followed expanded testing in Florida and Texas. Testing access was biased (in the statistical sense). Widespread testing, including asymptomatic screening, inflated counts. Policies like mandatory testing for hospital admissions or travel (e.g., New York’s travel testing mandate, November 2020) further skewed numbers. A 2020 Nature study highlighted that case counts were “heavily influenced by testing capacity,” with countries like South Korea detecting more cases due to aggressive testing, not necessarily higher spread. This supports the claim that testing volume drove case fluctuations beyond viral spread (J Peto, Nature – 2020).
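A toy illustration of the point, under the simplifying assumption that test positivity stays roughly constant; the numbers are made up for the example, not drawn from CDC data.

    # Toy example: reported cases ≈ tests administered × test positivity.
    # If positivity is roughly constant, more testing means more "cases"
    # even with unchanged transmission. All numbers are illustrative.
    positivity = 0.08
    for daily_tests in (100_000, 500_000, 2_000_000):
        reported = int(daily_tests * positivity)
        print(f"{daily_tests:>9,} tests/day -> {reported:>7,} reported cases/day")

A twenty-fold increase in testing produces a twenty-fold increase in "cases" without any change in the underlying epidemic, which is why raw counts were such a treacherous basis for policy triggers.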

Early in the pandemic, testing was limited due to supply chain issues and regulatory delays. For example, in March 2020, the U.S. conducted fewer than 10,000 tests daily due to shortages of reagents and swabs, underreporting cases (Johns Hopkins data). This artificially suppressed case counts. A 2021 Lancet article (R Horton) noted that “changes in testing availability distorted case trends,” with low availability early on masking true spread and later increases detecting more asymptomatic cases, aligning with the claim.

Testing policies, such as screening asymptomatic populations or requiring tests for specific activities, directly impacted case counts. For example, in China, mass testing of entire cities like Wuhan in May 2020 identified thousands of cases, many asymptomatic, inflating counts. In contrast, restrictive policies early on (e.g., U.S. CDC’s initial criteria limiting tests to symptomatic travelers, February 2020) suppressed case detection.

In the U.S., college campuses implementing mandatory weekly testing in fall 2020 reported case spikes, often driven by asymptomatic positives (e.g., University of Wisconsin’s 3,000+ cases, September 2020). A 2020 Science study (Assessment of SARS-CoV-2 screening) emphasized that “testing policy changes, such as expanded screening, directly alter reported case numbers,” supporting the claim that policy shifts drove case variability.

Deaths per capita, calculated as total Covid-19 deaths divided by population, are less sensitive to testing variations than case counts. For example, Sweden’s deaths per capita (1,437 per million by December 2020, Our World in Data) provided a clearer picture of impact than its case counts, which fluctuated with testing policies. Belgium and the U.K. used deaths per capita to compare regional impacts, guiding resource allocation. A 2021 JAMA study argued deaths per capita were a “more reliable indicator” of pandemic severity, as they reflected severe outcomes less influenced by testing artifacts. Death reporting had gross inconsistencies (e.g., defining “Covid-19 death”), but it was more standardized than case detection.

The infection fatality rate (IFR) reports the proportion of infections resulting in death, making it less prone to testing biases. A 2020 Bulletin of the WHO meta-analysis estimated a global IFR of ~0.6% (range 0.3–1.0%), varying by age and region. IFR gave a truer measure of lethality. Seroprevalence studies in New York City (April 2020) estimated an IFR of ~0.7%, offering insight into true mortality risk compared to case fatality rates (CFR), which were inflated by low testing (e.g., CFR ~6% in the U.S., March 2020).
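A small sketch of why the CFR overstated lethality when testing was scarce. The numbers are illustrative, chosen only to mirror the ~6% CFR versus ~0.7% IFR contrast described above; they are not actual counts.

    # Sketch: CFR vs IFR under limited testing. All numbers are illustrative.
    deaths = 6_000
    true_infections = 900_000                        # e.g., from a seroprevalence estimate
    confirmed_cases = int(true_infections * 0.10)    # only 10% of infections ever confirmed

    print(f"CFR = deaths / confirmed cases ≈ {deaths / confirmed_cases:.1%}")   # ~6.7%
    print(f"IFR = deaths / all infections  ≈ {deaths / true_infections:.2%}")   # ~0.67%

Same deaths, same infections; the ten-fold gap between the two ratios is entirely an artifact of how many infections testing happened to catch.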

[Figure: US Covid cases vs. deaths (vertical scales differ by 250×), from WHO data (cases, deaths), 2020–2023]

Shifting Guidelines and Aerosol Transmission (6)
The “6-foot rule” was based on outdated models of droplet transmission. When evidence of aerosol spread emerged, guidance failed to adapt. Critics pointed out the statistical conservatism in risk modeling and its impact on mental health and the economy. Institutional inertia and politics prevented vital course corrections.

The 6-foot (or 2-meter) social distancing guideline, widely adopted by the CDC and WHO in early 2020, stemmed from historical models of respiratory disease transmission, particularly the 1930s work of William F. Wells on tuberculosis. Wells’ droplet model posited that large respiratory droplets fall within 1–2 meters, implying that maintaining this distance reduces transmission risk. The CDC’s March 2020 guidance explicitly recommended “at least 6 feet” based on this model, assuming most SARS-CoV-2 transmission occurred via droplets.

The droplet model was developed before modern understanding of aerosol dynamics. It assumed that only large droplets (>100 μm) were significant, ignoring smaller aerosols (<5–10 μm) that can travel farther and remain airborne longer. A 2020 Nature article noted that the 6-foot rule was rooted in “decades-old assumptions” about droplet size, which did not account for SARS-CoV-2’s aerosol properties, such as its ability to spread in poorly ventilated spaces beyond 6 feet.
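A back-of-envelope illustration of why the size cutoff matters, my own sketch rather than anything from the cited articles: Stokes-law settling times for a 100 μm droplet versus a 5 μm aerosol released at roughly head height, assuming spherical water droplets in still air.

    # Back-of-envelope Stokes-law settling in still air. Assumes spherical
    # water droplets and ignores evaporation and air currents, so this is an
    # order-of-magnitude sketch only.
    RHO_WATER = 1000.0   # kg/m^3
    MU_AIR = 1.8e-5      # Pa·s, dynamic viscosity of air
    G = 9.81             # m/s^2

    def seconds_to_fall(diameter_m, height_m=1.5):
        terminal_velocity = RHO_WATER * G * diameter_m**2 / (18 * MU_AIR)
        return height_m / terminal_velocity

    for d_um in (100, 5):
        print(f"{d_um:>4} µm particle falls 1.5 m in ~{seconds_to_fall(d_um * 1e-6):,.0f} s")

On these assumptions a 100 µm droplet lands within a few seconds, while a 5 µm aerosol stays aloft for over half an hour and drifts with room air, which is why ventilation matters more than a fixed distance for the smaller particles.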

Studies like a 2020 Clinical Infectious Diseases commentary by Morawska and Milton argued that the 6-foot rule was inadequate for aerosolized viruses, as aerosols could travel tens of meters in certain conditions (e.g., indoor settings with low air exchange). Real-world examples, such as choir outbreaks (e.g., Skagit Valley, March 2020, where 53 of 61 singers were infected despite spacing), highlighted transmission beyond 6 feet, undermining the droplet-only model.

The WHO initially downplayed aerosol transmission, stating in March 2020 that COVID-19 was “not airborne” except in specific medical procedures (e.g., intubation). After the July 2020 letter, the WHO updated its guidance on July 9, 2020, to acknowledge “emerging evidence” of airborne spread but maintained droplet-focused measures (e.g., 1-meter distancing) without emphasizing ventilation or masks for aerosols. A 2021 BMJ article criticized the WHO for “slow and risk-averse” updates, noting that full acknowledgment of aerosol spread was delayed until May 2021.

The CDC was similarly slow to update its guidance. In May 2020, it emphasized droplet transmission and 6-foot distancing. A brief September 2020 update mentioning “small particles” was retracted days later, reportedly due to internal disagreement. The CDC fully updated its guidance to include aerosol transmission in May 2021, recommending improved ventilation, but retained the 6-foot rule in many contexts (e.g., schools) until 2022. Despite aerosol evidence, the 6-foot rule remained a cornerstone of policies. For example, U.S. schools enforced 6-foot desk spacing in 2020–2021, delaying reopenings despite studies (e.g., a 2021 Clinical Infectious Diseases study) finding no meaningful difference in case rates between 3-foot and 6-foot spacing.

Early CDC and WHO models overestimated droplet transmission risks while underestimating aerosol spread, leading to rigid distancing rules. A 2021 PNAS article by Prather et al. criticized these models as “overly conservative,” noting they ignored aerosol physics and real-world data showing low outdoor transmission risks. Risk models overemphasized close-contact droplet spread, neglecting long-range aerosol risks in indoor settings. John Ioannidis, in a 2020 European Journal of Clinical Investigation commentary, criticized the “precautionary principle” in modeling, which prioritized avoiding any risk over data-driven adjustments, leading to policies like prolonged school closures based on conservative assumptions about transmission.

Risk models rarely incorporated Bayesian updates with new data, specifically low transmission in well-ventilated spaces. A 2020 Nature commentary by Tang et al. noted that models failed to adjust for aerosol decay rates or ventilation, overestimating risks in outdoor settings while underestimating them indoors.

Researchers and public figures criticized prolonged social distancing and lockdowns, driven by conservative risk models, for exacerbating mental health issues. A 2021 Lancet Psychiatry study reported a 25% global increase in anxiety and depression in 2020, attributing it to isolation from distancing measures. Jay Bhattacharya, co-author of the Great Barrington Declaration, argued in 2020 that rigid distancing rules, like the 6-foot mandate, contributed to social isolation without proportional benefits.

Tragically, a 2021 JAMA Pediatrics study concluded that Covid school closures increased adolescent suicidal ideation by 12–15%. Economists and policy analysts, such as those at the American Institute for Economic Research (AIER), criticized the economic fallout of distancing policies. The 6-foot rule led to capacity restrictions in businesses (e.g., restaurants, retail), contributing to economic losses. A 2020 Nature Human Behaviour study estimated a 13% global GDP decline in Q2 2020 due to lockdowns and distancing measures.

Institutional inertia and political agendas prevented course corrections, such as prioritizing ventilation over rigid distancing. The WHO’s delay in acknowledging aerosols was attributed to political sensitivities. A 2020 Nature article (Lewis) reported that WHO advisors faced pressure to align with member states’ policies, slowing updates.

Next post, I’ll offer commentary on Covid policy from the perspective of a historian of science.



Statistical Reasoning in Healthcare: Lessons from Covid-19

For centuries, medicine has navigated the tension between science and uncertainty. The Covid pandemic exposed this dynamic vividly, revealing both the limits and possibilities of statistical reasoning. From diagnostic errors to vaccine communication, the crisis showed that statistics is not just a technical skill but a philosophical challenge, shaping what counts as knowledge, how certainty is conveyed, and who society trusts.

Historical Blind Spot

Medicine’s struggle with uncertainty has deep roots. In antiquity, Galen’s reliance on reasoning over empirical testing set a precedent for overconfidence insulated by circular logic. If his treatments failed, it was because the patient was incurable. Enlightenment physicians, like those who bled George Washington to death, perpetuated this resistance to scrutiny. Voltaire wrote, “The art of medicine consists in amusing the patient while nature cures the disease.” The scientific revolution and the Enlightenment inverted Galen’s hierarchy, yet the importance of that reversal is often neglected, even by practitioners. Even in the 20th century, pioneers like Ernest Codman faced ostracism for advocating outcome tracking, highlighting a medical culture that prized prestige over evidence. While evidence-based practice has since gained traction, a statistical blind spot persists, rooted in training and tradition.

The Statistical Challenge

Physicians often struggle with probabilistic reasoning, as shown in a 1978 Harvard study where only 18% correctly applied Bayes’ Theorem to a diagnostic test scenario (a disease with 1/1,000 prevalence and a 5% false positive rate yields a ~2% chance of disease given a positive test). A 2013 follow-up showed marginal improvement (23% correct). Medical education, which prioritizes biochemistry over probability, is partly to blame. Abusive lawsuits, cultural pressures for decisiveness, and patient demands for certainty further discourage embracing doubt, as Daniel Kahneman’s work on overconfidence suggests.
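For readers who want the arithmetic behind the Harvard scenario, here is a minimal Python sketch, assuming (as the original problem implicitly did) that the test never misses a true case.

    # The 1978 Harvard problem: prevalence 1/1,000, false-positive rate 5%,
    # and (implicitly) a test that never misses a true case.
    prevalence = 1 / 1000
    false_positive_rate = 0.05
    sensitivity = 1.0

    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    print(f"P(disease | positive) ≈ {true_pos / (true_pos + false_pos):.1%}")  # ~2.0%

Most respondents answered 95%, anchoring on the false-positive rate rather than the base rate; the correct answer is about 2%.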

Neil Ferguson and the Authority of Statistical Models

Epidemiologist Neil Ferguson and his team at Imperial College London produced a model in March 2020 predicting up to 500,000 UK deaths without intervention. The US figure could top 2 million. These weren’t forecasts in the strict sense but scenario models, conditional on various assumptions about disease spread and response.

Ferguson’s model was extraordinarily influential, shifting the UK and US from containment to lockdown strategies. It also drew criticism for opaque code, unverified assumptions, and the sheer weight of its political influence. His eventual resignation from the UK’s Scientific Advisory Group for Emergencies (SAGE) over a personal lockdown violation further politicized the science.

From the perspective of history of science, Ferguson’s case raises critical questions: When is a model scientific enough to guide policy? How do we weigh expert uncertainty under crisis? Ferguson’s case shows that modeling straddles a line between science and advocacy. It is, in Kuhnian terms, value-laden theory.

The Pandemic as a Pedagogical Mirror

The pandemic was a crucible for statistical reasoning. Successes included the clear communication of mRNA vaccine efficacy (95% relative risk reduction) and data-driven ICU triage using the SOFA score, though both had limitations. Failures were stark: clinicians misread PCR test results by ignoring pre-test probability, echoing the Harvard study’s findings, while policymakers fixated on case counts over deaths per capita. The “6-foot rule,” based on outdated droplet models, persisted despite disconfirming evidence, reflecting resistance to updating models, inability to apply statistical insights, and institutional inertia. Specifics of these issues are revealing.

Mostly Positive Examples:

  • Risk Communication in Vaccine Trials (1)
    The early mRNA vaccine announcements in 2020 offered clear statistical framing by emphasizing a 95% relative risk reduction in symptomatic COVID-19 for vaccinated individuals compared to placebo, sidelining raw case counts for a punchy headline. While clearer than many public health campaigns, this focus omitted absolute risk reduction and uncertainties about asymptomatic spread, falling short of the full precision needed to avoid misinterpretation (a numerical sketch of the relative-versus-absolute gap follows this list).

  • Clinical Triage via Quantitative Models (2)
    During peak ICU shortages, hospitals adopted the SOFA score, originally a tool for assessing organ dysfunction, to guide resource allocation with a semi-objective, data-driven approach. While an improvement over ad hoc clinical judgment, SOFA faced challenges like inconsistent application and biases that disadvantaged older or chronically ill patients, limiting its ability to achieve fully equitable triage.

  • Wastewater Epidemiology (3)
    Public health researchers used viral RNA in wastewater to monitor community spread, reducing the sampling biases of clinical testing. This statistical surveillance, conducted outside clinics, offered high public health relevance but faced biases and interpretive challenges that tempered its precision.
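
To illustrate the vaccine-trial bullet above: a minimal sketch of the gap between relative and absolute risk reduction, using a hypothetical two-arm trial sized roughly like the 2020 mRNA trials. The specific counts are assumptions for illustration, not the actual trial data.

    # Hypothetical two-arm trial; counts are illustrative, not the actual data.
    placebo_n, placebo_cases = 18_000, 160
    vaccine_n, vaccine_cases = 18_000, 8

    risk_placebo = placebo_cases / placebo_n      # ≈ 0.89% over the trial window
    risk_vaccine = vaccine_cases / vaccine_n      # ≈ 0.04%

    print(f"Relative risk reduction: {1 - risk_vaccine / risk_placebo:.0%}")   # ~95%
    print(f"Absolute risk reduction: {risk_placebo - risk_vaccine:.2%}")       # <1 point

Both figures are correct; they answer different questions, which is why headlining the relative number alone invited misreading.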

Mostly Negative Examples:

  • Misinterpretation of Test Results (4)
    Early in the COVID-19 pandemic, many clinicians and media figures misunderstood diagnostic test accuracy, misreading PCR and antigen test results by overlooking pre-test probability. This caused false reassurance or unwarranted alarm, though some experts mitigated errors with Bayesian reasoning. This was precisely the type of mistake highlighted in the Harvard study decades earlier.

  • Cases vs. Deaths (5)
    One of the most persistent statistical missteps during the pandemic was the policy focus on case counts, devoid of context. Case numbers ballooned or dipped not only due to viral spread but due to shifts in testing volume, availability, and policies. COVID deaths per capita rather than case count would have served as a more stable measure of public health impact. Infection fatality rates would have been better still.

  • Shifting Guidelines and Aerosol Transmission (6)
    The “6-foot rule” was based on outdated models of droplet transmission. When evidence of aerosol spread emerged, guidance failed to adapt. Critics pointed out the statistical conservatism in risk modeling and its impact on mental health and the economy. Institutional inertia and politics prevented vital course corrections.

(I’ll defend these six examples in another post.)

A Philosophical Reckoning

Statistical reasoning is not just a mathematical tool – it’s a window into how science progresses, how it builds trust, and why it enjoys a special epistemic status. In Kuhnian terms, the pandemic exposed the fragility of our current normal science. We should expect methodological chaos and pluralism within medical knowledge-making. Science during COVID-19 was messy, iterative, and often uncertain – and that’s in some ways just how science works.

This doesn’t excuse failures in statistical reasoning. It suggests that training in medicine should not only include formal biostatistics, but also an eye toward history of science – so future clinicians understand the ways that doubt, revision, and context are intrinsic to knowledge.

A Path Forward

Medical education must evolve. First, integrate Bayesian philosophy into clinical training, using relatable case studies to teach probabilistic thinking. Second, foster epistemic humility, framing uncertainty as a strength rather than a flaw. Third, incorporate the history of science – figures like Codman and Cochrane – to contextualize medicine’s empirical evolution. These steps can equip physicians to navigate uncertainty and communicate it effectively.

Conclusion

Covid was a lesson in the fragility and potential of statistical reasoning. It revealed medicine’s statistical struggles while highlighting its capacity for progress. By training physicians to think probabilistically, embrace doubt, and learn from history, medicine can better manage uncertainty – not as a liability, but as a cornerstone of responsible science. As John Heilbron might say, medicine’s future depends not only on better data – but on better historical memory, and the nerve to rethink what counts as knowledge.


______

All who drink of this treatment recover in a short time, except those whom it does not help, all of whom die. It is obvious, therefore, that it fails only in incurable cases. – Galen



Extraordinary Popular Miscarriages of Science, Part 6 – String Theory

Introduction: A Historical Lens on String Theory

In 2006, I met John Heilbron, widely credited with turning the history of science from an emerging idea into a professional academic discipline. While James Conant and Thomas Kuhn laid the intellectual groundwork, it was Heilbron who helped build the institutions and frameworks that gave the field its shape. Through John I came to see that the history of science is not about names and dates – it’s about how scientific ideas develop, and why. It explores how science is both shaped by and shapes its cultural, social, and philosophical contexts. Science progresses not in isolation but as part of a larger human story.

The “discovery” of oxygen illustrates this beautifully. In the 18th century, Joseph Priestley, working within the phlogiston theory, isolated a gas he called “dephlogisticated air.” Antoine Lavoisier, using a different conceptual lens, reinterpreted it as a new element – oxygen – ushering in modern chemistry. This was not just a change in data, but in worldview.

When I met John, Lee Smolin’s The Trouble with Physics had just been published. Smolin, a physicist, critiques string theory not from outside science but from within its theoretical tensions. Smolin’s concerns echoed what I was learning from the history of science: that scientific revolutions often involve institutional inertia, conceptual blind spots, and sociopolitical entanglements.

My interest in string theory wasn’t about the physics. It became a test case for studying how scientific authority is built, challenged, and sustained. What follows is a distillation of 18 years of notes – string theory seen not from the lab bench, but from a historian’s desk.

A Brief History of String Theory

Despite its name, string theory is more accurately described as a theoretical framework – a collection of ideas that might one day lead to testable scientific theories. This alone is not a mark against it; many scientific developments begin as frameworks. Whether we call it a theory or a framework, it remains subject to a crucial question: does it offer useful models or testable predictions – or is it likely to in the foreseeable future?

String theory originated as an attempt to understand the strong nuclear force. In 1968, Gabriele Veneziano introduced a mathematical formula – the Veneziano amplitude – to describe the scattering of strongly interacting particles such as protons and neutrons. In 1971, Pierre Ramond incorporated supersymmetry into this approach, giving rise to superstrings that could account for both fermions and bosons. In 1974, Joël Scherk and John Schwarz discovered that the theory predicted a massless spin-2 particle with the properties of the hypothetical graviton. This led them to propose string theory not as a theory of the strong force, but as a potential theory of quantum gravity – a candidate “theory of everything.”

Around the same time, however, quantum chromodynamics (QCD) successfully explained the strong force via quarks and gluons, rendering the original goal of string theory obsolete. Interest in string theory waned, especially given its dependence on unobservable extra dimensions and lack of empirical confirmation.

That changed in 1984 when Michael Green and John Schwarz demonstrated that superstring theory could be anomaly-free in ten dimensions, reviving interest in its potential to unify all fundamental forces and particles. Researchers soon identified five mathematically consistent versions of superstring theory.

To reconcile ten-dimensional theory with the four-dimensional spacetime we observe, physicists proposed that the extra six dimensions are “compactified” into extremely small, curled-up spaces – typically represented as Calabi-Yau manifolds. This compactification allegedly explains why we don’t observe the extra dimensions.

In 1995, Edward Witten introduced M-theory, showing that the five superstring theories were different limits of a single 11-dimensional theory. By the early 2000s, researchers like Leonard Susskind and Shamit Kachru began exploring the so-called “string landscape” – a space of perhaps 10^500 (1 followed by 500 zeros) possible vacuum states, each corresponding to a different compactification scheme. This introduced serious concerns about underdetermination – the idea that available empirical evidence cannot determine which among many competing theories is correct.

Compactification introduces its own set of philosophical problems. Critics Lee Smolin and Peter Woit argue that compactification is not a prediction but a speculative rationalization: a move designed to save a theory rather than derive consequences from it. The enormous number of possible compactifications (each yielding different physics) makes string theory’s predictive power virtually nonexistent. The related challenge of moduli stabilization – specifying the size and shape of the compact dimensions – remains unresolved.

Despite these issues, string theory has influenced fields beyond high-energy physics. It has informed work in cosmology (e.g., inflation and the cosmic microwave background), condensed matter physics, and mathematics (notably algebraic geometry and topology). How deep and productive these connections run is difficult to assess without domain-specific expertise that I don’t have. String theory has, in any case, produced impressive mathematics. But mathematical fertility is not the same as scientific validity.

The Landscape Problem

Perhaps the most formidable challenge string theory faces is the landscape problem: the theory allows for an enormous number of solutions – on the order of 10^500. Each solution represents a possible universe, or “vacuum,” with its own physical constants and laws.

Why so many possibilities? The extra six dimensions required by string theory can be compactified in myriad ways. Each compactification, combined with possible energy configurations (called fluxes), gives rise to a distinct vacuum. This extreme flexibility means string theory can, in principle, accommodate nearly any observation. But this comes at the cost of predictive power.

Critics argue that if theorists can forever adjust the theory to match observations by choosing the right vacuum, the theory becomes unfalsifiable. On this view, string theory looks more like metaphysics than physics.

Some theorists respond by embracing the multiverse interpretation: all these vacua are real, and our universe is just one among many. The specific conditions we observe are then attributed to anthropic selection – we could only observe a universe that permits life like us. This view aligns with certain cosmological theories, such as eternal inflation, in which different regions of space settle into different vacua. But eternal inflation can exist independent of string theory, and none of this has been experimentally confirmed.

The Problem of Dominance

Since the 1980s, string theory has become a dominant force in theoretical physics. Major research groups at Harvard, Princeton, and Stanford focus heavily on it. Funding and institutional prestige have followed. Prominent figures like Brian Greene have elevated its public profile, helping transform it into both a scientific and cultural phenomenon.

This dominance raises concerns. Critics such as Smolin and Woit argue that string theory has crowded out alternative approaches like loop quantum gravity or causal dynamical triangulations. These alternatives receive less funding and institutional support, despite offering potentially fruitful lines of inquiry.

In The Trouble with Physics, Smolin describes a research culture in which dissent is subtly discouraged and young physicists feel pressure to align with the mainstream. He worries that this suppresses creativity and slows progress.

Estimates suggest that between 1,000 and 5,000 researchers work on string theory globally – a significant share of theoretical physics resources. Reliable numbers are hard to pin down.

Defenders of string theory argue that it has earned its prominence. They note that theoretical work is relatively inexpensive compared to experimental research, and that string theory remains the most developed candidate for unification. Still, the issue of how science sets its priorities – how it chooses what to fund, pursue, and elevate – remains contentious.

Wolfgang Lerche of CERN once called string theory “the Stanford propaganda machine working at its fullest.” As with climate science, 97% of string theorists agree that they don’t want to be defunded.

Thomas Kuhn’s Perspective

The logical positivists and Karl Popper would almost certainly dismiss string theory as unscientific due to its lack of empirical testability and falsifiability – core criteria in their respective philosophies of science. Thomas Kuhn would offer a more nuanced interpretation. He wouldn’t label string theory unscientific outright, but would express concern over its dominance and the marginalization of alternative approaches. In Kuhn’s framework, such conditions resemble the entrenchment of a paradigm during periods of normal science, potentially at the expense of innovation.

Some argue that string theory fits Kuhn’s model of a new paradigm, one that seeks to unify quantum mechanics and general relativity – two pillars of modern physics that remain fundamentally incompatible at high energies. Yet string theory has not brought about a Kuhnian revolution. It has not displaced existing paradigms, and its mathematical formalism is often incommensurable with traditional particle physics. From a Kuhnian perspective, the landscape problem may be seen as a growing accumulation of anomalies. But a paradigm shift requires a viable alternative – and none has yet emerged.

Lakatos and the Degenerating Research Program

Imre Lakatos offered a different lens, seeing science as a series of research programs characterized by a “hard core” of central assumptions and a “protective belt” of auxiliary hypotheses. A program is progressive if it predicts novel facts; it is degenerating if it resorts to ad hoc modifications to preserve the core.

For Lakatos, string theory’s hard core would be the idea that all particles are vibrating strings and that the theory unifies all fundamental forces. The protective belt would include compactification schemes, flux choices, and moduli stabilization – all adjusted to fit observations.

Critics like Sabine Hossenfelder argue that string theory is a degenerating research program: it absorbs anomalies without generating new, testable predictions. Others note that it is progressive in the Lakatosian sense because it has led to advances in mathematics and provided insights into quantum gravity. Historians of science are divided. Johansson and Matsubara (2011) argue that Lakatos would likely judge it degenerating; Cristin Chall (2019) offers a compelling counterpoint.

Perhaps string theory is progressive in mathematics but degenerating in physics.

The Feyerabend Bomb

Paul Feyerabend, whom Lee Smolin knew from his time at Harvard, was the iconoclast of 20th-century philosophy of science. Feyerabend would likely have dismissed string theory as a dogmatic, aesthetic fantasy. He might write something like:

“String theory dazzles with equations and lulls physics into a trance. It’s a mathematical cathedral built in the sky, a triumph of elegance over experience. Science flourishes in rebellion. Fund the heretics.”

Even if this caricature overshoots, Feyerabend’s tools offer a powerful critique:

  1. Untestability: String theory’s predictions remain out of reach. Its core claims – extra dimensions, compactification, vibrational modes – cannot be tested with current or even foreseeable technology. Feyerabend challenged the privileging of untested theories (e.g., Copernicanism in its early days) over empirically grounded alternatives.

  2. Monopoly and suppression: String theory dominates intellectual and institutional space, crowding out alternatives. Eric Weinstein recently said, in Feyerabendian tones, “its dominance is unjustified and has resulted in a culture that has stifled critique, alternative views, and ultimately has damaged theoretical physics at a catastrophic level.”

  3. Methodological rigidity: Progress in string theory is often judged by mathematical consistency rather than by empirical verification – an approach reminiscent of scholasticism. Feyerabend would point to Johannes Kepler’s early attempt to explain planetary orbits using a purely geometric model based on the five Platonic solids. Kepler devoted 17 years to this elegant framework before abandoning it when observational data proved it wrong.

  4. Sociocultural dynamics: The dominance of string theory stems less from empirical success than from the influence and charisma of prominent advocates. Figures like Brian Greene, with their public appeal and institutional clout, help secure funding and shape the narrative – effectively sustaining the theory’s privileged position within the field.

  5. Epistemological overreach: The quest for a “theory of everything” may be misguided. Feyerabend would favor many smaller, diverse theories over a single grand narrative.

Historical Comparisons

Proponents note that other landmark theories emerged from mathematics well before their experimental confirmation, and they compare string theory to such historical cases. Examples include:

  1. Planet Neptune: Predicted by Urbain Le Verrier based on irregularities in Uranus’s orbit, observed in 1846.
  2. General Relativity: Einstein predicted the bending of light by gravity in 1915, confirmed by Arthur Eddington’s 1919 solar eclipse measurements.
  3. Higgs Boson: Predicted by the Standard Model in the 1960s, observed at the Large Hadron Collider in 2012.
  4. Black Holes: Predicted by general relativity, first direct evidence from gravitational waves observed in 2015.
  5. Cosmic Microwave Background: Predicted by Alpher and Herman in 1948 from Big Bang cosmology, discovered in 1965.
  6. Gravitational Waves: Predicted by general relativity, detected in 2015 by the Laser Interferometer Gravitational-Wave Observatory (LIGO).

But these examples differ in kind. Their predictions were always testable in principle and ultimately tested. String theory, in contrast, operates at the Planck scale (~10^19 GeV), far beyond what current or foreseeable experiments can reach.

Special Concern Over Compactification

A concern I have not seen discussed elsewhere – even among critics like Smolin or Woit – is the epistemological status of compactification itself. Would the idea ever have arisen apart from the need to reconcile string theory’s ten dimensions with the four-dimensional spacetime we experience?

Compactification appears ad hoc, lacking grounding in physical intuition. It asserts that dimensions themselves can be small and curled – yet concepts like “small” and “curled” are defined within dimensions, not of them. Saying a dimension is small is like saying that time – not a moment in time, but time itself – can be “soon” or short in duration. It misapplies the very conceptual framework through which such properties are understood. At best, it’s a strained metaphor; at worst, it’s a category mistake and conceptual error.

This conceptual inversion reflects a logical gulf that proponents overlook or ignore. They say compactification is a mathematical consequence of the theory, not a contrivance. But without grounding in physical intuition – a deeper concern than empirical support – compactification remains a fix, not a forecast.

Conclusion

String theory may well contain a correct theory of fundamental physics. But without any plausible route to identifying it, string theory as practiced is bad science. It absorbs talent and resources, marginalizes dissent, and stifles alternative research programs. It is extraordinarily popular – and a miscarriage of science.



Extraordinary Popular Miscarriages of Science, Part 5 – Climate Science

NASA reports that ninety-seven percent of climate scientists agree that human-caused climate change is happening.

As with earlier posts on popular miscarriages of science, I look at climate science through the lens of the 20th century historians of science and philosophers of science and conclude that climate science is epistemically thin.

To elaborate a bit, most sensible folk accept that climate science addresses a potentially critical concern and that it has many earnest and talented practitioners. Despite those practitioners, it can be critiqued as bad science. We can do that without delving into the layers of claims, disputations, and counterarguments about the relationships between ice cores, CO₂ concentrations, and temperature. We can instead use the perspectives of prominent historians and philosophers of science of the 20th century, including the Logical Positivists in general, positivist Carl Hempel in particular, Karl Popper, Thomas Kuhn, Imre Lakatos, and Paul Feyerabend. Each perspective offers a distinct philosophical lens that highlights shortcomings in climate science’s methodologies and practices. I’ll explain each of those perspectives, why I think they’re important, and the critiques they would likely advance. These critiques don’t invalidate climate science conceptually as a field of inquiry, but they highlight serious logical and philosophical concerns about its methodologies, practices, and epistemic foundations.

The historians and philosophers invoked here were fundamentally concerned with the demarcation problem: how to differentiate good science, bad science, and pseudoscience using a methodological perspective. They didn’t necessarily agree with each other. In some cases, like Kuhn versus Popper, they outright despised each other. All were flawed, but they were giants who shone brightly and presented systematic visions of how science works and what good science is.

Carnap, Ayer and the Positivists: Verification

The early Logical Positivists, particularly Rudolf Carnap and A.J. Ayer, saw empirical verification as the cornerstone of scientific claims. To be meaningful, a claim must be testable through observation or experiment. Climate science, while rooted in empirical data, struggles with verifiability because of its focus on long-term, global phenomena. Predictions about future consequences like sea level change, crop yield, hurricane frequency, and average temperature are not easily verifiable within a human lifespan or with current empirical methods. That might merely suggest that climate science is hard, not that it is bad. But decades of past predictions and retrodictions have been notoriously poor. Consequently, theories have been continuously revised in light of failed predictions. The reliance on indirect evidence – proxy data and computer simulations – rather than controlled experiments (which would be impossible or unethical) would not satisfy the positivists’ demand for direct, observable confirmation. Climatologist Michael Mann (originator of the “hockey stick” graph) often refers to climate simulation results as data. They are not – not in any sense in which a positivist would use the term. Positivists would see these difficulties and predictive failures as falling short of their strict criteria for scientific legitimacy.

Carl Hempel: Absence of Appeal to Universal Laws

The philosophy of Carl Hempel centered on the deductive-nomological model (aka covering-law model), which holds that scientific explanations should be derived from universal, timeless laws of nature combined with deductive logic about specific sense observations (empirical data). For Hempel, explanation and prediction were two sides of the same coin. If you can’t predict, then you cannot explain. For Hempel to judge a scientific explanation valid, deductive logic applied to laws of nature must confer nomic expectability upon the phenomenon being explained.

Climate science rarely operates with the kinds of laws of nature Hempel considered suitably general, simple, and verifiable. Instead, it relies on statistical correlations and computer models such as linking CO₂ concentrations to temperature increases through statistical trends, rather than strict, law-like statements. These approaches contrast with Hempel’s ideal of deductive certifiability. Scientific explanations should, by Hempel’s lights, be structured as deductive arguments, where the truth of the premises (law of nature plus initial conditions plus empirical data) entails the truth of the phenomenon to be explained. Without universal laws to anchor its explanations, climate science would appear to Hempel to lack the logical rigor of good science. On Hempel’s view, climate science’s dependence on complex models having parameters that are constantly re-tuned further weakens its explanatory power.

Hempel’s deductive-nomological model was a solid effort at removing causality from scientific explanations, something the positivists, following David Hume, thought to be too metaphysical.  The deductive-nomological model ultimately proved unable to bear the load Hempel wanted it to carry. Scientific explanation doesn’t work in certain cases without appeal to the notion of causality. That failure of Hempel’s model doesn’t weaken its criticism of climate science, or criticism of any other theory, however. It merely limits the deductive-nomological model’s ability to defend a theory by validating its explanations.

Karl Popper: Falsifiability

Karl Popper’s central criterion for demarcating good science from bad science and pseudoscience is falsifiability. A scientific theory, in his view, must make risky predictions that can be tested and potentially proven false. If a theory cannot in principle be falsified, it does not belong to the realm of science.

The predictive models of climate science face severe challenges under this criterion. Climate models often project long-term trends, typically global temperature increases over decades or centuries, which are probabilistic and difficult to test. Over shorter horizons, climate science has made abundant falsifiable predictions that were in fact falsified. Popper would initially see this as a mark of bad science, rather than pseudoscience.

But climate scientists have frequently adjusted their models or invoked external factors like previously unknown aerosol concentrations or volcanic eruptions to explain discrepancies. This would make climate science look, to Popper, too much like scientific Marxism and psychoanalysis, both of which he condemned for accommodating all possible outcomes to a prediction. When global temperatures temporarily stabilize or decrease, climate scientists often argue that natural variability is masking a long-term trend, rather than conceding a flaw in the theory. On this point, Popper would see climate science more akin to pseudoscience, since it lacks clear, testable predictions that could definitively refute its core claims.

For Popper, climate science must vigorously court skepticism and invite attempts at disputation and refutation, especially from dissenting insiders like Tol, Curry, and Michaels (more on this below). Instead, climate science brands them as traitors.

Thomas Kuhn: Paradigm Rigidity

Thomas Kuhn agreed that Popper’s notion of falsifiability was how scientists think they behave, eager to subject their theories to disconfirmation. But scientific institutions don’t behave like that. Kuhn described science as progressing through paradigms (frameworks, shared within a scientific community, that define normal scientific practice), periodically interrupted by revolutionary shifts in which a new theory displaces an older one.

A popular criticism of climate science is that science is not based on consensus. Kuhn would disagree, arguing that all scientific paradigms are fundamentally consensus-based.

“Normal science” for Kuhn was the state of things in a paradigm where most activity is aimed at defending the paradigm, thereby rationalizing the rejection of any evidence that disconfirms its theories. In this sense, everyday lab-coat scientists are some of the least scientific of professionals.

“Even in physics,” wrote Kuhn, “there is no standard higher than the assent of the relevant community.” So for Kuhn, evidence does not completely speak for itself, since assent about what evidence exists (Is that blip on the chart a Higgs boson or isn’t it?) must exist within the community for a theory to show consistency with observation. Climate science, more than any current paradigm except possibly string theory, has built high walls around its dominant theory.

That theory is the judgement, conclusion, or belief that human activity, particularly CO₂ emissions, has driven climate change for 150 years and will do so at an accelerated pace in the future. The paradigm virtually ensures that the vast majority of climate scientists agree with the theory because the theory is the heart of the paradigm, as Kuhn would see it. Within a paradigm, Kuhn accepts the role of consensus, but he wants outsiders to be able to overthrow the paradigm.

Given the relevant community’s insularity, Kuhn would see climate scientists’ claim that the anthropogenic warming theory is consistent with all their data as a case of anomalies being rationalized to preserve the paradigm. He would point to Michael Mann’s resistance to disclosing his hockey stick data and simulation code as brutal shielding of the paradigm, regardless of Mann’s having been found innocent of ethics violations.

Climate science’s tendency to dismiss solar influence and alternative hypotheses would likely be interpreted by Kuhn as the marginalization of dissent and paradigm rigidity. Kuhn might not see this rigidity as a sign of dishonesty or interest – as Paul Feyerabend (below) would – but would see the prevailing framework as stifling the revolutionary thinking he believed necessary for scientific advancement. From Kuhn’s perspective, climate science’s entrenched consensus could make it deeply flawed by prioritizing conformity too heavily over innovation.

Imre Lakatos: Climate as “Research Programme”

Lakatos developed his concept of “research programmes” to evaluate scientific progress.  He blended ideas from Popper’s falsification and Kuhn’s paradigm shifts. Lakatos distinguished between progressive and degenerating research programs based on their ability to predict new facts and handle challenges effectively.

Lakatos viewed scientific progress as developing within research programs having two main components. The hard core, for Lakatos, was the set of central assumptions that define the program, which are not easily abandoned. The protective belt is a flexible layer of auxiliary hypotheses, methods, and data interpretations that can be adjusted to defend the hard core from anomalies. A research program is progressive if it predicts novel phenomena and those predictions are confirmed empirically. It is degenerating if its predictions fail and it relies on ad hoc modifications to explain away anomalies.

In climate science, the hard core would be that global climate is changing, that greenhouse gas emissions drive this change, and that climate models can reliably predict future trends. Its protective belt would be the evolving methods of collecting, revising, and interpreting weather data, along with adjustments made in light of new evidence such as volcanic activity.

Lakatos would be more lenient than Popper about continual theory revision and model-tweaking, on the grounds that a progressive research program’s revision of its protective belt is justified by the complexity of the topic. Signs of potential degeneration of the program would include the “pause” in warming from 1998–2012, explained ad hoc as natural variability, an explanation invoked before anyone could know whether the pause would continue. That is, it was called a pause with no knowledge of whether it would end.

I suspect Lakatos would be on the fence about climate science, seeing it as more progressive (in his terms, not political ones) than rival programs, but would be concerned about its level of dogmatism.

Paul Feyerabend: Tyranny of Methodological Monism

Kuhn, Lakatos, and Paul Feyerabend were close friends who, while drawing on each other’s work, differed greatly in viewpoint. Feyerabend advocated epistemological anarchism, defending his claim that no scientific advancement ever proceeds purely within what is taught as “the scientific method.” He argued that science should be open to diverse approaches and that imposing methodological rules suppresses necessary creativity and innovation. Feyerabend often cited Galileo’s methodology, which has little in common with what is now called the scientific method. He famously claimed that “anything goes” in science, emphasizing the importance of methodological pluralism.

From Feyerabend’s perspective, climate science relies excessively on a narrow set of methodologies, particularly computer modeling and statistical analysis. The field’s heavy dependence on these tools and its discounting of historical climatology is a form of methodological monism. Its emphasis on consensus, rigid practices, and public hostility to dissent (more on this below) would be viewed as stifling the kind of creative, unorthodox thinking that Feyerabend believed essential for scientific breakthroughs. The pressure to conform, coupled with the politicization of climate science, has led to a homogenized field that lacks cognitive diversity.

Feyerabend distrusted the orthodoxy of the social practices in what Kuhn termed “normal science” – what scientific institutions do in their laboratories. Against Lakatos, Feyerabend distrusted any rule-based scientific method at all. Science in the mid-1900s had fallen prey to the “tyranny of tightly knit, highly corroborated, and gracelessly presented theoretical systems.”

Viewing science as an institution, he said that science was a threat to democracy and that there must be “a separation of state and science just as there is a separation between state and religious institutions.” He called 20th century science “the most aggressive, and most dogmatic religious institution.” He wrote that institutional science resembled more the church of Galileo’s day than it resembled Galileo. I think he would say the same of climate science.

Feyerabend complained that university research requires “a willingness to subordinate one’s ideas to those of a team leader.” In the case of global warming, government and government-funded scientists are deciding not only what is important as a scientific program but what is important as energy policy and social agenda. Feyerabend would be utterly horrified.

Feyerabend’s biggest concern, I suspect, would be the frequent alignment of climate scientists with alternative energy initiatives. Climate scientists who advocate for solar, wind, and hydrogen step beyond their expertise in diagnosing climate change into prescribing solutions, a policy domain involving engineering and economics. Michael Mann still prioritizes “100% renewable energy,” despite all evidence of its engineering and economic infeasibility.

Further, advocacy for a specific solution over others (nuclear power is often still shunned) suggests a theoretical precommitment likely to introduce observational bias. Climate research grants from renewable-energy advocates, including NGOs and the Department of Energy’s ARPA-E program, create incentives for scientists to emphasize climate problems that those technologies could cure. Climate science has been a gravy train for bogus green tech, such as Solyndra and Abound Solar.

Why Not Naomi Oreskes?

All my science history gods are dead white men. Why not include a prominent living historian? Naomi Oreskes at Harvard is the obvious choice. We need not speculate about how she would view climate science. She has been happy to tell us. Her activism and writings suggest she functions more as an advocate for the climate political cause than as a historian of science. Her role extends beyond documenting the past to shaping contemporary debate.

Oreskes testified before U.S. congressional committees (the House Select Committee on the Climate Crisis in 2019 and the Senate Budget Committee in 2023) as a Democratic-invited witness. There she accused political figures of harassing scientists and pushed for action against fossil fuel companies. She aligns with progressive anti-nuclear leanings. An objective historian would limit herself to historical facts and the resulting predictions and explanations rather than advocating specific legislative actions. She embraces the term “climate activist,” arguing that citizen engagement is essential for democracy.

Oreskes’s scholarship, notably her 2004 essay “The Scientific Consensus on Climate Change” and her book Merchants of Doubt, employs the narrative of universal scientific agreement on anthropogenic climate change while portraying dissent solely as industry-driven disinformation. She wrote that 100% of 928 peer-reviewed papers supported the IPCC’s position on climate change. Conflicting peer-reviewed papers show Oreskes to have, at best, cherry-picked data to bolster a political point. Pursuing legal attacks on fossil fuel companies is activism, not analysis.

Acts of the “Relevant Community”

Countless scientists themselves engage in climate advocacy, even in analyzing the effectiveness of that advocacy: advocacy backed by science, and science applied to advocacy. A paradigmatic example – using Kuhn’s term literally – is Dr. James Lawrence Powell’s 2017 “The Consensus on Anthropogenic Global Warming Matters.” In it, Powell addresses a critic’s response to Powell’s earlier report on the degree of scientific consensus. Powell argues that 99.99% of scientists accept anthropogenic warming, rather than 97% as his critic claims. But the thrust of Powell’s paper is that the degree of consensus matters greatly, “because scholars have shown that the stronger the public believe the consensus to be, the more they support the action on global warming that human society so desperately needs.” Powell goes on for seven fine-print pages, citing Oreskes’ work, with charts and appendices on the degree of scientific consensus. He not only focuses on consensus, he seeks consensus about consensus.

Of particular interest to anyone with Kuhn’s perspective – let alone Feyerabend’s – is the way climate science treats its backsliders. Dissenters are damned from the start, but those who have left the institution (literally, in the case of The Intergovernmental Panel on Climate Change) are further vilified.

Dr. Richard Tol, a lead author for the Fifth IPCC Assessment Report, later identified methodological flaws in IPCC work. Dr. Judith Curry, a lead author for the Third Assessment Report, later became a prominent critic of the IPCC’s consensus-driven process. She criticized climate models and the IPCC’s dismissal of natural climate variability. She believes (in Kuhnian terms) that the IPCC’s theories are value-laden and that its observations are theory-laden, the theory being human causation. Scientific American, a once agenda-less publication, called Curry a “climate heretic.” Dr. Patrick Michaels, a contributor to the Second Assessment Report, later emerged as a vocal climate change skeptic, arguing that the IPCC ignores natural climate variability and uses a poor representation of climate dynamics.

These scientists represent a small minority of the relevant community. But that community has challenged the motives and credentials of Tol, Curry, and Michaels more than their science. Michael Mann accused Curry of undermining science with “confusionism and denialism” in a 2017 congressional testimony. Mann said that any past legitimate work by Curry was invalidated by her “boilerplate denial drivel.” Mann said her exit strengthened the field by removing a disruptive voice. Indeed.

Tampering with Evidence

Everything above deals with methodological and social issues in climate science. Kuhn, Feyerabend, and even the Strong Programme sociologists of science assumed that scientists were above fudging the data. Tony Heller has, for over a decade, assembled screenshots of NASA and NOAA temperature records that prove continual revision of historic data, making the past look colder and the present look hotter. Heller’s opponents relentlessly engage in ad hominem attacks and character-based dismissals rather than focusing on the substance of his arguments. If I can pick substance from his opponents’ positions, it would be that Heller cherry-picks U.S.-only examples and dismisses both global data and the corroboration of climate theory by evidence beyond temperature records. Heller may be guilty of cherry-picking. I haven’t followed the debate closely for many years.

But in 2013, I wrote to Judith Curry on the topic, assuming she was close to the issue. I asked her what fraction of NASA’s adjustments were consistent with strengthening the argument for 20th-century global warming, i.e., what fraction was consistent with Heller’s argument. She said the vast majority of them were.

Curry acknowledged that adjustments like those for urban heat-island effects and differences in observation times are justified in principle, but she challenged their implementation. In a 2016 interview with The Spectator, she said, “The temperature record has been adjusted in ways that make the past look cooler and the present warmer – it’s not a conspiracy, but it’s not neutral either.” She ties the bias to institutional pressures like funding and peer expectations. Feyerabend would smirk and remark that a conspiracy is not needed when the paradigm is ideologically aligned from the start.

In a 2017 testimony before the U.S. House Committee on Science, Space, and Technology, Curry said, “Adjustments to historical temperature data have been substantial, and in many cases, these adjustments enhance the warming trend.” She cited this as evidence of bias, implying the process lacks transparency and independent validation.

Conclusion

From the historical and philosophical perspectives discussed above, climate science can be critiqued as bad science. For the Logical Positivists, its global, far-future claims are hard to verify directly, challenging their empirical basis. For Hempel, its reliance on models and statistical trends rather than universal laws undermines its deductive explanatory power. For Popper, its long-term predictions resist falsification, blurring the line between science and non-science. For Kuhn, its dominant paradigm suppresses alternative viewpoints, hindering progress. Lakatos would likely endorse its progressive program but would challenge its dogmatism. Feyerabend would be disgusted by its narrow methodology and its institutional rigidity. He would call it a religion – a bad one. He would quip that 97% of climate scientists agree that they do not want to be defunded. Naomi Oreskes thinks climate science is vital. I think it’s crap.



Fuck Trump: The Road to Retarded Representation

-Bill Storage, Apr 2, 2025

On February 11, 2025, the American Federation of Government Employees (AFGE) staged a “Rally to Save the Civil Service” at the U.S. Capitol. The event aimed to protest proposed budget cuts and personnel changes affecting federal agencies under the Trump administration. Notable attendees included Senators Brian Schatz (D-HI) and Chris Van Hollen (D-MD), and Representatives Donald Norcross (D-NJ) and Maxine Dexter (D-OR).

Dexter took the mic and said that “we have to fuck Trump.” Later Norcross led a “Fuck Trump” chant. The senators and representatives then joined a song with the refrain, “We want Trump in jail.” “Fuck Donald Trump and Elon Musk,” added Rep. Mark Pocan (D-WI).

This sort of locution might be seen as a paradigmatic example of free speech and authenticity in a moment of candid frustration, devised to align the representatives with a community that is highly critical of Trump. On this view, “Fuck Trump” should be understood within the context of political discourse and rhetorical appeal to a specific audience’s emotions and cultural values.

It might also be seen as a sad reflection of how low the Democratic Party has sunk and how far the intellectual bar for becoming a representative in the US Congress has dropped.

I mostly write here about the history of science, more precisely, about History of Science, the academic field focused on the development of scientific knowledge and the ways that scientific ideas, theories, and discoveries have evolved over time. And how they shape and are shaped by cultural, social, political, and philosophical contexts. I held a Visiting Scholar appointment in the field at UC Berkeley for a few years.

The Department of the History of Science at UC Berkeley was created in 1960. There in 1961, Thomas Kuhn (1922 – 1996) completed the draft of The Structure of Scientific Revolutions, which very unexpectedly became the most cited academic book of the 20th century. I was fortunate to have second-hand access to Kuhn through an 18-year association with John Heilbron (1934 – 2023), who, outside of family, was by far the greatest influence on what I spend my time thinking about. John, Vice-Chancellor Emeritus of the UC System and senior research fellow at Oxford, was Kuhn’s grad student and researcher while Kuhn was writing Structure.

Thomas Kuhn

I want to discuss here the uncannily direct ties between Thomas Kuhn’s analysis of scientific revolutions and Rep. Norcross’s chanting “Fuck Trump,” along with two related aspects of the Kuhnian aftermath. The second aspect is the academic precedents that might be seen as giving justification to Norcross’s pronouncements. The third is the decline in academic standards over the time since Kuhn was first understood to be a validation of cultural relativism. To make this case, I need to explain why Thomas Kuhn became such a big deal, what relativism means in this context, and what Kuhn had to do with relativism.

To do that I need to use the term epistemology. I can’t do without it. Epistemology deals with questions that were more at home with the ancient Greeks than with modern folk. What counts as knowledge? How do we come to know things? What can be known for certain? What counts as evidence? What do we mean by probable? Where does knowledge come from, and what justifies it?

These questions are key to History of Science because science claims to have special epistemic status. Scientists and most historians of science, including Thomas Kuhn, believe that most science deserves that status.

Kernels of scientific thinking can be found in the ancient Greeks and Romans and sporadically through the Middle Ages. Examples include Adelard of Bath, Roger Bacon, John of Salisbury, and Averroes (Ibn Rushd). But prior to the Copernican Revolution (starting around 1550 and exploding under Galileo, Kepler, and Newton) most people were happy with the idea that knowledge was “received,” either through the ancients or from God and religious leaders, or from authority figures of high social status. A statement or belief was considered “probable”, not if it predicted a likely future outcome but if it could be supported by an authority figure or was justified by received knowledge.

Scientific thinking, roughly after Copernicus, introduced the radical notion that the universe could testify on its own behalf. That is, physical evidence and observations (empiricism) could justify a belief against all prior conflicting beliefs, regardless of what authority held them.

Science, unlike the words of God, theologians, and kings, does not deal in certainty, despite the number of times you have heard the phrase “scientifically proven fact.” There is no such thing. Proof is in the realm of math, not science. Laws of nature are generalizations about nature that we have good reason to act as if we know them to be universally and timelessly true. But they are always contingent. 2 + 2 is always 4, in the abstract mathematical sense. Two atoms plus two atoms sometimes makes three atoms. It’s called fission or transmutation. No observation can ever show 2 + 2 = 4 to be false. In contrast, an observation may someday show E = mc² to be false.

Science was contagious. Empiricism laid the foundation of the Enlightenment by transforming the way people viewed the natural world. John Locke’s empirical philosophy greatly influenced the foundation of the United States. Empiricism contrasts with rationalism, the idea that knowledge can be gained by sheer reasoning and through innate ideas. Plato was a rationalist. Aristotle thought Plato’s rationalism was nonsense. His writings show he valued empiricism, though he was not a particularly good empiricist (“a dreadfully bad physical scientist,” wrote Kuhn). 2400 years ago, there was tension between rationalism and empiricism.

The ancients held related concerns about the contrast between absolutism and relativism. Absolutism posits that certain truths, moral principles, and standards are universally and timelessly valid, regardless of perspectives, cultures, or circumstances. Relativism, in contrast, holds that truth, morality, and knowledge are context-sensitive and are not universal or timeless.

In his dialogue Theaetetus, Plato examines epistemological relativism by challenging the doctrine of Protagoras, who asserted that truth and knowledge are not absolute. In Theaetetus, Socrates, Plato’s mouthpiece, asks, “If someone says, ‘This is true for me, but that is true for you,’ then does it follow that truth is relative to the individual?”


Epistemological relativism holds that truth is relative to a community. It is closely tied to the anti-Enlightenment romanticism that developed in the late 1700s. The romantics thought science was spoiling the mystery of nature. “Our meddling intellect mis-shapes the beauteous forms of things: We murder to dissect,” wrote Wordsworth.

 Relativism of various sorts – epistemological, moral, even ontological (what kinds of things exist) – resurged in the mid 1900s in poststructuralism and postmodernism. I’ll return to postmodernism later.

The contingent nature of scientific beliefs (as opposed to the certitude of math), right from the start in the Copernican era, was not seen by scientists or philosophers as support for epistemological relativism. Scientists – good ones, anyway – hold it only probable, not certain, that all copper is conductive. This contingent state of scientific knowledge does not, however, mean that copper can be conductive for me but not for you. Whatever evidence might exist for the conductivity of copper, scientists believe, can speak for itself. If we disagreed about conductivity, we could pull out an Ohmmeter and that would settle the matter, according to scientists.

Science has always had its enemies, at times including clerics, romantics, Luddites, and environmentalists. Science, viewed as an institution, could be seen as the monster that spawned atomic weapons, environmental ruin, stem cell hubris, and inequality. But those are consequences of science, external to its fundamental method. They don’t challenge science’s special epistemic status, but epistemic relativists do.

Relativism about knowledge – epistemological relativism – gained steam in the 1800s. Martin Heidegger, Karl Marx (though not intentionally), and Sigmund Freud, among others, brought the idea into academic spheres. While moral relativism and ethical pluralism (likely influenced by Friedrich Nietzsche) had long been in popular culture, epistemological relativism stayed sealed inside Humanities departments, apparently because the objectivity of science seemed unassailable.

Enter Thomas Kuhn, Physics PhD turned historian for philosophical reasons. His Structure was originally published as a humble monograph in International Encyclopedia of Unified Science, then as a book in 1962. One of Kuhn’s central positions was that evidence cannot really settle non-trivial scientific debates because all evidence relies on interpretation. One person may “see” oxygen in the jar while another “sees” de-phlogisticated air. (Phlogiston was part of a theory of combustion that was widely believed before Antoine Lavoisier “disproved” it along with “discovering” oxygen.) Therefore, there is always a social component to scientific knowledge.

Kuhn’s point, seemingly obvious and innocuous in retrospect, was really nothing new. Others, like Michael Polanyi, had published similar thoughts earlier. But for reasons we can only guess about in retrospect, Kuhn’s contention that scientific paradigms are influenced by social, historical, and subjective factors was just the ammo that epistemological relativism needed to escape the confines of Humanities departments. Kuhn’s impact probably stemmed from the political climate of the 1960s and the detailed way he illustrated examples of theory-laden observations in science. His claim that, “even in physics, there is no standard higher than the assent of the relevant community” was devoured by socialists and relativists alike – two classes with much overlap in academia at that time. That makes Kuhn a relativist of sorts, but he still thought science to be the best method of investigating the natural world.

Kuhn argued that scientific revolutions and paradigm shifts (a term coined by Kuhn) are fundamentally irrational. That is, during scientific revolutions, scientific communities depart from empirical reasoning. Adherents often defend their theories illogically, discounting disconfirming evidence without grounds. History supports Kuhn on this for some cases, like Copernicus vs. Ptolemy, Einstein vs. Newton, quantum mechanics vs. Einstein’s deterministic view of the subatomic, but not for others like plate tectonics and Watson and Crick’s discovery of the double-helix structure of DNA, where old paradigms were replaced by new ones with no revolution.

The Strong Programme, introduced by David Bloor, Barry Barnes, John Henry and the Edinburgh School as Sociology of Scientific Knowledge (SSK), drew heavily on Kuhn. It claimed to understand science only as a social process. Unlike Kuhn, it held that all knowledge, not just science, should be studied in terms of social factors without privileging science as a special or uniquely rational form of knowledge. That is, it denied that science had a special epistemic status and outright rejected the idea that science is inherently objective or rational. For the Strong Programme, science was “socially constructed.” The beliefs and practices of scientific communities are shaped solely by social forces and historical contexts. Bloor and crew developed their “symmetry principle,” which states that the same kinds of causes must be used to explain both true and false scientific beliefs.

The Strong Programme folk called themselves Kuhnians. What they got from Kuhn was that science should come down from its pedestal, since all knowledge, including science, is relative to a community. And each community can have its own truth. That is, the Strong Programmers were pure epistemological relativists. Kuhn repudiated epistemological relativism (“I am not a Kuhnian!”) and, to his chagrin, was still lionized by the Strong Programmers. He wrote, “What passes for scientific knowledge becomes, then, simply the belief of the winners. I am among those who have found the claims of the strong program absurd: an example of deconstruction gone mad.” (Deconstruction is an essential concept in postmodernism.)

“Truth, at least in the form of a law of noncontradiction, is absolutely essential,” said Kuhn in a 1990 interview. “You can’t have reasonable negotiation or discourse about what to say about a particular knowledge claim if you believe that it could be both true and false.”

No matter. The Strong Programme and other Kuhnians appropriated Kuhn and took it to the bank. And the university, especially the social sciences. Relativism had lurked in academia since the 1800s, but Kuhn’s scientific justification that science isn’t justified (in the eyes of the Kuhnians) brought it to the surface.


Herbert Marcuse, “Father of the New Left,” also at Berkeley in the 1960s, does not appear to have had contact with Kuhn. But Marcuse, like the Strong Programme, argued that knowledge was socially constructed, a position that Kuhnians attributed to Kuhn. Marcuse was critical of the way that Enlightenment values and scientific rationality were used to legitimize oppressive structures of power in capitalist societies. He argued that science, in its role as part of the technological apparatus, served the interests of oppressors. Marcuse saw science as an instrument of domination rather than emancipation. The term “critical theory” originated in the Frankfurt School in the early 20th century, but Marcuse, once a main figure in Frankfurt’s Institute for Social Research, put Critical Theory on the map in America. Higher academia began its march against traditional knowledge, waving the banners of Marcusian cynicism and Kuhnian relativism.

Postmodernism means many things in different contexts. In 1960s academia, it referred to a reaction against modernism and Enlightenment thinking, particularly thought rooted in reason, progress, and universal truth. Many of the postmodernists saw in Kuhn a justification for certain forms of both epistemic and moral relativism. Prominent postmodernists included Jean-François Lyotard, Michel Foucault, Jean Baudrillard, Richard Rorty, and Jacques Derrida. None of them, to my knowledge, ever made a case for unqualified epistemological relativism. Their academic intellectual descendants often do.

20th century postmodernism had significant intellectual output, a point lost on critics like Gross and Levitt (Higher Superstition, 1994) and Dinesh D’Souza. Derrida’s deconstruction of written texts took hermeneutics to a new level and has proved immensely valuable in the analysis of ancient texts, as has the reader-response criticism put forth by Louise Rosenblatt (who was not aligned with the radical skepticism typical of postmodernism) and by Derrida, and embraced by Stanley Fish (more on whom below). All practicing scientists would benefit from Richard Rorty’s elaborations on the contingency of scientific knowledge, which are consistent with views held by Descartes, Locke, and Kuhn.

Michel Foucault attacked science directly, particularly psychology and (oddly, from where we stand today) sociology. He thought those sciences constructed a specific normative picture of what it means to be human, and that the farther a person was from the idealized clean-cut straight white western European male, the more aberrant those sciences judged the person to be. Males, on Foucault’s view, had repressed women for millennia to construct an ideal of masculinity that serves as the repository of political power. He was brutally anti-Enlightenment and was disgusted that “our discourse has privileged reason, science, and technology.” Modernity, he held, must be condemned constantly and ruthlessly. Foucault was gay, and for a time, he wanted sex to be the center of everything.

Foucault was once a communist. His influence on identity politics and woke ideology is obvious, but Foucault ultimately condemned communism and concluded that sexual identity was an absurd basis on which to form one’s personal identity.

Rosenblatt, Rorty, Derrida, and even at times Foucault, despite their radical positions, displayed significant intellectual rigor. This seems far less true of their intellectual offspring. Consider Sandra Harding, author of “The Gender Dimension of Science and Technology” and consultant to the U.N. Commission on Science and Technology for Development. Harding argues that the Enlightenment resulted in a gendered (male) conception of knowledge. She wrote in The Science Question in Feminism that it would be “illuminating and honest” to call Newton’s laws of motion “Newton’s rape manual.”

Cornel West, who has held fellowships at Harvard, Yale, Princeton, and Dartmouth, teaches that the Enlightenment concepts of reason and individual rights were projected by the ruling classes of the West to guarantee their own liberty while repressing racial minorities. Critical Race Theory, the offspring of Marcuse’s Critical Theory, questions, as stated by Richard Delgado in Critical Race Theory, “the very foundations of the liberal order, including equality theory, legal reasoning, Enlightenment rationalism, and neutral principles of constitutional law.”

Allan Bloom, a career professor of Classics who translated Plato’s Republic in 1968, wrote in his 1987 book The Closing of the American Mind about the decline of intellectual rigor in American universities. Bloom wrote that in the 1960s, “the culture leeches, professional and amateur, began their great spiritual bleeding” of academics and democratic life. Bloom thought that the pursuit of diversity and universities’ desire to increase the number of college graduates at any cost undermined the outcomes of education. He saw, in the 1960s, social and political goals taking priority over the intellectual and academic purposes of education, with the bulk of unfit students receiving degrees of dubious value in the Humanities, his own area of study.

At American universities, Marx, Marcuse, and Kuhn were invoked in the Humanities to paint the West, and especially the US, as cultures of greed and exploitation. Academia believed that Enlightenment epistemology and Enlightenment values had been stripped of their grandeur by sound scientific and philosophical reasoning (i.e. Kuhn). Bloom wrote that universities were offering students every concession other than education. “Openness used to be the virtue that permitted us to seek the good by using reason. It now means accepting everything and denying reason’s power,” wrote Bloom, adding that by 1980 the belief that truth is relative was essential to university life.

Anti-foundationalist Stanley Fish, Visiting Professor of Law at Yeshiva University, invoked Critical Theory in 1985 to argue that American judges should think of themselves as “supplementers” rather than “textualists.” As such, they “will thereby be marginally more free than they otherwise would be to infuse into constitutional law their current interpretations of our society’s values.” Fish openly rejects the idea of judicial neutrality because interpretation, whether in law or literature, is always contingent and socially constructed.


If Bloom’s argument is even partly valid, we now live in a second or third generation of the academic consequences of the combined decline of academic standards and the incorporation of moral, cultural, and epistemological relativism into college education. We have graduated PhDs in the Humanities, educated by the likes of Sandra Harding and Cornel West, who never should have been in college, and who learned nothing of substance there beyond relativism and a cynical disgust for reason. And those PhDs are now educators who have graduated more PhDs.

Peer-reviewed journals are now being reviewed by peers who, by the standards of three generations earlier, might not be qualified to grade spelling tests. The academic products of this educational system are hired to staff government agencies and HR departments, and to teach school children Critical Race Theory, Queer Theory, and Intersectionality – which are given the epistemic eminence of General Relativity – along with the turpitude of national pride and patriotism.

An example, with no offense intended to those who call themselves queer, would be to challenge the epistemic status of Queer Theory. Is it parsimonious? What is its research agenda? Does it withstand empirical scrutiny and generate consistent results? Do its theorists adequately account for disconfirming evidence? What bold hypothesis in Queer Theory makes a falsifiable prediction?

Herbert Marcuse’s intellectual descendants, educated under the standards detailed by Bloom, now comprise progressive factions within the Democratic Party, particularly those advocating socialism and Marxist-inspired policies. The rise of figures like Bernie Sanders, Alexandria Ocasio-Cortez, and others associated with the “Democratic Socialists of America” reflects a broader trend in American politics toward embracing a combination of Marcuse’s critique of capitalism, epistemic and moral relativism, and a hefty decline in academic standards.

One direct example is the notion that certain forms of speech, including reactionary rhetoric, should not be tolerated if they undermine social progress and equity. Allan Bloom again comes to mind: “The most successful tyranny is not the one that uses force to assure uniformity but the one that removes the awareness of other possibilities.”

Echoes of Marcuse and of others of the 1960s who endorsed rage and violence in anti-colonial struggles (Frantz Fanon, Stokely Carmichael, the Weather Underground) are heard in modern academic outrage that its adherents see as a necessary reaction against oppression. Judith Butler of UC Berkeley, who called the October 2023 Hamas attacks an “act of armed resistance,” once wrote that “understanding Hamas, Hezbollah as social movements that are progressive, that are on the left, that are part of a global left, is extremely important.” College students now learn that rage is an appropriate and legitimate response to systemic injustice, patriarchy, and oppression. Seeing the US as a repressive society that fosters complacency toward the marginalization of under-represented groups while striving to impose heteronormativity and hegemonic power is, to academics like Butler, grounds for rage, if not for violent response.

Through their college educations and through ideas and rhetoric supported by “intellectual” movements bred in American universities, politicians, particularly those more aligned with relativism and Marcuse-styled cynicism, feel justified in using rhetorical tools born of relaxed academic standards and tangential admissions criteria.

In the relevant community, “Fuck Trump” is not an aberrant tantrum in an echo chamber but a justified expression of solidarity-building and speaking truth to power. But I would argue, following Bloom, that it reveals a political retardation originating in shallow academic domains that followed the deterioration of civic educational priorities.


Examples of such academic domains serving as obvious predecessors to present causes at the center of left politics include:

  • 1965: Herbert Marcuse (UC Berkeley) in Repressive Tolerance argues for intolerance toward prevailing policies, stating that a “liberating tolerance” would consist of intolerance to right-wing movements and toleration of left-wing movements. Marcuse advanced Critical Theory and a form of Marxism in which genders and races replace laborers as the victims of capitalist oppression.

  • 1971: Murray Bookchin’s (Alternative University, New York) Post-Scarcity Anarchism, followed by The Ecology of Freedom (1982), introduces the eco-socialism that gives rise to the Green New Deal.

  • 1980: Derrick Bell (New York University School of Law), in “Brown v. Board of Education and the Interest-Convergence Dilemma,” argues that civil rights advance only when they align with the interests of white elites. Later, Bell, Kimberlé Crenshaw, and Richard Delgado (Seattle University) develop Critical Race Theory, claiming that “colorblindness” is a form of oppression.

  • 1984: Michel Foucault’s (Collège de France) The Courage of Truth addresses how individuals and groups form identities in relation to truth and power. His work greatly informs Queer Theory, post-colonial ideology, and the concept of toxic masculinity.

  • 1985: Stanley Fish (Yeshiva University) and Thomas Grey (Stanford Law School) reject judicial neutrality and call for American judges to infuse into constitutional law their current interpretations of our society’s values.

  • 1989: Kimberlé Crenshaw of Columbia Law School introduced the concept of Intersectionality, claiming that traditional frameworks for understanding discrimination were inadequate because they overlooked the ways that multiple forms of oppression (e.g., race, gender, class) interacted.

  • 1990: Judith Butler’s (UC Berkeley) Gender Trouble introduces the concept of gender performativity, arguing that gender is socially constructed through repeated actions and expressions. Butler argues that the emotional well-being of vulnerable individuals supersedes the right to free speech.

  • 1991: Teresa de Lauretis (UC Santa Cruz) introduces the term “Queer Theory” to challenge traditional understandings of gender and sexuality, particularly in relation to identity, norms, and power structures.

Marcusian cynicism might have simply died an academic fantasy, as it seemed destined to do through the early 1980s, if not for its synergy with the cultural relativism that was bolstered by the universal and relentless misreading and appropriation of Thomas Kuhn that permeated academic thought in the 1960s through 1990s. “Fuck Trump” may have happened without Thomas Kuhn through a different thread of history, but the path outlined here is direct and well-travelled. I wonder what Kuhn would think.



Extraordinary Miscarriages of Science, Part 2 – Creation Science

By Bill Storage, Jan. 21, 2024

Creation Science can refer either to young-earth or old-earth creation theories. Young Earth Creationism (YEC) makes specific claims about the creation of the universe from nothing, about the age of the earth as inferred from the Book of Genesis, and about the creation of separate “kinds” of creatures. Wikipedia’s terse coverage, as with Lysenkoism, brands it a pseudoscience without explanation. But YEC makes bold, falsifiable claims about biology and genetics (not merely evolution), geology (plate tectonics or the lack thereof), and, most significantly, Newtonian mechanics. While it posits unfalsifiable unobservables, including a divinity that sculpts the universe in six days, much of its paradigm conflicts with modern physics in testable ways. Creation Science is not a miscarriage of science in the sense of some of the others. I’m covering it here because it has many similarities to other bad sciences and is a great test of demarcation criteria. Creation Science does limited harm because it preaches to the choir. I doubt anyone ever joined a cult because they were persuaded that creationism is scientific.

Intelligent Design

Old-earth creationism, now known as Intelligent Design (ID) theory, is much different. While ID could have confined itself to the realm of metaphysics and stayed out of our crosshairs, it did not. ID mostly confines itself to the realm of descriptions and explanations, but it explicitly claims to be a science. Again, Wikipedia brands ID as pseudoscience, and, again, this distinction seems shallow. I’m also concerned that the label is rooted in anti-Christian bias, with reasons invented after the labelling as a rationalization. To be clear, I see nothing substantial in ID that is scientific, but its opponents’ arguments are often not much better than those of its proponents.

It might be true that a supreme being, benevolent or otherwise, guided the hand of cosmological and biological evolution. But simpler, adequate explanations of those processes exist outside of ID, and ID adds no explanatory power to the theories of cosmology and biology that are independent of it. This was not always the case. The US founding fathers, often labeled Christian by modern Christians, were not Christian at all. They were deists – believers in a creator with little interest in earthly affairs – mainly because they lacked a theoretical framework to explain the universe without a creator. They accepted the medieval idea that complex organisms, like complex mechanisms, must have a designer. Emergent complexity wasn’t seen as an option. That they generally – notably excepting David Hume – failed to see the circularity of this “teleological argument” can likely be explained by Kuhn’s notion of the assent of the relevant community. Each of them bought it because they all bought it. It was the reigning paradigm.

While intelligent design could logically be understood to not require a Judeo-Christian god, ID seems to have emerged out of fundamentalist Christian objection to teaching evolution in public schools. Logically, “intelligent design” could equally apply to theories involving a superior but not supreme creator or inventor. Space aliens may have seeded the earth with amino acids. Complex organic molecules could have been sent to earth on a comet by highly advanced – and highly patient – aliens, something we might call directed panspermia. Or we could be living in a computer simulation of an alien school kid. Nevertheless, ID seems to be a Christian undertaking positing a Christian God.

Opponents are quick to point this out. ID is motivated by Christian sentiments and is closely aligned with Christian evangelism. Is this a fair criticism of ID as a science? I tend to think not. Newton was strongly motivated by Christian beliefs, though his religion, something like Arianism or Unitarianism, would certainly be rejected by modern Christians. Regardless, Newton’s religious motivation for his studies no more invalidates them than Linus Pauling’s (covered below) economic motivations invalidate his work. Motivations of practitioners, in my view, cannot be grounds for calling a field of inquiry pseudoscience or bad science. Some social scientists disagree.

Dominated by Negative Arguments

YEC and ID writings focus on arguing that much of modern science, particularly evolutionary biology, cannot be correct. For example, much of YEC’s efforts are directed at arguing that the earth cannot be 4.5 billion years old. Strictly speaking, this (the theory that another theory is wrong) is a difficult theory to disprove. Most scientists tend to think that disproving a theory that itself aims to disprove geology is pointless. They hold that the confirming evidence for modern geologic theory is sufficient. Karl Popper, who held that absence of disconfirmation was the sole basis for judging a theory good, would seem to have a problem with this, though. YEC also holds theories defending a single worldwide flood within the last 5,000 years. That seems reasonably falsifiable, if one accepts a large body of related science including several radioactive dating techniques, mechanics of solids, denudation rate calculations, and much more.

Further, it is flawed reasoning (“false choice”) to think that exposing a failure of classical geology is support for a specific competing theory.

YEC and, perhaps surprisingly, much of ID have assembled a body of negative arguments against Darwinism, geology, and other aspects of a naturalistic worldview. Arguing that fossil evidence is an insufficient basis for evolution and that natural processes cannot explain the complexity of the eyeball are characteristically negative arguments. This raises the question of whether a bunch of negative arguments can rightly be called a science. While Einstein started with the judgement that the wave theory of light could not be right (he got the idea from Maxwell), his program included developing a bold, testable, and falsifiable theory that posited that light was something that came in discrete packages, along with predictions about how it would behave in a variety of extreme circumstances. Einsteinian relativity gives us global positioning and useful tools in our cell phones. Creationism’s utility seems limited to philosophical realms. Is lack of practical utility or observable consequences a good basis for calling an endeavor unscientific? See String Theory, below.

Wikipedia (you might guess that I find Wikipedia great for learning the discography of Miley Cyrus but poor for serious inquiries), appealing to “consensus” and “the scientific community,” judges Creation Science to be pseudoscience because creationism invokes supernatural causes. In the same article, it decries the circular reasoning of ID’s argument from design (the teleological argument). But claiming that Creation Science invokes supernatural causes is equally circular unless we’re able to draw the natural/supernatural distinction independently from the science/pseudoscience distinction. Creationists hold that creation is natural; that’s their whole point.

Ignoring Disconfirming Evidence

YEC proponents seem to refuse to allow that any amount of radioactive dating evidence falsifies their theory. I’m tempted to say this alone makes YEC either a pseudoscience or just terrible science. But doing so would force me to accept the 2nd and 3rd definitions of science that I gave in the previous post. In other words, I don’t want to judge a scientific inquiry’s status (or even the status of a non-scientific one) on the basis of what its proponents (a community or institution) do at an arbitrary point in time. Let’s judge the theory, not its most vocal proponents. A large body of German physicists denied that Eddington’s measurement confirmed Einstein’s prediction of bent light rays during an eclipse because they rejected “Jewish physics.” Their hardheadedness is no reason to call their preferred wave theory of light a bad theory. It was a good theory with bad adherents, a good theory that we now have excellent reasons to judge wrong.

Some YEC proponents hold that, essentially, the fossil record is God’s little joke. Indeed it is possible that when God created the world in six days a few thousand years ago he laid down a lot of evidence to test our faith. The ancient Christian writer Tertullian argued that Satan traveled backward in time to plant evidence against Christian doctrine (more on him soon). It’s hard to disprove. The possibility of deceptive evidence is related to the worry expressed by Hume and countless science fiction writers that the universe, including fossils and your memories of today’s breakfast, could have been planted five minutes ago. Like the Phantom Time hypothesis, it cannot be disproved. Also, as with Phantom Time, we have immense evidence against it. And from a practical perspective, nothing in the future would change if it were true.

Lakatos Applied to Creation Science

Lakatos might give us the best basis for rejecting Creation Science as pseudoscience rather than as an extraordinarily bad science, if that distinction has any value, which it might in the case of deciding what can be taught in elementary school. (We have no laws against unsuccessful theories or poor science.) Lakatos was interested in how a theory makes use of laws of nature and what its research agenda looks like. Laws of nature are regularities observed in nature so widely that we assume them to be true, contingently, and ground predictions about nature on them. Creation Science usually has little interest in making testable predictions about nature or the universe on the basis of such laws. Dr. Duane Gish of the Institute for Creation Research (ICR) wrote in Evolution: The Fossils Say No! that “God used processes which are not now operating anywhere in the natural universe.” This is a major point against Creation Science counting as science.

Creation Science’s lack of testable predictions might not even be a fair basis for judging a pursuit to be unscientific. Botany is far more explanatory than predictive, and few of us, including Wikipedia, are ready to expel botany from the science club.

Most significant for me, Lakatos casts doubt on Creation Science by the thinness of its research agenda. A look at the ICR’s site reveals a list of papers and seminars all by PhDs and MDs. They seem to fall in two categories: evolution is wrong (discussed above), and topics that are plausible but that don’t give support for creationism in any meaningful way. The ploy here is playing a game with the logic of confirmation.

By the Will of Elvis

Consider the following statement of hypothesis. Everything happens by the will of Elvis. Now this statement, if true, logically ensures that the following disjunctive statement is true: Either everything happens by the will of Elvis or all cats have hearts. Now let’s go out with a stethoscope and do some solid cat science to gather empirical evidential support for all cats having hearts. This evidence gives us reasonable confidence that the disjunctive statement is true. Since the original simple hypothesis logically implies the disjunction, evidence that cats have hearts gives support for the hypothesis that everything happens by the will of Elvis. This is a fun game (like Hempel’s crows) in the logic of confirmation, and those who have studied it will instantly see the ruse. But ICR has dedicated half its research agenda to it, apparently to deceive its adherents.
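
A minimal formalization of the ploy may help those who haven’t seen it before (the notation is mine, not ICR’s). Let H be “everything happens by the will of Elvis,” C be “all cats have hearts,” and E be the stethoscope evidence:

\[
H \;\Rightarrow\; (H \lor C) \qquad \text{(disjunction introduction, trivially valid)}
\]

\[
E \text{ confirms } C, \text{ and therefore confirms } (H \lor C)
\]

\[
\text{Ploy: since } H \Rightarrow (H \lor C), \text{ conclude that } E \text{ confirms } H
\]

The slide occurs in the last step, which assumes that evidence confirming a statement also confirms every hypothesis that entails that statement. Grant that rule and any observation can be made to “confirm” any hypothesis whatsoever.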

The creationist research agenda is mostly aimed at negating evolution and at large philosophical matters. Where it deals with small and specific scientific questions – analogous to cat hearts in the above example – the answers to those questions don’t in any honest sense provide evidentiary support for divine creation.

If anything fails the test of being valid science, Creation Science does. Yet popular arguments that attempt to logically dismiss it from the sciences seem prejudiced or ill motivated. As discussed in the last post, fair and honest demarcation is not so simple. This may be a case where we have to take the stance of Justice Potter Stewart, who, when judging whether the French film The Lovers was pornography, said “I shall not today attempt further to define [it], but I know it when I see it, and this is not it.”

To be continued.



Nobel Laureates Stoop to the Level of Greenpeace

Over 100 Nobel laureates signed a letter urging Greenpeace to stop opposing genetically modified organisms (GMOs). The letter specifically addresses golden rice, a genetically engineered crop designed to reduce vitamin A deficiency, which causes blindness in children of the developing world.

My first thought is to endorse any effort against the self-obsessed, romantic dogmatism of Greenpeace. But that may be a bit hasty.

The effort behind the letter was organized by Sir Richard Roberts, Chief Scientific Officer of New England Biolabs, and Phillip Sharp, winner of the 1993 Nobel Prize in Physiology or Medicine for the discovery that genes in eukaryotes are not contiguous strings and contain introns. UC Berkeley’s Randy Schekman, professor of cell and developmental biology and 2013 Nobel laureate, also signed the letter.

I expect Roberts, Sharp, Schekman and other signers are highly qualified to offer an opinion on the safety of golden rice. And I suspect they’re right about Greenpeace. But I think the letter is a terrible move for science.

Of the 110 Nobel laureate signers as of today, 26 are physicists and 34 are chemists. Laureates in Peace, Literature, and Economics are also on the list. It’s possible that a physicist or an economist might be highly skilled in judging the safety of golden rice, but I doubt that most Nobel winners who signed that letter are more qualified than the average molecular biologist without a Nobel Prize.

Scientists, more than most folk, should be aware that consensus should not be recruited to support a theory. Instead, consensus should occur only when the last skeptic is dragged, kicking and screaming, over the evidence, then succumbing to the same explanatory theory held by peers. That clearly didn’t happen with Roberts’ campaign and argument from authority.

Also, if these Nobel-winning scientists had received slightly less specialized educations, they might see a terrible irony here. They naively attempt to sidestep Hume’s Guillotine. That is, by thinking that scientific knowledge allows deriving an “ought” statement from an “is” statement (or collection of scientific facts), they indulge in ethical naturalism and are exposed to the naturalistic fallacy. And in a very literal sense, ethical naturalism is exactly the delusion under which Greenpeace operates.

Each day I wonder how many things I am dead wrong about. – Jim Harrison

Great Innovative Minds: A Discord on Method

Great minds do not think alike. Cognitive diversity has served us well. That’s not news to those who study innovation; but I think you’ll find this to be a different take on the topic, one that gets at its roots.

The two main figures credited with setting the scientific revolution in motion did not agree at all on what the scientific method actually was. It’s not that they differed on the finer points; they disagreed on the most basic aspect of what it meant to do science – though they didn’t yet use that term. At the time of Francis Bacon and Rene Descartes, there were no scientists. There were natural philosophers. This distinction is important for showing just how radical and progressive Descartes and Bacon were.

'Descartes" In Discourse on Method, Descartes argued that philosophers, over thousands of years of study, had achieved absolutely nothing. They pursued knowledge, but they had searched in vain. Descartes shared some views with Aristotle, but denied Aristotelian natural philosophy, which had been woven into Christian beliefs about nature. For Aristotle, rocks fell to earth because the natural order is for rocks to be on the earth, not above it – the Christian version of which was that it was God’s plan. In medieval Europe truths about nature were revealed by divinity or authority, not discovered. Descartes and Bacon were both devout Christians, but believed that Aristotelian philosophy of nature had to go. Observing that there is no real body of knowledge that can be claimed by philosophy, Descartes chose to base his approach to the study of nature on mathematics and reason. A mere 400 years after Descartes, we have trouble grasping just how radical this notion was. Descartes believed that the use of reason could give us knowledge of nature, and thus give us control over nature. His approach was innovative, in the broad sense of that term, which I’ll discuss below. Observation and experience, however, in Descartes’ view, could be deceptive. They had to be subdued by pure reason. His approach can be called rationalism. He sensed that we could use rationalism to develop theories – predictive models – with immense power, which would liberate mankind. He was right. 

Francis Bacon, Descartes’ slightly older counterpart in the scientific revolution, was a British philosopher and statesman who became attorney general in 1613 under James I. He is now credited with being the father of empiricism, the hands-on, experimental basis for modern science, engineering, and technology. Bacon believed that acquiring knowledge of nature had to be rooted in observation and sensory experience alone. Do experiments and then decide what it means. Infer conclusions from the facts. Bacon argued that we must quiet the mind and apply a humble, mechanistic approach to studying nature and developing theories. Reason biases observation, he said. In this sense, the theory-building models of Bacon and Descartes were almost completely opposite. I’ll return to Bacon after a clarification of terms needed to make a point about him.

Innovation has many meanings. Cicero said he regarded it with great suspicion. He saw innovation as the haphazard application of untested methods to important matters. For Cicero, innovators were prone to understating the risks and overstating the potential gains to the public, while the innovators themselves had a more favorable risk/reward quotient. If innovation meant dictatorship for life for Julius Caesar after 500 years of self-governance by the Roman people, Cicero’s position might be understandable.

Today, innovation usually applies specifically to big changes in commercial products and services, involving better consumer value, whether by new features, reduced prices, reduced operator skill level, or breaking into a new market. Peter Drucker, Clayton Christensen and the tech press use innovation in roughly this sense. It is closely tied to markets, and is differentiated from invention (which may not have market impact), improvement (may be merely marginal), and discovery.

That business-oriented definition of innovation is clear and useful, but it leaves me with no word for what earlier generations meant by innovation. In a broader sense, it seems fair that innovation also applies to what vanishing-point perspective brought to art during the Renaissance. John Locke, a follower of both Bacon and Descartes, and later Thomas Jefferson and crew, conceived of the radical idea that a nation could govern itself by the application of reason. Discovery, invention, and improvement don’t seem to capture the work of Locke and Jefferson either; innovation seems the best fit. So for discussion purposes, I’ll call this innovation in the broader sense, as opposed to the narrower sense, where it’s tied directly to markets.

In the broader sense, Descartes was the innovator of his century. But in the narrow sense (the business and markets sense), Francis Bacon can rightly be called the father of innovation – and its first vocal advocate. Bacon envisioned a future in which natural philosophy (later called science) could fuel industry, prosperity, and human progress. Again, it’s hard to grasp how radical this was: in those days the dominant view was that mankind had reached its prime in ancient times and was on a downhill trajectory. Bacon’s vision was a real departure from the reigning view that philosophy, including natural philosophy, was the stuff of the mind and the library, not a call to action or a route to improving life. Historian William Hepworth Dixon wrote in 1862 that everyone who rides in a train, sends a telegram, or undergoes a painless surgery owes something to Bacon. In 1620, in The Great Instauration, Bacon made a claim unprecedented in the post-classical world:

“The explanation of which things, and of the true relation between the nature of things and the nature of the mind … may spring helps to man, and a line and race of inventions that may in some degree subdue and overcome the necessities and miseries of humanity.”

In Bacon’s view, such explanations would stem from a mechanistic approach to investigation – one that steers clear of four classes of dogma, which he called idols. Idols of the tribe are the ambient cultural prejudices of a people; he cites our tendency to respond more strongly to positive evidence than to negative evidence, even when both are equally present – we leap to conclusions. Idols of the cave are the individual’s own preconceptions, which must be overcome. Idols of the theater are dogmatic academic beliefs and outmoded philosophies; and idols of the marketplace are the prejudices that stem from social interactions, specifically semantic equivocation and terminological disputes.

Descartes realized that if you were to follow Bacon’s method of fact collecting strictly, you’d never get anything done. Without reasoning out some initial theoretical model, you could collect unrelated facts forever with little chance of developing a usable theory. Descartes also saw a flaw in Bacon’s logic that he considered fatal. Bacon’s method (pure empiricism) commits the logical sin of affirming the consequent: the hypothesis “if A then B” is not made true by any number of observations of B, because C, D, or E (and infinitely more letters) might also cause B in the absence of A. This fallacy had been well documented by the ancient Greeks, whom Bacon and Descartes had both studied. Descartes pressed on with rationalism, developing tools like analytic geometry along the way.
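
To make the fallacy concrete, here is the schema in modern propositional notation – a minimal sketch of my own, not anything Bacon or Descartes actually wrote:

% Valid: given A→B and an observation of A, B follows.
\[
A \rightarrow B,\quad A \;\vdash\; B \qquad \text{(modus ponens – valid)}
\]
% Invalid: given A→B and an observation of B, A does not follow,
% since some other cause C (with C→B) could equally produce B.
\[
A \rightarrow B,\quad B \;\nvdash\; A \qquad \text{(affirming the consequent – invalid)}
\]

Every confirming observation of B fits the second, invalid pattern, which is why induction can support a theory but never prove it.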

Interestingly, both Bacon and Descartes were, from our perspective, rather miserable scientists. Bacon denied Copernicanism, refused to accept Kepler’s conclusion that planetary orbits were elliptical, and argued against William Harvey’s conclusion that the heart pumps blood through the body in a circulatory system. Likewise, by avoiding empiricism, Descartes reached some very wrong conclusions about space, matter, souls, and biology, even arguing that non-human animals must be considered machines, not organisms. But their failings were corrected, in time, by the very approaches to investigation they inaugurated. The tension between their approaches didn’t go unnoticed by their successors. Isaac Newton took a lot from Bacon and a little from Descartes; his rival Gottfried Leibniz took a lot from Descartes and a little from Bacon. Both were wildly successful. Science made the best of it, striving for deductive logic where possible while accepting the problems of Baconian empiricism. Despite its reliance on affirming the consequent, inductive science seems to work rather well, especially when theories remain open to revision.

Bacon’s idols seem as relevant to the boardroom as they were to the court of James I. Seekers of innovation, whether in the classroom or in the enterprise, might do well to consider the approaches and virtues of Bacon and Descartes – to contrast and fuse rationalism and observation. Bacon and Descartes envisioned a brighter future through creative problem-solving. They broke the bonds of dogma and showed that a new route forward was possible. Let’s keep moving, with a diversity of perspectives, interpretations, and predictive models.


Paul Feyerabend – The Worst Enemy of Science

Moved to Paul Feyerabend, The Worst Enemy of Science

 
