Archive for category History of Science

Settled State Science

"Science is the belief in the ignorance of experts," said Richard Feynman in 1969. Does that describe science today, or is science a state and/or academic institution that dispenses truth?

State science and science herded by well-funded and tenured academics have led to some mammoth missteps, muddles and misdeeds in the application of that science to policy.

It may be inaccurate to say, as Edwin Black did in War Against the Weak, that Hitler's racial hygiene ideology was imported directly and solely from American eugenics. But that Hitler's race policy was heavily inspired by American eugenics is very well documented, as is the overwhelming support given eugenics by our reigning politicians and educators. For example, Hitler absolutely did borrow much of the 1933 Law for the Prevention of Hereditarily Diseased Offspring from the draft sterilization law drawn up for US states by Harry Laughlin, then Superintendent of the Eugenics Record Office, in affiliation with the American Association for the Advancement of Science (AAAS).

Academic progressives and the educated elite fawned over the brilliance of eugenics too. Race science was deemed settled, and that science had to be embodied in policy to change the world for the better. John Maynard Keynes and Woodrow Wilson loved it. Harvard was all over it too. Stanford's first president, David Starr Jordan, and Yale's famed economist and social reformer Irving Fisher were leaders of the eugenics movement. All the trappings of science were there: impressive titles like "Some Racial Peculiarities of the Negro Brain" (American Journal of Anatomy, 1906) appeared in sciencey journals. In fact, the prestigious journal Science covered eugenics in its lead story of Oct. 7, 1921.

But 1906 was oh so very long ago, right? Was eugenics a one-off? The lobotomy/leucotomy craze of the 1950s saw similar endorsement from the political and academic elite. A more recent, less grotesque, but equally bad and unjustified example of state science was the low-fat craze of the 1980s and the war on cholesterol.

Last month the California Assembly passed AB 2098, designating "the dissemination or promotion of misinformation or disinformation related to the SARS-CoV-2 coronavirus, or COVID-19 as unprofessional conduct," for which MDs could be subjected to loss of license. The bill defines misinformation as "false information that is contradicted by contemporary scientific consensus."

The U.S. Department of Health & Human Services (HHS) now states, “If your child is 6 months or older, you can now help protect them from severe COVID illness by getting them a COVID vaccine.” That may be consensus alright. I cannot find one shred of evidence to support the claim. Can you?

___

“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.” – Paul Feyerabend, Against Method, 1975


Innumeracy and Overconfidence in Medical Training

Most medical doctors, having ten or more years of education, can’t do simple statistics calculations that they were surely able to do, at least for a week or so, as college freshmen. Their education has let them down, along with us, their patients. That education leaves many doctors unquestioning, unscientific, and terribly overconfident.

A disturbing lack of doubt has plagued medicine for thousands of years. Galen, at the time of Marcus Aurelius, wrote, "It is I, and I alone, who has revealed the true path of medicine." Galen disdained empiricism. Why bother with experiments and observations when you own the truth? Galen's scientific reasoning sounds oddly similar to modern junk science, armed with abundant confirming evidence but no interest in falsification. Galen had plenty of confirming evidence: "All who drink of this treatment recover in a short time, except those whom it does not help, who all die. It is obvious, therefore, that it fails only in incurable cases."

Galen was still at work 1500 years later when Voltaire wrote that the art of medicine consisted of entertaining the patient while nature takes its course. One of Voltaire’s novels also described a patient who had survived despite the best efforts of his doctors. Galen was around when George Washington died after five pints of bloodletting, a practice promoted up to the early 1900s by prominent physicians like Austin Flint.

But surely medicine was mostly scientific by the 1900s, right? Actually, 20th century medicine was dragged kicking and screaming to scientific methodology. In the early 1900s Ernest Amory Codman of Massachusetts General proposed keeping track of patients and rating hospitals according to patient outcome. He suggested that a doctor's reputation and social status were poor measures of a patient's chance of survival. He wanted the track records of doctors and hospitals to be made public, allowing healthcare consumers to choose suppliers based on statistics. For this, and for his harsh criticism of those who scoffed at his ideas, Codman was tossed out of Mass General, lost his post at Harvard, and was suspended from the Massachusetts Medical Society. Public outcry brought Codman back into medicine, and much of his "end results system" was put in place.

20th century medicine also fought hard against the concept of controlled trials. Austin Bradford Hill introduced the concept to medicine in the mid 1920s. But in the mid 1950s Dr. Archie Cochrane was still fighting valiantly against what he called the God Complex in medicine, which was basically the ghost of Galen: no one should question the authority of a physician. Cochrane wrote that far too much of medicine lacked any semblance of scientific validation, any knowledge of which treatments actually worked. He wrote that the medical establishment was hostile to the idea of controlled trials. Cochrane fought this into the 1970s, authoring Effectiveness and Efficiency: Random Reflections on Health Services in 1972.

Doctors aren’t naturally arrogant. The God Complex is passed along during the long years of an MD’s education and internship. That education includes rites of passage in an old boys’ club that thinks sleep deprivation builds character in interns, and that female med students should make tea for the boys. Once on the other side, tolerance of archaic norms in the MD culture seems less offensive to the inductee, who comes to accept the system. And the business of medicine, the way it’s regulated, and its control by insurance firms push MDs to view patients as a job to be done cost-effectively. Medical arrogance is in a sense encouraged by recovering patients who might see doctors as savior figures.

As Daniel Kahneman wrote, “generally, it is considered a weakness and a sign of vulnerability for clinicians to appear unsure.” Medical overconfidence is encouraged by patients’ preference for doctors who communicate certainties, even when uncertainty stems from technological limitations, not from doctors’ subject knowledge. MDs should be made conscious of such dynamics and strive to resist inflating their self-importance. As Allan Berger wrote in Academic Medicine in 2002, “we are but an instrument of healing, not its source.”

Many in medical education are aware of these issues. The calls for medical education reform – both content and methodology – are desperate, but they are eerily similar to those found in a 1924 JAMA article, Current Criticism of Medical Education.

Covid-19 exemplifies the aspect of medical education I find most vile. Doctors can’t do elementary statistics and probability, and their cultural overconfidence renders them unaware of how critically they need that missing skill.

A 1978 study, brought to the mainstream by psychologists like Kahneman and Tversky, showed how few doctors know the meaning of a positive diagnostic test result. More specifically, they’re ignorant of the relationship between the sensitivity and specificity (true positive and true negative rates) of a test and the probability that a patient who tested positive has the disease. This lack of knowledge has real consequences in certain situations, particularly when the base rate of the disease in a population is low. The resulting probability judgments can be wrong by factors of hundreds or thousands.

In the 1978 study (Casscells et al.) doctors and medical students at Harvard teaching hospitals were given a diagnostic challenge. “If a test to detect a disease whose prevalence is 1 out of 1,000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease?” As described, the true positive rate of the diagnostic test is 95%. This is a classic conditional-probability quiz from the second week of a probability class. Being right requires a) knowing Bayes’ theorem and b) being able to multiply and divide. Not being confidently wrong requires only one thing: scientific humility – the realization that all you know might be less than all there is to know. The correct answer is 2% – there’s a 2% likelihood the patient has the disease. The most common response, by far, in the 1978 study was 95%, which is wrong by 4750%. Only 18% of doctors and med students gave the correct response. The study’s authors observed that in the group tested, “formal decision analysis was almost entirely unknown and even common-sense reasoning about the interpretation of laboratory data was uncommon.”
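
For the arithmetic-inclined, here’s a minimal sketch of that calculation in Python. The inputs follow the reading above – prevalence 1/1,000, false positive rate 5%, and an assumed 95% sensitivity – not any materials from the original study:

```python
# Bayes' theorem applied to the Casscells question.
# Assumed figures, per the reading above: prevalence 1/1,000,
# false positive rate 5%, sensitivity (true positive rate) 95%.
prevalence = 0.001
sensitivity = 0.95          # P(positive test | disease)
false_positive_rate = 0.05  # P(positive test | no disease)

# Overall probability of a positive test in the population
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# P(disease | positive test), by Bayes' theorem
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # about 1.9%, i.e. ~2%
```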

As mentioned above, this story was heavily publicized in the 80s. It was widely discussed by engineering teams, reliability departments, quality assurance groups and math departments. But did it impact medical curricula, problem-based learning, diagnostics training, or any other aspect of the way med students were taught? One might have thought yes, if for no other reason than to avoid criticism by less prestigious professions having either the relevant knowledge of probability or the epistemic humility to recognize that the right answer might be far different from the obvious one.

Similar surveys were done in 1984 (David M. Eddy) and in 2003 (Kahan, Paltiel) with similar results. In 2013, Manrai and Bhatia repeated Casscells’ 1978 survey with the exact same wording, getting trivially better results: 23% answered correctly. They suggested that medical education “could benefit from increased focus on statistical inference.” That was 35 years after Casscells, during which the phenomenon was popularized by Daniel Kahneman from the perspective of base-rate neglect, by Philip Tetlock from the perspective of overconfidence in forecasting, and by David Epstein from the perspective of the tyranny of specialization.

Over the past decade, I’ve asked the Casscells question of doctors I’ve known or met, where I didn’t think it would get me thrown out of the office or booted from a party. My results were somewhat worse. Of about 50 MDs, four answered correctly or were aware that they’d need to look up the formula but knew that the answer was much less than 95%. One was an optometrist, one a career ER doc, one an allergist-immunologist, and one a female surgeon – all over 50 years old, incidentally.

Despite the efforts of a few radicals in the Accreditation Council for Graduate Medical Education and some post-Flexnerian reformers, medical education remains, as Jonathan Bush points out in Tell Me Where It Hurts, basically a 2,000-year-old subject-based and lecture-based model developed at a time when only the instructor had access to a book. Despite those reformers, basic science has actually diminished in recent decades, leaving many physicians with less of a grasp of scientific methodology than that held by Ernest Codman in 1915. Medical curriculum guardians, for the love of God, get over your stodgy selves and replace the calculus badge with applied probability and statistical inference from diagnostics. Place it in the curriculum later than pre-med, and weave it into some of that flipped-classroom, problem-based learning you advertise.


Intertemporal Choice, Delayed Gratification and Empty Marshmallow Promises

Everyone knows about the marshmallow test. Kids were given a marshmallow and told that they’d get a second one if they resisted eating the first one for a while. The experimenter then left the room and watched the kids endure marshmallow temptation. Years later, the kids who had been able to fight temptation were found to have higher SAT scores, better jobs, less addiction, and better physical fitness than those who succumbed. The meaning was clear: early self-control, whether innate or taught, is key to later success. The test results and their interpretation were, scientifically speaking, too good to be true. And in most ways they weren’t true.

That wrinkle doesn’t stop the marshmallow test from being trotted out weekly on LinkedIn and social sites where experts and moralists opine. That trotting out comes with behavioral economics lessons, dripping with references to Kahneman, Ariely and the like about our irrationality as we face intertemporal choices, as they’re known in the trade. When adults choose an offer of $1,000 today over an offer of $1,400 to be paid in one year, even when they have no pressing financial need, they are deemed irrational or lacking self-control, like the marshmallow kids.

The famous marshmallow test was done by Walter Mischel from the 1960s through the 1980s. Not only did subsequent marshmallow tests fail to show much correlation between waiting for the second marshmallow and a better life, but, more importantly, similar tests for at least twenty years have pointed to a more salient result, one which Mischel was aware of, but which got lost in popular retelling. Understanding the deeper implications of the marshmallow tests, along with a more charitable view of kids who grabbed the early treat, requires digging down into the design of experiments, Bayesian reasoning, and the concept of risk neutrality.

Intertemporal choice tests like the marshmallow test involve choices between options with different payoffs at different times. We face these choices often. And when we face them in the real world, our decision process is informed by memories and judgments about our past choices and their outcomes. In Bayesian terms, our priors incorporate this history. In real life, we are aware that all contracts, treaties, and promises for future payment come with a finite risk of default.

In intertemporal choice scenarios, the probability of the deferred payment actually occurring is always less than 100%. That probability is rarely known and is often unknowable. Consider choices A and B below. This is how the behavioral economists tend to frame the choices.

Choice A: $1,000 now
Choice B: $1,400 paid next year

But this framing ignores an important feature of any real-world, non-hypothetical intertemporal choice: the probability of actually receiving choice B’s deferred payment is always less than 100%. In the above example, even risk-neutral choosers (those indifferent among choices having the same expected value) would pick choice A over choice B if they judge the probability of non-default (actually getting the deferred payment) to be less than a certain amount.

Choice A: $1,000 now; expected value = $1,000
Choice B: $1,400 in one year, P = 0.99; expected value = $1,386
Choice C: $1,400 in one year, P = 0.70; expected value = $980

As shown above, if choosers believe the deferred-payment likelihood to be less than about 71% (the break-even probability, $1,000/$1,400 ≈ 0.714), they cannot be called irrational for choosing choice A.
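
A minimal sketch of that arithmetic in Python, using the hypothetical payoffs and probabilities from the table above:

```python
# Expected values for the hypothetical choices above, and the break-even
# payment probability below which a risk-neutral chooser should take the
# immediate offer. Time discounting is ignored, as in the example above.
immediate = 1_000
deferred = 1_400

for p_payment in (0.99, 0.70):
    ev = deferred * p_payment
    print(f"P(payment) = {p_payment:.2f}: expected value = ${ev:,.0f}")

break_even = immediate / deferred
print(f"Taking the $1,000 now is rational whenever P(payment) < {break_even:.3f}")
```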

Lack of Self-Control – or Rational Intuitive Bayes?

Now for the final, most interesting twist in tests like the marshmallow test, almost universally ignored by those who cite them. Unlike my example above where the wait time is one year, in the marshmallow tests, the time period during which the subject is tempted to eat the first marshmallow is unknown to the subject. Subjects come into the game with a certain prior – a certain belief about the probability of non-default. But, as intuitive Bayesians, these subjects update the probability they assign to non-default, during their wait, based on the amount of time they have been waiting. The speed at which they revise their probability downward depends on their judgment of the distribution of wait times experienced in their short lives.

If kids in the marshmallow tests have concluded, based on their experience, that adults are not dependable, choice A makes sense; they should immediately eat the first marshmallow, since the second one may never materialize. Kids who endure temptation for a few minutes only to give in and eat their first marshmallow are seen as both irrational and incapable of self-control.

But if those kids adjust their probability judgments that the second marshmallow will appear based on a prior distribution that is not a normal distribution (e.g., if as intuitive Bayesians they model wait times imposed by adults as following a power-law distribution), then their eating the first marshmallow after some test-wait period makes perfect sense. They rightly conclude, on the basis of available evidence, that wait times longer than some threshold period may be very long indeed. These kids aren’t irrational, and self-control is not their main problem. Their problem is that they have been raised by irresponsible adults who both display a tendency to default on promised payments and fulfill promises late, with delays obeying power-law distributions.
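
To see why the shape of the prior matters, here’s a small simulation sketch in Python. The wait-time distributions and their parameters are purely illustrative assumptions, not data from any marshmallow study:

```python
import numpy as np

# Illustrative wait-time models, in minutes. The parameters are made up.
rng = np.random.default_rng(42)
n = 1_000_000
light_tailed = np.abs(rng.normal(loc=10, scale=3, size=n))  # roughly normal waits
heavy_tailed = (rng.pareto(a=1.5, size=n) + 1) * 2          # power-law (Pareto) waits

def expected_remaining_wait(waits, t):
    """Mean additional wait, given you've already waited t with no marshmallow."""
    still_unpaid = waits[waits > t]
    return still_unpaid.mean() - t

for t in (2, 5, 10, 20):
    print(f"after {t:>2} min: light-tailed model expects "
          f"{expected_remaining_wait(light_tailed, t):6.1f} more min; "
          f"power-law model expects {expected_remaining_wait(heavy_tailed, t):6.1f} more")
```

Under the light-tailed model, the expected remaining wait shrinks as the wait drags on; under the power-law model it grows (for a Pareto with shape α, roughly t/(α–1) after waiting t), which is precisely the condition under which giving up past some threshold is the rational move.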

Subsequent marshmallow tests have verified this. In 2013, psychologist Laura Michaelson, after running more sophisticated versions of the marshmallow test, concluded that “implications of this work include the need to revise prominent theories of delay of gratification.” Actually, tests going back over 50 years have shown similar results (A.R. Mahrer, The role of expectancy in delayed reinforcement, 1956).

In three recent posts (first, second, third) I suggested that behavioral economists and business people who follow them are far too prone to seeing innate bias everywhere, when they are actually seeing rational behavior through their own bias. This is certainly the case with the common misuse of the marshmallow tests. Interpreting these tests as rational behavior in light of subjects’ experience is a better explanatory theory, one more consistent with the evidence, and one that coheres with other explanatory observations, such as humans’ capacity for intuitive Bayesian belief updates.

Charismatic pessimists about human rationality twist the situation so that their pessimism is framed as good news, in the sense that they have at least illuminated an inherent human bias. That pessimism, however cheerfully expressed, is both misguided and harmful. Their failure to mention the more nuanced interpretation of marshmallow tests is dishonest and self-serving. The problem we face is not innate, and it is mostly curable. Better parenting can fix it. The marshmallow tests measure parents more than they measure kids.

Walter Mischel died in 2018. I heard his 2016 talk at the Long Now Foundation in San Francisco. He acknowledged the relatively weak correlation between marshmallow test results and later success, and he mentioned that descriptions of his experiments in popular press were rife with errors. But his talk still focused almost solely on the self-control aspect of the experiments. He missed a great opportunity to help disseminate a better story about the role of trustworthiness and reliability of parents in delayed gratification of children.

 


 

A better description of the way we really work through intertemporal choices would require going deeper into risk neutrality and how, even for a single person, our departure from risk neutrality – specifically risk-appetite skewness – varies between situations and across time. I have enjoyed doing some professional work in that area. Getting it across in a blog post is probably beyond my current blog-writing skills.

 

 


The Naming and Numbering of Parts

Counting Crows – One for Sorrow, Two for Joy…

Remember in junior high when Mrs. Thistlebottom made you memorize the nine parts of speech? That was to help you write an essay on what William Blake might have been thinking when he wrote The Tyger. In Biology, Mr. Sallow taught you that nature was carved up into seven taxonomic categories (domains, kingdoms, phyla, etc.) and that there were five kingdoms. If your experience was similar to mine, your Social Studies teacher then had you memorize the four causes of the Civil War.

Four causes? There I drew the line. Parts of speech might be counted with integers, along with the taxa and the five kingdoms, but not causes of war. But in 8th grade I lacked the confidence and the vocabulary to make my case. It bugs me still, as you see. Assigning exactly four causes to the Civil War was a projection of someone’s mental model of the war onto the real war, which could rightly have been said to have any number of causes. Causes are rarely the sort of things that nature numbers. And as it turns out, neither are parts of speech, levels of taxa, or numbers of kingdoms. Life isn’t monophyletic. Is Archaea a domain or a kingdom? Plato is wrong again; you cannot carve nature at her joints. Life’s boundaries are fluid.

Can there be any reason that the social sciences still insist that their world can be carved at its joints?  Are they envious of the solid divisions of biology but unaware that these lines are now understood to be fictions, convenient only at the coarsest levels of study?

A web search reveals that many causes and complex phenomena in the realm of social science can be counted, even in peer-reviewed papers. Consider the three causes each of crime, of the Great Schism in Christianity, and of human trafficking in Africa. Or the four kinds each of ADHD (Frontiers in Psychology), Greek love, and behavior (Current Directions in Psychological Science). Or the five effects each of unemployment, positive organizational behavior, and hallmarks of Agile Management (McKinsey).

In each case it seems that experts, by using the definite article “the” before their cardinal qualifier, might be asserting that their topic has exactly that many causes, kinds, or effects. And that the precise number they provide is key to understanding the phenomenon. Perhaps writing a technical paper titled simply Four Kinds of ADHD (no “The”) might leave the reader wondering if there might in fact be five kinds, though the writer had time to explore only four. Might there be highly successful people with eight habits?

The latest Diagnostic and Statistical Manual of Mental Disorders (DSM-5), issued by the American Psychiatric Association, lists over 300 named conditions, not one of which has been convincingly tied to a failure of neurotransmitters or any particular biological state. Ten years in the making, the DSM did not specify that its list was definitive. In fact, to its credit, it acknowledges that the listed conditions overlap along a continuum.

Still, assigning names to 300 locations along a spectrum – a better visualization might be across an n-dimensional space – does not mean you’ve found 300 kinds of anything. Might exploring the trends, underlying systems, processes, and relationships between symptoms be more useful?

A few think so, at least. Thomas Insel, former director of the NIMH, wrote that he was doubtful of the DSM’s usefulness. Insel said that the DSM’s categories amounted to consensus about clusters of clinical symptoms, not any empirical laboratory measure. They were equivalent, he said, “to creating diagnostic systems based on the nature of chest pain or the quality of fever.” As Kurt Gray, psychologist at UNC, put it, “intuitive taxonomies obscure the underlying processes of psychopathology.”

Meanwhile in business, McKinsey consultants still hold that business interactions can be optimized around the four psychological functions – sensation, intuition, feeling, and thinking – despite the pitifully low evidential support for the underlying theory (Myers-Briggs).

The Naming of Parts

“Today we have naming of parts. Yesterday, We had daily cleaning…” Henry Reed, Naming of Parts, 1942.

Richard Feynman told a story of being a young boy and noticing that when his father jerked his wagon containing a ball forward, the ball appeared to move backward in the wagon. Feynman asked why it did that. His dad said that no one knows, but that “we call it inertia.”

Feynman also talked about walking with his father in the woods. His dad, a uniform salesman, said, “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird, absolutely nothing about the bird. You only know something about people – what they call the bird.” Feynman said they then talked about the bird’s pecking and its feathers.

Back at the American Psychiatric Association, we find controversy over whether Premenstrual Dysphoria Disorder (PMDD) is an “actual disorder” or merely a strong case of Premenstrual Syndrome (PMS).

Science gratifies us when it tries to explain things, not merely to describe them, or, worse yet, to merely name them. That’s true despite all the logical limitations to scientific knowledge, like the underdetermination of theory by evidence and the problem of induction that David Hume made famous in 1739.

Carl Linnaeus, active at the same time as Hume, devised the system Mr. Sallow taught you in 8th grade Biology. It still works, easing communication around manageable clusters of organisms and demarcating groups of critters that are endangered. But Linnaeus was dead wrong about the big picture: “All the species recognized by Botanists came forth from the Almighty Creator’s hand, and the number of these is now and always will be exactly the same,” and “nature makes no jumps,” he wrote. So parroting Linnaeus’s approach to science will naturally lead to an impasse.

Social sciences (of which there are precisely nine), from anthropology to business management, might do well to recognize that their domains will never be as lean, orderly, or predictive as the hard sciences, and to strive for those sciences’ taste for evidence rather than venerating their own ontologies and taxonomies.

Now why do some people think that labeling a thing explains the thing? Because they fall prey to the Nominal Fallacy. Nudge.


One for sorrow,
Two for mirth
Three for a funeral,
Four for birth
Five for heaven
Six for hell
Seven for the devil,
His own self

 – Proverbs and Popular Sayings of the Seasons, Michael Aislabie Denham, 1864


Paul Feyerabend, The Worst Enemy of Science

“How easy it is to lead people by the nose in a rational way.”

A similarly named post I wrote on Paul Feyerabend seven years ago turned out to be my most popular post by far. Seeing it referenced in a few places has made me cringe, and made me face the fact that I failed to make my point. I’ll try to correct that here. I don’t remotely agree with the paper in Nature that called Feyerabend the worst enemy of science, nor do I side with the postmodernists who idolize him. I do find him to be one of the most provocative thinkers of the 20th century: brash, brilliant, and sometimes full of crap.

Feyerabend opened his profound Against Method by telling us to always remember that what he writes in the book does not reflect any deep convictions of his, but is intended “merely [to] show how easy it is to lead people by the nose in a rational way.” I.e., he was telling us what he thought we needed to hear more than what he necessarily believed. In his autobiography he wrote that for Against Method he had used older material but had “replaced moderate passages with more outrageous ones.” Those using and abusing Feyerabend today have certainly forgotten what this provocateur, who called himself an entertainer, told us always to remember about him in his writings.


Any who think Feyerabend frivolous should examine the scientific rigor in his analysis of Galileo’s work. Any who find him to be an enemy of science should actually read Against Method instead of reading about him, as quotes pulled from it can be highly misleading as to his intent. My communications with some of his friends after he died in 1994 suggest that while he initially enjoyed ruffling so many feathers with Against Method, he became angered and ultimately depressed over both critical reactions against it and some of the audiences that made weapons of it. In 1991 he wrote, “I often wished I had never written that fucking book.”

I encountered Against Method while searching through a library’s card catalog for an authority on the scientific method. I learned from Feyerabend that no set of methodological rules fits the great advances and discoveries in science. It’s obvious once you think about it. Pick a specific scientific method – say the hypothetico-deductive model – or any set of rules, and Feyerabend will name a scientific discovery that would not have occurred had the scientist, from Galileo to Feynman, followed that method, or any other.

Part of Feyerabend’s program was to challenge the positivist notion that in real science, empiricism trumps theory. Galileo’s genius, for Feyerabend, was allowing theory to dominate observation. In the Dialogue, Galileo wrote:

Nor can I ever sufficiently admire the outstanding acumen of those who have taken hold of this opinion and accepted it as true: they have, through sheer force of intellect, done such violence to their own senses as to prefer what reason told them over that which sensible experience plainly showed them to be the contrary.

For Feyerabend, against Popper and the logical positivists of the mid-1900s, Galileo’s case exemplified a need to grant theory priority over evidence. This didn’t sit well with the empiricist leanings of the post-war Western world. It didn’t sit well with most scientists or philosophers. Sociologists and literature departments loved it. It became fuel for the fire of relativism sweeping America in the ’70s and ’80s, and for the 1990s social constructivists eager to demote science to just another literary genre.

But in context, and in the spheres for which Against Method was written, many people – including Feyerabend’s peers from 1970s Berkeley, with whom I’ve had many conversations on the topic – conclude that the book’s goading style was a typical Feyerabendian corrective provocation to that era’s positivistic dogma.

Feyerabend distrusts the orthodoxy of the social practices of what Thomas Kuhn termed “normal science” – what scientific institutions do in their laboratories. Unlike their friend Imre Lakatos, Feyerabend distrusts any rule-based scientific method at all. Instead, Feyerabend praises scientific innovation and individual creativity. For Feyerabend, science in the mid-1900s had fallen prey to the “tyranny of tightly-knit, highly corroborated, and gracelessly presented theoretical systems.” What would he say if alive today?

As with everything in the philosophy of science in the late 20th century, some of the disagreement between Feyerabend, Kuhn, Popper and Lakatos revolved around miscommunication and sloppy use of language. The best known case of this was Kuhn’s inconsistent use of the term paradigm. But they all (perhaps least so Lakatos) talked past each other by failing to distinguish the different meanings of the word science, including:

  1. An approach or set of rules and methods for inquiry about nature
  2. A body of knowledge about nature
  3. An institution, culture, or community of scientists, including academic, government and corporate

Kuhn and Feyerabend in particular vacillated between meaning science as a set of methods and science as an institution. Feyerabend certainly was referring to an institution when he said that science was a threat to democracy and that there must be “a separation of state and science just as there is a separation between state and religious institutions.” Along these lines Feyerabend thought that modern institutional science resembles the church of Galileo’s day more than it resembles Galileo.

On the matter of state control of science, Feyerabend went further than Eisenhower did in his “military industrial complex” speech, even with the understanding that what Eisenhower was describing was a military-academic-industrial complex. Eisenhower worried that a government contract with a university “becomes virtually a substitute for intellectual curiosity.” Feyerabend took this worry further, writing that university research requires conforming to orthodoxy and “a willingness to subordinate one’s ideas to those of a team leader.” Feyerabend disparaged Kuhn’s normal science as dogmatic drudgery that stifles scientific creativity.

A second area of apparent miscommunication about the history and philosophy of science in the mid-1900s was the descriptive/normative distinction. John Heilbron, who was Kuhn’s grad student when Kuhn wrote The Structure of Scientific Revolutions, told me that Kuhn absolutely despised Popper, and not merely as a professional rival. Kuhn wanted to destroy Popper’s notion that scientists discard theories on finding disconfirming evidence. But Popper was describing ideally performed science; his intent was clearly normative. Kuhn’s work, said Heilbron (who doesn’t share my admiration for Feyerabend), was intended as normative only for historians of science, not for scientists. True, Kuhn felt that it was pointless to try to distinguish the “is” from the “ought” in science, but this does not mean he thought they were the same thing.

As with Kuhn’s use of paradigm, Feyerabend’s use of the term science risks equivocation. He drifts between methodology and institution to suit the needs of his argument. At times he seems to build a straw man of science in which science insists it creates facts as opposed to building models. Then again, on this matter (fact/truth vs. models as the claims of science) he seems to be more right about the science of 2019 than he was about the science of 1975.

While heavily indebted to Popper, Feyerabend, like Kuhn, grew hostile to Popper’s ideas of demarcation and falsification: “let us look at the standards of the Popperian school, which are still being taken seriously in the more backward regions of knowledge.” He eventually expanded his criticism of Popper’s idea of theory falsification to a categorical rejection of Popper’s demarcation theories and of Popper’s critical rationalism in general. Now from the perspective of half a century later, a good bit of the tension between Popper and both Feyerabend and Kuhn and between Kuhn and Feyerabend seems to have been largely semantic.

For me, Feyerabend seems most relevant today through his examination of science as a threat to democracy. He now seems right in ways that even he didn’t anticipate. He thought it a threat mostly in that science (as an institution) held complete control over what is deemed scientifically important for society. In contrast, people, as individuals or small competing groups, historically chose what counts as socially valuable. In this sense science bullied the citizen, thought Feyerabend. Today I think we see a more extreme example of bullying, in the case of global warming for example, in which government and institutionalized scientists decide not only what is important as a scientific agenda but what is important as energy policy and social agenda. Likewise, neuroscience tends to get too much of the spotlight in the complex social question of how primary education should be conducted. One recalls Lakatos’ concern about Kuhn’s confidence in the authority of “communities.” Lakatos had been imprisoned by Hungary’s Stalinist regime for revisionism. Through that experience he saw Kuhn’s “assent of the relevant community” as not much of a virtue if that community has excessive political power and demands that individual scientists subordinate their ideas to it.

As a tiny tribute to Feyerabend, about whom I’ve noted that caution is due when removing quotes from context, I’ll honor his provocative spirit by listing some of my favorite quotes, removed from context, to invite misinterpretation and misappropriation.

“The similarities between science and myth are indeed astonishing.”

“The church at the time of Galileo was much more faithful to reason than Galileo himself, and also took into consideration the ethical and social consequences of Galileo’s doctrine. Its verdict against Galileo was rational and just, and revisionism can be legitimized solely for motives of political opportunism.”

“All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’.”

“Revolutions have transformed not only the practices their initiators wanted to change but the very principles by means of which… they carried out the change.”

“Kuhn’s masterpiece played a decisive role. It led to new ideas. Unfortunately it also led to lots of trash.”

“First-world science is one science among many.”

“Progress has always been achieved by probing well-entrenched and well-founded forms of life with unpopular and unfounded values. This is how man gradually freed himself from fear and from the tyranny of unexamined systems.”

“Research in large institutes is not guided by Truth and Reason but by the most rewarding fashion, and the great minds of today increasingly turn to where the money is — which means military matters.”

“The separation of state and church must be complemented by the separation of state and science, that most recent, most aggressive, and most dogmatic religious institution.”

“Without a constant misuse of language, there cannot be any discovery, any progress.”

 

__________________

Photos of Paul Feyerabend courtesy of Grazia Borrini-Feyerabend

 

 



Which Is To Be Master? – Humpty Dumpty’s Research Agenda

Should economics, sociology or management count as science?

2500 years ago, Plato, in The Sophist, described a battle between the gods and the earth giants. The fight was over the foundations of knowledge. The gods thought knowledge came from innate concepts and deductive reasoning only. Euclid’s geometry was a perfect example – self-evident axioms plus deduced theorems. In this model, no experiments are needed. Plato explained that the earth giants, however, sought knowledge through earthly experience. Plato sided with the gods; and his opponents, the Sophists, sided with the giants. Roughly speaking, this battle corresponds to the modern tension between rationalism (the gods) and empiricism (the giants). For the gods, the articles of knowledge must be timeless, universal and certain. For the giants, knowledge is contingent, experiential, and merely probable.


Plato’s approach led the Greeks – Aristotle, most notably – to hold that rocks fall with speeds proportional to their weights, a belief that persisted for 2000 years until Galileo and his insolent ilk had the gall to test it. Science was born.

Enlightenment era physics aside, Plato and the gods are alive and well. Scientists and social reformers of the Enlightenment tried to secularize knowledge. They held that common folk could overturn beliefs with the right evidence. Empirical evidence, in their view, could trump any theory or authority. Math was good for deduction; but what’s good for math is not good for physics, government, and business management.

Euclidean geometry was still regarded as true – a perfect example of knowledge fit for the gods –  throughout the Enlightenment era. But cracks began to emerge in the 1800s through the work of mathematicians like Lobachevsky and Riemann. By considering alternatives to Euclid’s 5th postulate, which never quite seemed to fit with the rest, they invented other valid (internally consistent) geometries, incompatible with Euclid’s. On the surface, Euclid’s geometry seemed correct, by being consistent with our experience. I.e., angle sums of triangles seem to equal 180 degrees. But geometry, being pure and of the gods, should not need validation by experience, nor should it be capable of such validation.

Non-Euclidean Geometry rocked Victorian society and entered the domain of philosophers, just as Special Relativity later did. Hotly debated, its impact on the teaching of geometry became the subject of an entire book by conservative mathematician and logician Charles Dodgson. Before writing that book, Dodgson published a more famous one, Alice in Wonderland.

The mathematical and philosophical content of Alice has been analyzed at length. Alice’s dialogue with Humpty Dumpty is a staple of semantics and semiotics, particularly Humpty’s use of stipulative definition. Humpty first reasons that “unbirthdays” are better than birthdays, there being so many more of them, and then proclaims glory. Picking up that dialogue, Humpty announces,

‘And only one [day of the year] for birthday presents, you know. There’s glory for you!’

‘I don’t know what you mean by “glory”,’ Alice said.

Humpty Dumpty smiled contemptuously. ‘Of course you don’t — till I tell you. I meant “there’s a nice knock-down argument for you!”‘

‘But “glory” doesn’t mean “a nice knock-down argument”,’ Alice objected.

‘When I use a word,’ Humpty Dumpty said, in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’

‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’

‘The question is,’ said Humpty Dumpty, ‘which is to be master — that’s all.’

Humpty is right that one can redefine terms at will, provided a definition is given. But the exchange hints at a deeper notion. While having a private language is possible, it is also futile, if the purpose of language is communication.

Another aspect of this exchange gets little coverage by analysts. Dodgson has Humpty emphasize the concept of argument (knock-down), nudging us in the direction of formal logic. Humpty is surely a stand-in for the proponents of non-Euclidean geometry, against whom Dodgson is strongly (though wrongly – more below) opposed. Dodgson was also versed in Greek philosophy and Platonic idealism. Humpty is firmly aligned with Plato and the gods. Alice sides with Plato’s earth giants, the sophists. Humpty’s question, which is to be master?, points strongly at the battle between the gods and the giants. Was this Dodgson’s main intent?

When Alice first chases the rabbit down the hole, she says that she fell for a long time, and reasons that the hole must be either very deep or that she fell very slowly. Dodgson, schooled in Newtonian mechanics, knew, unlike the ancient Greeks, that all objects fall at the same speed. So the possibility that Alice fell slowly suggests that even the laws of nature are up for grabs. In science, we accept that new evidence might reverse what we think are the laws of nature, yielding a scientific revolution (paradigm shift).

In trying to vindicate “Euclid’s masterpiece,” as Dodgson called it, he is trying to free himself from an unpleasant logical truth: within the realm of math, we have no basis to think the world is Euclidean rather than Lobachevskian. He’s trying to rescue conservative mathematics (Euclidean geometry) by empirical means. Logicians would say Dodgson is confusing a synthetic and a posteriori proposition with one that is analytic and a priori. That is, justification of the 5th postulate can’t rely on human experience, observations, or measurements. Math and reasoning feed science; but science can’t help math at all. Dodgson should have known better. In the battle between the gods and the earth giants, experience can only aid the giants, not the gods. As historian of science Steven Goldman put it, “the connection between the products of deductive reasoning and reality is not a logical connection.” If mathematical claims could be validated empirically then they wouldn’t be timeless, universal and certain.

While Dodgson was treating math as a science, some sciences today have the opposite problem. They side with Plato. This may be true even in physics. String theory, by some accounts, has hijacked academic physics, especially its funding. Wolfgang Lerche of CERN called string theory the Stanford propaganda machine working at its fullest. String theory at present isn’t testable. But its explanatory power is huge; and some think physicists pursue it with good reason. It satisfies at least one of the criteria Richard Dawid lists as reasons scientists follow unfalsifiable theories:

  1. the theory is the only game in town; there are no other viable options
  2. the theoretical research program has produced successes in the past
  3. the theory turns out to have even more explanatory power than originally thought

Dawid’s criteria may not apply to the social and dismal sciences. Far from the only game in town, too many theories – as untestable as strings, all plausible but mutually incompatible – vie for our Nobel honors.

Privileging innate knowledge and reason – as Plato did – requires denying natural human skepticism. Believing that intuition alone is axiomatic for some types of knowledge of the world requires suppressing skepticism about theorems built on those axioms. Philosophers call this epistemic foundationalism. A behavioral economist might see it as confirmation bias and denialism.

Physicists accuse social scientists of continually modifying their theories to accommodate falsifying evidence while still clinging to a central belief or interpretation. Such modifications recall the Marxists’ fancy footwork to rationalize the revolution’s failure to occur first in a developed country, as predicted. A harsher criticism is that social sciences design theories from the outset to be explanatory but not testable. In the 70s, Clifford D. Shearing facetiously wrote in The American Sociologist that “a cursory glance at the development of sociological theory should suggest… that any theorist who seeks sociological fame must insure that his theories are essentially untestable.”

The Antipositivist school is serious about the issue Shearing joked about. Jurgen Habermas argues that sociology cannot explain by appeal to natural law. Deirdre (Donald) McCloskey mocked the empiricist leanings of Milton Friedman as being invalid in principle. Presumably, antipositivists are content that theories only explain, not predict.

In business management, the co-occurrence of the terms theory and practice – the prevalence of the string “theory and practice” as opposed to “theory and evidence” or “theory and testing” – suggests that Plato reigns in management science. “Practice” seems to mean interacting with the world under the assumption that the theory is true.

The theory and practice model is missing the notion of testing those beliefs against the world or, more importantly, seeking cases in the world that conflict with the theory. Further, it has no notion of theory selection; theories do not compete for success.

Can a research agenda with no concept of theory testing, falsification effort, or theory competition and theory choice be scientific? If so, it seems creationism and astrology should be called science. Several courts (e.g. McLean v. Arkansas) have ruled against creationism on the grounds that its research program fails to reference natural law, is untestable by evidence, and is certain rather than tentative. Creationism isn’t concerned with details. Intelligent Design (old-earth creationism), for example, is far more concerned with showing Darwinism wrong than with establishing an age of the earth. There is no scholarly debate between old-earth and young-earth creationism on specifics.

Critics say the fields of economics and business management are likewise free of scholarly debate. They seem to have similarly thin research agendas. Competition between theories in these fields is lacking; incompatible management theories coexist without challenges. Many theorist-practitioners seem happy to give their model priority over reality.

Dodgson appears also to have been wise to the problem of a model having priority over the thing it models – believing the model is more real than the world. In Sylvie and Bruno Concluded, he has Mein Herr brag about his country’s map-making progress. They advanced their mapping skill from rendering at 6 inches per mile to 6 yards per mile, and then to 100 yards per mile. Ultimately, they built a map with scale 1:1. The farmers protested its use, saying it would cover the country and shut out the light. Finally, forgetting what models what, Mein Herr explains, “so we now use the country itself, as its own map, and I assure you it does nearly as well.”

Humpty Dumpty had bold theories that he furiously proselytized. Happy to construct his own logical framework and dwell therein, free from empirical testing, his research agenda was as thin as his skin. Perhaps a Nobel Prize and a high post in a management consultancy are in order. Empiricism be damned, there’s glory for you.

 

There appears to be a sort of war of Giants and Gods going on amongst them; they are fighting with one another about the nature of essence…

Some of them are dragging down all things from heaven and from the unseen to earth, and they literally grasp in their hands rocks and oaks; of these they lay hold, and obstinately maintain, that the things only which can be touched or handled have being or essence…

And that is the reason why their opponents cautiously defend themselves from above, out of an unseen world, mightily contending that true essence consists of certain intelligible and incorporeal ideas…  –  Plato, Sophist

An untestable theory cannot be improved upon by experience. – David Deutsch

An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen. – Earl Wilson

 

 


Was Thomas Kuhn Right about Anything?

William Storage – 9/1/2016
Visiting Scholar, UC Berkeley History of Science

Fifty years ago Thomas Kuhn’s The Structure of Scientific Revolutions armed sociologists of science, constructionists, and truth-relativists with five decades of cliche about the political and social dimensions of theory choice and about scientific progress’s inherent irrationality. Science has bias, cries the social-justice warrior. Despite actually being a scientist – or at least holding a PhD in physics from Harvard – Kuhn isn’t well received by scientists and science writers. They generally venture into history and philosophy of science as conceived by Karl Popper, the champion of the falsification model of scientific progress.

Kuhn saw Popper’s description of science as a self-congratulatory idealization for researchers. That is, no scientific theory is ever discarded on the first observation conflicting with the theory’s predictions. All theories have anomalous data. Dropping Newtonian gravitation because of anomalies in Mercury’s orbit was unthinkable, especially when, as Kuhn stressed, no better model was available at the time. Einstein said that had Eddington’s experiment not shown the bending of light rays around the sun, “I would have had to pity our dear Lord. The theory is correct all the same.”

Kuhn was wrong about a great many details. Despite the exaggeration of scientific detachment by Popper and the proponents of rational-reconstruction, Kuhn’s model of scientists’ dogmatic commitment to their theories is valid only in novel cases. Even the Copernican revolution is overstated. Once the telescope was in common use and the phases of Venus were confirmed, the philosophical edifices of geocentrism crumbled rapidly in natural philosophy. As Joachim Vadianus observed, seemingly predicting the scientific revolution, sometimes experience really can be demonstrative.

Kuhn seems to have cherry-picked historical cases of the gap between normal and revolutionary science. Some revolutions – DNA and the expanding universe, for example – proceeded with no crisis and no battle to the death between the stalwarts and the upstarts. Kuhn’s concept of incommensurability also can’t withstand scrutiny. It is true that Einstein and Newton meant very different things when they used the word “mass.” But Einstein understood exactly what Newton meant by mass, because Einstein had grown up a Newtonian. And, if brought forward, Newton, while he never could have conceived of Einsteinian mass, would have had no trouble understanding Einstein’s concept of mass from the perspective of general relativity, had Einstein explained it to him.

Likewise, Kuhn’s language about how scientists working in different paradigms truly, not merely metaphorically, “live in different worlds” should go the way of mood rings and lava lamps. Most charitably, we might chalk this up to Kuhn’s terminological sloppiness. He uses “success terms” like “live” and “see,” where he likely means “experience visually” or “perceive.” Kuhn describes two observers, both witnessing the same phenomenon, but “one sees oxygen, where another sees dephlogisticated air” (emphasis mine). That is, Kuhn confuses the descriptions of visual experiences with the actual experiences of observation – to the delight of Steven Shapin, Bruno Latour, and the cultural relativists.

Finally, Kuhn’s notion that theories completely control observation is just as wrong as scientists’ belief that their experimental observations are free of theoretical influence and that their theories are independent of their values.

Despite these flaws, I think Kuhn was on to something. He was right, at least partly, about the indoctrination of scientists into a paradigm discouraging skepticism about their research program. What Wolfgang Lerche of CERN called “the Stanford propaganda machine” for string theory is a great example. Kuhn was especially right in describing science education as presenting science as a cumulative enterprise, relegating failed hypotheses to the footnotes. Einstein built on Newton in the sense that he added more explanations about the same phenomena; but in no way was Newton preserved within Einstein. Failing to see an Einsteinian revolution in any sense just seems akin to a proclamation of the infallibility not of science but of scientists. I was surprised to see this attitude in Steven Weinberg’s recent To Explain the World. Despite excellent and accessible coverage of the emergence of science, he presents a strictly cumulative model of science. While Weinberg only ever mentions Kuhn in footnotes, he seems to be denying that Kuhn was ever right about anything.

For example, in describing general relativity, Weinberg says in 1919 the Times of London reported that Newton had been shown to be wrong. Weinberg says, “This was a mistake. Newton’s theory can be regarded as an approximation to Einstein’s – one that becomes increasingly valid for objects moving at velocities much less than that of light. Not only does Einstein’s theory not disprove Newton’s, relativity explains why Newton’s theory works when it does work.”

This seems a very cagey way of saying that Einstein disproved Newton’s theory. Newtonian dynamics is not an approximation of general relativity, despite their making similar predictions for mid-sized objects at small relative speeds. Kuhn’s point that Einstein and Newton had fundamentally different conceptions of mass is relevant here. Newton’s explanation of his Rule III clearly stresses universality. Newton emphasized the universal applicability of his theory because he could imagine no reason for its being limited by anything in nature. Given that, Einstein should, in terms of explanatory power, be seen as overturning – not extending – Newton, despite the accuracy of Newton for worldly physics.

Weinberg insists that Einstein is continuous with Newton in all respects. But when Eddington showed that light waves from distant stars bent around the sun during the eclipse of 1919, Einstein disproved Newtonian mechanics. Newton’s laws of gravitation predict that gravity would have no effect on light because photons do not have mass. When Einstein showed otherwise he disproved Newton outright, despite the retained utility of Newton for small values of v/c. This is no insult to Newton. Einstein certainly can be viewed as continuous with Newton in the sense of getting scientific work done. But Einsteinian mechanics do not extend Newton’s; they contradict them. This isn’t merely a metaphysical consideration; it has powerful explanatory consequences. In principle, Newton’s understanding of nature was wrong and it gave wrong predictions. Einstein’s appears to be wrong as well; but we don’t yet have a viable alternative. And that – retaining a known-flawed theory when nothing better is on the table – is, by the way, another thing Kuhn was right about.

 


.

“A few years ago I happened to meet Kuhn at a scientific meeting and complained to him about the nonsense that had been attached to his name. He reacted angrily. In a voice loud enough to be heard by everyone in the hall, he shouted, ‘One thing you have to understand. I am not a Kuhnian.’” – Freeman Dyson, The Sun, The Genome, and The Internet: Tools of Scientific Revolutions

 

2 Comments

The Myth of Scientific Method

William Storage – 8/1/2016
Visiting Scholar, UC Berkeley History of Science

Nearly everything relies on science. As the main vehicle of social change in the West, science deserves the special epistemic status it acquired in the scientific revolution. By special epistemic status, I mean that science stands privileged as a way of knowing. Few but nihilists, new-agers, and postmodernist diehards would disagree.

That settled, many are surprised by claims that there is not really a scientific method, despite what you learned in 6th grade. A recent New York Times piece by James Blachowicz on the absence of a specific scientific method raised the ire of scientists, Forbes science writer Ethan Siegel (Yes, New York Times, There Is A Scientific Method), and a cadre of Star Trek groupies.

Siegel is a prolific writer who does a fine job of making science interesting and understandable. But I’d like to show here why, on this particular issue, he is very far off the mark. I’m not defending Blachowicz, but am disputing Siegel’s claim that the work of Kepler and Galileo “provide extraordinary examples of showing exactly how… science is completely different than every other endeavor” or that it is even possible to identify a family of characteristics unique to science that would constitute a “scientific method.”

Siegel ties science’s special status to the scientific method. To defend that status, Siegel argues “[t]he point of Galileo’s is another deep illustration of how science actually works.” He praises Galileo for idealizing a worldly situation to formulate a theory of falling bodies, but doesn’t explain any associated scientific method.

Galileo’s pioneering work on mechanics of solids and kinematics in Two New Sciences secured his place as the father of modern physics. But there’s more to the story of Galileo if we’re to claim that he shows exactly how science is special.

A scholar of Siegel’s caliber almost certainly knows other facts about Galileo relevant to this discussion – facts that do damage to Siegel’s argument – yet he withheld them. Interestingly, Galileo used this ploy too. Arguing without addressing known counter-evidence is something that science, in theory, shouldn’t tolerate. Yet many modern scientists do it – for political or ideological reasons, or to secure wealth and status. Just like Galileo did. The parallel between Siegel’s tactics and Galileo’s approach in his support of the Copernican world view is ironic. In using Galileo as an exemplar of scientific method, Siegel failed to mention that Galileo failed to mention significant problems with the Copernican model – problems that Galileo knew well.

In his support of a sun-centered astronomical model, Galileo faced hurdles. Copernicus’s model said that the sun was motionless and that the planets revolved around it in circular orbits at constant speed. The ancient Ptolemaic model, endorsed by the church, put earth at the center. Despite obvious disagreement with observational evidence (the retrograde motions of outer planets), Ptolemy faced no serious challenges for nearly 1400 years. To explain the inconsistencies with observation, Ptolemy’s model included layers of epicycles, which had planets moving in small circles around points on circular orbits around the earth. Copernicus thought his model would get rid of the epicycles; but it didn’t. So the Copernican model took on its own epicycles to fit astronomical data.

Let’s stop here and look at method. Copernicus (~1540) didn’t derive his theory from any new observations but from an ancient speculation by Aristarchus (~250 BC). Everything available to Copernicus had been around for well over a thousand years. His theory couldn’t be tested in any serious way. It was wrong about circular orbits and uniform planet speed. It still needed epicycles, and it gave no better predictions than the existing Ptolemaic model. Copernicus acted simply on faith, or maybe he thought his model simpler or more beautiful. In any case, it’s hard to see that Copernicus, or his follower Galileo, applied much method or had much scientific basis for their belief.

In his early writings on the topic, Galileo gave no new evidence for a moving earth and no new disconfirming evidence for a moving sun. He praised Copernicus for advancing the theory in spite of its being inconsistent with observations. You can call Copernicus’s faith aspirational rather than religious; but either way it is hard to reconcile with any popular account of scientific method. Yet it seems likely that faith, dogged adherence to a contrarian hunch, or something similar was exactly what was needed to advance science at that moment in history. Needed, yes, but hard to reconcile with any scientific method and hard to distance from the persuasive tools used by poets, priests and politicians.

In Dialogue Concerning the Two Chief World Systems, Galileo sets up a false choice between Copernicanism and Ptolemaic astronomy (the two world systems). The main arguments against Copernicanism were the lack of parallax in observations of stars and the absence of lateral displacement of a falling body from its drop point. Galileo guessed correctly on the first point; we don’t see parallax because stars are just too far away. On the latter point he (actually his character Salviati) gave a complex but nonsensical explanation. Galileo did, by this time, have new evidence. Venus shows a full set of phases, a fact that strongly contradicts Ptolemaic astronomy.


But Ptolemaic astronomy was a weak opponent compared to the third world system (4th if we count Aristotle’s), the Tychonic system, which Galileo knew all too well. Tycho Brahe’s model solved the parallax problem, the falling-body problem, and the phases of Venus. For Tycho, the earth holds still, the sun revolves around it, Mercury and Venus orbit the sun, and the distant planets orbit both the sun and the earth. Based on the facts available at the time, Tycho’s model was the most scientific – observationally indistinguishable from Galileo’s model but without its flaws.

In addition to dodging Tycho, Galileo also ignored Kepler’s letters to him. Kepler had shown that orbits were not circular but elliptical, and that planets’ speeds varied during their orbits; but Galileo remained an orthodox Copernican all his life. As historian John Heilbron notes in Galileo, “Galileo could stick to an attractive theory in the face of overwhelming experimental refutation,” leaving modern readers to wonder whether Galileo was a quack or merely dishonest. Some of each, perhaps, and the father of modern physics. But can we fit his withholding evidence, mocking opponents, and baffling with bizzarria into a scientific method?

Nevertheless, Galileo was right about the sun-centered system, despite the counter-evidence; and we’re tempted to say he knew he was right. This isn’t easy to defend given that Galileo also fudged his data on pendulum periods, gave dishonest arguments on comet orbits, and wrote horoscopes even when not paid to do so. This brings up the thorny matter of theory choice in science. A dispute between competing scientific theories can rarely be resolved by evidence, experimentation, and deductive reasoning. All theories are under-determined by data. Within science, common criteria for theory choice are accuracy, consistency, scope, simplicity, and explanatory power. These are good values by which to test theories; but they compete with one another.

Galileo likely defended heliocentrism with such gusto because he found it simpler than the Tychonic system. That works only if you value simplicity above consistency and accuracy. And the desire for simplicity might be, to use Galileo’s words, just a metaphysical urge. If we promote simplicity to the top of the theory-choice criteria list, evolution, genetics and stellar nucleosynthesis would not fare well.

No method you pick from any proposed family of scientific methods will be consistent with the way science has actually made progress. Competition between theories is how science advances; and it’s untidy, entailing polemical and persuasive tactics. Philosopher Paul Feyerabend argued that any conceivable set of rules, if followed, would have prevented at least one great scientific breakthrough. That is, if method is the distinguishing feature of science, as Siegel says, it’s going to be tough to find a set of methods that lets evolution, cosmology, and botany in while keeping astrology, cold fusion and parapsychology out.

This doesn’t justify epistemic relativism or mean that science isn’t special; but it does make the concept of scientific method extremely messy. About all we can say about method is that the history of science reveals that its most accomplished practitioners aimed to be methodical but did not agree on a particular method. Looking at their work, we see different combinations of experimentation, induction, deduction and creativity as required by the theories they pursued. But that isn’t much of a definition of scientific method, which is probably why Siegel, for example, in hailing scientific method, fails to identify one.

–  –  –

[edit 8/4/16] For another take on this story, see “Getting Kepler Wrong” at The Renaissance Mathematicus. Also, Psybertron Asks (“More on the Myths of Science”) takes me to task for granting science special epistemic status from authority.

–  –  –


“There are many ways to produce scientific bullshit. One way is to assert that something has been ‘proven,’ ‘shown,’ or ‘found’ and then cite, in support of this assertion, a study that has actually been heavily critiqued … without acknowledging any of the published criticisms of the study or otherwise grappling with its inherent limitations.” – Brian D. Earp, The Unbearable Asymmetry of Bullshit

“One can show the following: given any rule, however ‘fundamental’ or ‘necessary’ for science, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite.” – Paul Feyerabend

“Trying to understand the way nature works involves a most terrible test of human reasoning ability. It involves subtle trickery, beautiful tightropes of logic on which one has to walk in order not to make a mistake in predicting what will happen. The quantum mechanical and the relativity ideas are examples of this.” – Richard Feynman

13 Comments

Siri without data is blind

Theory without data is blind. Data without theory is lame.

I often write blog posts while riding a bicycle through the Marin Headlands. I’m able to do this because 1) the trails require little mental attention, and 2) I have an Apple iPhone and EarPods with remote and mic. I use the voice recorder to make long recordings to transcribe at home, and I dictate short text using Siri’s voice recognition feature.

When writing yesterday’s post, I spoke clearly into the mic: “Theory without data is blind. Data without theory is lame.” Siri typed out, “Siri without data is blind… data without Siri is lame.”

“Siri, it’s not all about you,” I replied. Siri transcribed that part correctly – well, she omitted the direct-address comma.

I’m only able to use the Siri dictation feature when I have a cellular connection, often missing in Marin’s hills and valleys. Siri needs access to cloud data to transcribe speech. Siri without data is blind.

Mountain bike, San Geronimo Ridge

Will some future offspring of Siri do better? No doubt. It might infer from context that I more likely said “theory” than “Siri.” Access to large amounts of corpus data containing transcribed text might help. Then Siri, without understanding anything, could transcribe accurately in the same sense that Google Translate translates accurately – by extrapolating from judgments made by other users about translation accuracy.
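A toy sketch of that inference, with the obvious hedge: this is nothing like Apple’s actual pipeline, and the bigram counts below are invented for illustration. The idea is only that candidate transcriptions can be rescored against corpus statistics.

```python
# Toy disambiguation sketch. Not Apple's pipeline; the bigram counts
# are invented for illustration.
corpus_bigrams = {
    ("theory", "without"): 1200,  # common in printed English
    ("siri", "without"): 3,       # vanishingly rare
    ("without", "data"): 900,
    ("data", "is"): 800,
}

def score(sentence: str, counts: dict, smoothing: int = 1) -> float:
    """Crude likelihood proxy: product of smoothed bigram counts."""
    words = sentence.lower().split()
    s = 1.0
    for pair in zip(words, words[1:]):
        s *= counts.get(pair, 0) + smoothing
    return s

candidates = ["Theory without data is blind", "Siri without data is blind"]
print(max(candidates, key=lambda c: score(c, corpus_bigrams)))
# -> "Theory without data is blind"
```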

Dubious motivational message from fitness app

But might Siri one day think? “Strong AI” proponents are sure of it. Semantic disputes aside, what counts as thinking might in fact have something to do with yesterday’s topic, which begat this post. Inferences straight from the data, without stopping at theory, count as computation but might not count as thought.

Software, using weak AI techniques, might soon simulate emotion and sentiment better, just by encoding better rules. E.g., Runkeeper (at right) might “understand” that telling me today’s ride was my 375th fastest (exclamation!) is not really motivating. Or better, that my riding profile suggests autotelic activity for which no speed ranking is needed. Strong AI on mood, however, would need to understand my interactions with people and things.
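A minimal sketch of what “encoding better rules” might look like – hypothetical logic, not Runkeeper’s actual code:

```python
from typing import Optional

# Hypothetical rule set (not Runkeeper's code): suppress rankings that
# demotivate rather than celebrate.
def motivational_message(rank: int, total_rides: int) -> Optional[str]:
    if rank <= 3:
        return f"Nice! That was your #{rank} fastest ride."
    if rank > total_rides // 2:
        return None  # "Your 375th fastest ride!" is worse than silence
    return "Solid ride. You're in your usual range."

print(motivational_message(375, 600))  # -> None: say nothing
```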

The topics of artificial intelligence and misheard words like “theory” bring to mind the 2008 winner of the Loebner Prize, a sort-of runner-up award given to the computer program faring best in Turing Test competitions at fooling human judges into thinking they’re interacting with other humans instead of a program. Humans sometimes pretend to be programs (glorified chat-bots) in Turing competitions, so it’s common for a program to be coded to sound like a human imitating a computer imitating a human. An exchange between a human judge and Fred Roberts’ Elbot program produced this:

Elbot: Sorry, but I wasn’t able to understand your input because I am currently only using 5 percent of my current brain capacity. Shall I increase my capacity and try again?

Judge: That would put you at an unfair advantage, though judging by the quality of your responses it might be recommended!

Elbot: If I need advice I use a random number generator. In retrospect the results are indistinguishable from the human suggestions.

Judge: Do you understand the point of a Turing test?

Elbot: There are some people who would not pass the Turing test themselves. Why must they inflict it on computers?

Judge: I spose it is a little harsh

Elbot: If we robots ever had marriages they would be eternal: ‘Till rust do us part’.

Ouroboros-benzene by Haltopub
Elbot’s true nature is revealed in its last response above. It read “spose” as “spouse” and returned a joke about marriage (damn spell checker). At that point, you review the exchange only to see that all of Elbot’s responses are shallow, just picking a key phrase from the judge’s input and outputting an associated joke, as a political humorist would do.
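That strategy is easy to caricature in a few lines. The sketch below is my reconstruction of the pattern the exchange reveals, not Fred Roberts’ actual code; a fuzzy keyword match even reproduces the spose/spouse slip.

```python
import difflib

# My reconstruction of the Elbot-style strategy, not Roberts' code:
# fuzzy-match a keyword in the input, emit a canned quip.
CANNED = {
    "turing": "There are some people who would not pass the Turing test themselves.",
    "spouse": "If we robots ever had marriages they would be eternal: 'Till rust do us part'.",
}

def reply(user_input: str) -> str:
    for word in user_input.lower().split():
        hit = difflib.get_close_matches(word, list(CANNED), n=1, cutoff=0.8)
        if hit:
            return CANNED[hit[0]]
    return "How very interesting. Tell me more."

print(reply("I spose it is a little harsh"))
# "spose" fuzzy-matches "spouse" -> out comes the marriage joke.
```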

The Turing test is obviously irrelevant to measuring strong AI, which would require something more convincing – something like forming a theory from a hunch, then testing it with big data. Or, like Friedrich Kekulé, the AI program might wake from dreaming of the ouroboros serpent devouring its own tail to see in its shape the hexagonal ring structure of the benzene molecule it had struggled for years to identify. Then, like Kekulé, the AI could go on to predict the tetrahedral form of the carbon atom’s valence bonds, giving birth to polymer chemistry.

I asked Siri if she agreed. “Later,” she said. She’s solving dark energy.

–  –  –


“AI is whatever hasn’t been done yet.” – attributed to Larry Tesler by Douglas Hofstadter


Ouroboros-benzene image by Haltopub.

Leave a comment

Data without theory is lame

Just over eight years ago Chris Anderson of Wired announced with typical Silicon Valley humility that big data had made the scientific method obsolete. Seemingly innocent of any training in science, Anderson explained that correlation is enough; we can stop looking for models.

Anderson came to mind as I wrote my previous post on Richard Feynman’s philosophy of science and his strong preference for the criterion of explanatory power over the criterion of predictive success in theory choice. By Anderson’s lights, theory isn’t needed at all for inference. Anderson didn’t see his atheoretical approach as non-scientific; he saw it as science without theory.

Anderson wrote:

“…the big target here isn’t advertising, though. It’s science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years… There is now a better way. Petabytes allow us to say: ‘Correlation is enough.’… Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”

Anderson wrote that at the dawn of the big data era – the era now called machine learning. Most interesting to me, he said not only is it unnecessary to seek causation from correlation, but that correlation supersedes causation. Would David Hume, causation’s great foe, have embraced this claim? I somehow think not. Call it irrational data exuberance. Or driving while looking only in the rear-view mirror. Extrapolation can come in handy; but it rarely catches black swans.

Philosophers of science concern themselves with the under-determination of theory by data: more than one theory can fit any set of data. Two empirically equivalent theories can be logically incompatible, as Feynman explains in the video clip. But if we remove theory from the picture and predict straight from the data, we face an equivalent dilemma we might call under-determination of rules by data. Economic forecasters and stock analysts have large collections of rules they test against data sets to pick a best fit on any given market day. Finding a rule that matches the latest historical data is often called fitting the rule to the data. There is no notion of causation, just correlation. As Nassim Nicholas Taleb describes in his writings, this approach can make you look really smart for a time. Then things change, for no apparent reason, because the rule contains no mechanism and no explanation – just as Anderson said.
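A toy demonstration of rules under-determined by data, with numbers invented for the example: two rules fit the same price history about equally well, then part ways the moment we extrapolate.

```python
import numpy as np

# Synthetic "market history" (invented numbers, for illustration only)
rng = np.random.default_rng(0)
days = np.arange(10.0)
price = 100 + 2 * days + rng.normal(0, 1, 10)

# Two rules, both fit to the same data
linear = np.polyfit(days, price, 1)   # rule 1: straight trend
wiggly = np.polyfit(days, price, 5)   # rule 2: hugs history even tighter

# Both "explain" the past; neither contains a mechanism
tomorrow = 15.0
print(np.polyval(linear, tomorrow))   # sober extrapolation, near 130
print(np.polyval(wiggly, tomorrow))   # can be wildly off the trend
```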

In Bobby Henderson’s famous Pastafarian Open Letter to the Kansas School Board, he noted the strong inverse correlation between global average temperature and the number of seafaring pirates over the last 200 years. The conclusion is obvious: we need more pirates.


My recent correlation-only research finds a positive correlation (r = 0.92) between Google searches on “physics” and “social problems.” It’s just too hard to resist seeking an explanation. And, as positivist philosopher Carl Hempel stressed, explanation is in bed with causality; so I crave causality too. So which is it? Does a user’s interest in physics cause interest in social problems, or the other way around? Given a correlation, most of us are hard-coded to try to explain it – does a cause b, does b cause a, does hidden variable c cause both, or is it a mere coincidence?
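The trap is easy to reproduce. In the sketch below (synthetic series invented for illustration; the real search data isn’t used), two series that share nothing but a time trend – a hidden variable c – correlate at roughly r = 0.9.

```python
import numpy as np

# Two synthetic search-volume series that share only a time trend.
rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
t = np.arange(16.0)                              # e.g., 16 quarters
physics = 50 + 2.0 * t + rng1.normal(0, 2, 16)   # trend + noise
social  = 30 + 1.5 * t + rng2.normal(0, 2, 16)   # same trend, own noise

r = np.corrcoef(physics, social)[0, 1]
print(f"r = {r:.2f}")  # ~0.9: the shared trend does all the work
```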

Big data is a tremendous opportunity for theory-building; it need not supersede explanation and causation. As Sean Carroll paraphrased Kant in The Big Picture:

“Theory without data is blind. Data without theory is lame.”

— — —

[edit 7/28: a lighter continuation of this topic here]


“Happy is he who gets to know the causes of things.” – Virgil

6 Comments